When It’s OK to Use AI at Work (and When It's Not)
This post is part of Lifehacker’s “Living With AI” series: We investigate the current state of AI, walk through how it can be useful (and how it can’t), and evaluate where this revolutionary tech is heading next. Read more here.
Almost as soon as ChatGPT hit the scene late last year, the world started talking about how and when to use it. Is it ethical to use generative AI at work? Is that “cheating”? Or are we simply witnessing the next big technological innovation, one that everyone will either have to embrace or risk falling behind by dragging their feet?
AI is now a part of work, whether you like it or not
AI, like anything else, is a tool first and foremost, and tools help us get more done than we could on our own—personally, my job would literally not be possible without my computer. In that regard, there’s nothing wrong, in theory, with using AI to be more productive. In fact, some work apps are already jumping on the AI bandwagon. Just look at Microsoft: The company basically defined “computing at work,” and it’s now adding AI functionality directly into its products.
Soon enough, the entire Microsoft 365 suite—including Word, PowerPoint, Excel, Teams, and more—will sport “Copilot,” the company’s AI assistant. Think of it like Clippy from back in the day, only way more useful. In Teams, you can ask the new Business Chat to draft an update about your new strategy, and the AI will pull from your meetings, emails, and other work data to write a status update for the team; in Word, you can ask the AI to draft a work proposal based on other documents, then request that it tighten up specific paragraphs you aren’t thrilled with; in Excel, you can ask Copilot to model your data in ways you couldn’t yourself; in PowerPoint, you can ask for an entire slideshow to be built from something like a Word doc.
Google has added similar tools to its Workspace suite under the name Duet AI. You might be used to Google’s autocomplete features, where the AI suggests the words and phrases you’re likely to type next in apps like Docs and Gmail. Duet AI is, in essence, ChatGPT for Google apps: You ask it to do something for you, like draft a Google Doc from a specific prompt, and it will.
These tools don’t just exist: They’re being actively built into the products we work in, and their use is encouraged. It reminds me of how Microsoft advertised Excel itself back in 1990: The ad presents spreadsheets as time-consuming, rigid, and featureless, but with Excel, you can create a working presentation in an elevator ride. We don’t see that as “cheating” at work: This is work.
Intelligently relying on AI is the same thing: Just as the Excel of 1990 extrapolated your data into cells you didn’t fill in yourself, the Excel of 2023 will answer questions about your data and execute commands you give it in plain language rather than formulas and functions. It’s a tool.
Where does ChatGPT fit into all of this?
Many of these proprietary tools are still in the works, but ChatGPT is obviously very much here. It’s also the reason for the whole “AI at work” debate (as well as the AI-at-school debate), so it’s worth thinking about.
Ultimately, though, it’s really the same conversation as above: Used appropriately, generative AI tools like ChatGPT can absolutely enhance your work without stepping into ethically gray areas. It’s not much different from using Microsoft’s Business Chat or Google’s Duet AI: You can ask ChatGPT to summarize long articles, give feedback on the tone of an email, or act as an intelligent thesaurus to boost your vocabulary.
While the free version of ChatGPT relies on training data with a cutoff date and no internet access, ChatGPT Plus can access the web. That means you can ask ChatGPT questions and prompt it to do things using current information from the internet. Bing Chat and Google Bard, on the other hand, offer AI-powered web browsing by default. It might benefit you to use these services when researching or, well, googling something for work: The AI might pick up on things you wouldn’t otherwise find yourself.
What work shouldn’t you use AI for?
Of course, there’s still an ethical line you can cross here. Tools can be used to make work better, but they can also be used to cheat. If you use the internet to hire someone else to do your job, then pass that work off as your own, you’re not using a tool to do your work better. That’s wrong. If you simply ask ChatGPT to do your job for you in its entirety, same deal.
You also have to consider your own company’s guidelines when it comes to AI and the use of outside technology. It’s possible your organization has already established these rules, given AI’s prominence over the past six months or so: Maybe your company is giving you the green light to use AI tools within reason. If so, great! But if your company decides you can’t use AI for any purpose as far as work is concerned, you might want to log out of ChatGPT during business hours.
But, let’s be real: Your company probably isn’t going to know whether or not you use AI tools if you’re using them responsibly. The bigger issue here is privacy and confidentiality, and it’s something not enough people think about when using AI in general.
In brief, generative AI tools like ChatGPT work because they’re trained on huge sets of data. But AI is far from perfect, and the more data a system has to work with, the more it can improve. By default, you’re training these systems with every prompt you give them: When you ask ChatGPT for help writing an email, it takes in the entire exchange, from how you react to its responses to the contents of the email itself.
As such, it’s a good rule of thumb to never give confidential or sensitive information to AI. An easy way to avoid trouble is to treat AI like you would your work email: Only share information with something like ChatGPT that you’d be comfortable emailing to a colleague. After all, your emails could very well be made public someday: Would you be OK with the world seeing what you said? If so, you should be fine sharing it with AI. If not, keep it away from the robots.
That said, you can opt out of training ChatGPT with your conversations. Do so, and your interactions with ChatGPT won’t be used to improve the service, and your previous chats will be deleted from OpenAI’s servers after 30 days. Still, it’s worth being just as careful: If OpenAI keeps more data than we realize and is ever breached, your work data could end up in a precarious place.