81% think ChatGPT is a security risk, survey finds

An overwhelming majority of people responding to a Malwarebytes survey worry that ChatGPT is a security risk, with many also finding it deeply untrustworthy.

ChatGPT has been a polarizing invention, with responses to the artificial intelligence (AI) chatbot swinging between excitement and fear. Now, a new survey shows that disillusionment with ChatGPT could be hitting new highs.

According to a survey from security firm Malwarebytes, 81% of its respondents are worried about the security and safety risks posed by ChatGPT. It’s a remarkable finding and suggests that people are becoming increasingly concerned by the nefarious acts OpenAI’s chatbot is apparently capable of pulling off.

A laptop screen shows the home page for ChatGPT, OpenAI's artificial intelligence chatbot. (Rolf van Root / Unsplash)

Malwarebytes asked its newsletter subscribers to respond to the phrase “I am concerned about the possible security and/or safety risks posed by ChatGPT,” a sentiment with which 81% agreed. What’s more, 51% disagreed with the statement “ChatGPT and other AI tools will improve Internet safety” while just 7% agreed, suggesting there is widespread concern over the impact ChatGPT will have on online security.

The discontent with AI chatbots was not limited to security issues. Only 12% of surveyed individuals agreed with the phrase “The information produced by ChatGPT is accurate,” while 55% of people disagreed. As many as 63% of people did not trust ChatGPT’s responses, with a mere 10% finding them reliable.

Generating malware

A person using a laptop with code visible on the display. (Sora Shimazaki / Pexels)

This kind of response is not entirely surprising, given the spate of high-profile misuses ChatGPT has been tied to in recent months. We've seen it put to all manner of questionable ends, from writing malware to generating free Windows 11 keys for users.

In May 2023, we spoke to various security experts about the threats posed by ChatGPT. According to Martin Zugec, the Technical Solutions Director at Bitdefender, “the quality of malware code produced by chatbots tends to be low, making it a less attractive option for experienced malware writers who can find better examples in public code repositories.”

Still, that hasn't stemmed public anxiety over what ChatGPT could be used for. People are clearly worried that even novice malware writers could task AI chatbots with dreaming up a devastating virus or an unbreakable piece of ransomware, even if some security experts consider that unlikely.

Pause on development

A person sits in front of a laptop. On the laptop screen is the home page for OpenAI's ChatGPT artificial intelligence chatbot. (Viralyft / Unsplash)

So, what can be done? When Malwarebytes asked its readers what they thought about the statement “Work on ChatGPT and other AI tools should be paused until regulations can catch up,” 52% agreed, while a little under 24% disagreed.

This call from the public echoes several open letters from prominent tech leaders urging a pause on AI chatbot development due to its "large-scale risks." Perhaps it's time decision-makers started to take heed.


Alex Blake
