AI is fighting fire with fire when it comes to cybersecurity
A recent survey of security experts reveals a growing interest in using generative AI to counter the increasingly advanced technology on the hacker side.
The surge of the generative AI (GenAI) industry has given bad actors a leg up when it comes to attacking vulnerable targets. Enterprises and organizations have become easier to infiltrate as the technology develops rapidly; however, businesses are also using GenAI more often as part of their own cybersecurity strategies.
A recent Splunk 2024 CISO report detailed the current state of the industry from a security standpoint, noting that board members and security experts hold differing opinions on how emerging cybersecurity threats should be handled. In the meantime, hackers are benefiting from being able to use GenAI more frequently and effectively, with bad actors using AI chatbots to develop malware and crack passwords, among other abuses.
According to the research, 32% of respondents say GenAI has made existing attacks more effective, 28% say it has increased the volume of existing attacks, and 23% say it has created entirely new types of cyber threats.
Additionally, 52% of CISOs (chief information security officers) said they would like to tackle cybersecurity challenges with comparable GenAI technologies, compared with 33% of board members. Though security professionals are looking for a change, they are still working through a learning curve, with only 41% saying security requirements are getting easier to keep up with relative to the volume and strength of cyberattacks.
In practical use, 39% of CISOs say they use GenAI tools to identify risks, 39% use them for threat intelligence analysis, and 35% use them for threat detection and prioritization.
With GenAI being used on both sides of the fight, the biggest challenge for enterprises is that CISOs and board members often don't see eye to eye, a disconnect that can ultimately put businesses at risk. The report found that only 29% of CISOs felt they had the proper budget to provide adequate cybersecurity support, while 41% of board members felt the allocated budgets were sufficient. Overall, 64% of CISOs said inadequate support contributed to the cyberattacks they ultimately had to tackle.
AI-powered cyberattacks are now CISOs' biggest concern at 36%, ahead of cyber extortion at 24% and data breaches at 23%. As mentioned, hackers quickly began using AI tools such as ChatGPT to develop malware and became so adept that government entities, including the FBI, had to take notice. Additionally, in the wrong hands, technologies such as generative adversarial network (GAN) machine learning models can be used to crack passwords in seconds.
TechRadar noted that strong passwords, multi-factor authentication (MFA), password managers, cybersecurity training, and evaluating third-party vendors for vulnerabilities are among the top priorities for keeping businesses safe from cyberattacks.