WormGPT: it’s official, AI-powered phishing is here… and it’s going to hurt


Camille Coirault

July 18, 2023 at 9:20 a.m.


Hacker © Mikhail Nilov / Pexels


AI can be put to almost any purpose, including the most reprehensible, and cybercrime is now among them. The discovery of WormGPT, an AI-based tool specialized in phishing, confirms this reality.

Discovered on underground forums, WormGPT could quickly become a major threat to Internet users. Conceived as an alternative to the GPT models (Generative Pre-trained Transformer, AIs that process natural language), the tool was designed by its creators primarily for malicious uses, including phishing and attacks on professional email accounts (business email compromise).

WormGPT, a potential new threat

This newcomer could clearly turn into a cybersecurity nightmare, not only because it is already effective, but also because it could serve as a template for developing even more powerful tools. WormGPT is a generative AI based on GPT-J-6B, an open-source model released in 2021. It is similar to OpenAI’s GPT models, though smaller and less powerful; the “J” refers to JAX, the framework used to train it, and “6B” to its 6 billion parameters. This new hacking tool was developed to facilitate the creation of personalized phishing emails, removing a major hurdle in running effective phishing campaigns: writing convincing messages. Some attackers are clearly not good at this game. WormGPT is therefore likely to trick recipients more easily and make attacks more effective.

Researchers at SlashNext, a reputable cybersecurity company, have noticed a disturbing trend: growing interest on specialized forums in generative AIs such as Google Bard and ChatGPT. These virtual meeting places let budding hackers advise one another on manipulating AI models for illegal purposes and share malicious prompts (example in the screenshot below). A whole series of new challenges is emerging for cybersecurity experts.

WormGPT prompts © SlashNext

Regulating generative AI in the fight against cybercrime

The problem with the emergence of tools like WormGPT is that mounting malicious operations online will soon be within the reach of (almost) anyone. No need to write code expertly if an AI can generate it for you, and no need to understand how to get around a given security mechanism if an AI hands you a turnkey methodology.

On the other side, legitimate AI publishers like Google and OpenAI are trying to strengthen their defenses and put anti-abuse measures in place. On this specific point, OpenAI is relatively ahead, which apparently is not yet the case for the Mountain View firm. That gap is all the more problematic given how quickly hackers are developing new tools.

WormGPT would therefore make a rather effective hacking assistant, which is all the more worrying. Just as generative AIs simplify many legitimate tasks, this one is already easing the work of hackers. AI is a form of progress that benefits nearly everyone, and hackers are no exception. It is high time the companies behind these AI models took drastic security measures, or they risk seeing their own creations backfire.

Sources: The Hacker News, Computerworld


