No, ChatGPT is not the perfect tool to write malicious code


Since its launch this winter, ChatGPT, OpenAI’s formidable conversational artificial intelligence, has generated enormous enthusiasm. Sometimes too much of it, as shown by the alarm of Check Point researchers over cybercriminals’ early experiments with the chatbot. Their fear? That ChatGPT could become the tool that opens up malware programming to a wider, less skilled audience.

Fortunately, renowned security researcher Marcus Hutchins recently debunked those fears in a blog post. The young Englishman knows what he is talking about: he started out writing malware before returning to the right side of the law, and he became a global hero after stopping the self-replicating WannaCry ransomware by registering its kill-switch domain.

Confused as the parameters pile up

For the security researcher, then, ChatGPT is far from capable of creating fully functional malware.

“If you ask ChatGPT to generate a snippet of Python code to load a web page, it can do that,” notes Marcus Hutchins. “If you ask it to generate a file encryptor, it can probably do that too. But the more parameters you add, the more confusing it gets.”
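
To picture the sort of trivial task the researcher has in mind, here is a minimal sketch (our own illustration, not an example from Hutchins’ post) of a Python snippet that loads a web page, using only the standard library and a placeholder URL:

    import urllib.request

    def load_page(url: str) -> str:
        """Download a web page and return its body as text."""
        with urllib.request.urlopen(url) as response:
            return response.read().decode("utf-8", errors="replace")

    # Example call; https://example.com is just a placeholder address.
    print(load_page("https://example.com")[:200])

This is exactly the kind of snippet a search engine would also turn up, which is the point Hutchins goes on to make.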

The security researcher also notes that the character limit of the chat interface prevents feeding in enough detail to generate anything beyond snippets that can already be found through a Google search. Granted, the chatbot might save an experienced coder some time. But a novice would have a hard time spotting the errors in the code the chatbot writes, which means it will probably not be the tool that democratizes the programming of malicious software.

Not just code

For Marcus Hutchins, this poor reading of ChatGPT’s real capabilities is fueled by a more general misunderstanding of computer programming. Programming is not just a matter of writing code; it is first of all a matter of understanding how to reach the goal one has set. In short, a program is not only lines of code following one another, but also an architecture that has to be designed.

Beyond programming itself, the security researcher is far from convinced by the prospects that conversational artificial intelligence supposedly offers cybercriminals. On the generation of phishing e-mails, for instance, the young Englishman quips that Google already launched a sophisticated artificial intelligence service seven years ago with its famous online translator.

“Writing phishing emails has never been difficult and doesn’t require artificial intelligence,” he recalls. After all, such messages can be put together very simply by copying the HTML code of an authentic e-mail. No chatbot is needed to copy and paste.




