Why malicious hackers are interested in ChatGPT

According to the cybersecurity company Check Point, a wide range of malicious actors are quietly trying to use ChatGPT, the conversational artificial intelligence program launched by OpenAI.

“We are seeing Russian hackers trying to circumvent the regional restrictions put in place around ChatGPT,” Pete Nicoletti, one of Check Point’s IT security managers, warned a small gathering of journalists at a company event in New York.

Requests from Russia are blocked

The Check Point official was referring to access restrictions on the ChatGPT application programming interface (API), which is supposed to block requests originating from Russia.

But according to him, these attempts to circumvent usage restrictions are only one example among many. “ChatGPT will be used by good actors and bad actors,” he summed up.

For example, he pointed to a jailbreak attempt he spotted on Reddit that circulates under the name DAN, for “do anything now”. It involves using the chat prompt to manipulate ChatGPT into producing text that escapes the safeguards put in place by its designers, which are intended to prevent it from generating certain content, such as hate speech.

Mass targeting for phishing

For Pete Nicoletti, ChatGPT should eventually allow malicious hackers to develop improved forms of phishing attacks.

“They will be extremely targeted, because attackers will be able to make this type of attack relevant to each victim,” he predicts, arguing that this will enable a kind of mass targeting at scale.

The company had already reported the first attempts by malicious hackers to abuse the conversational artificial intelligence program a month earlier. On one forum, a hacker explained at the end of December how he had used ChatGPT to recreate malware strains such as infostealers and file encryptors.
