AI and cybersecurity: Cybermalveillance’s call for calm


No revolution for the moment, but most likely an increase in volumes to be processed and a worsening of trends already observed. The Cybermalveillance public interest group revisits the issues surrounding artificial intelligence in its threat report, which has just been published.

A welcome clarification after a 2023 marked by the emergence of generative artificial intelligence. The technology sparked widespread speculation that it would be co-opted by cybercriminals. As Cybermalveillance points out, a number of tools have indeed been developed for them, such as WormGPT, Fraud GPT or ThreatGPT.

“They didn’t wait for AI”

But for the moment this does not herald “an upheaval in the panorama of cyber threats”. The organization, responsible for assisting individuals, businesses and local authorities, remains cautious: while no case of malicious activity attributable to an artificial intelligence tool has been recorded in France, such attribution “will, however, often remain difficult to determine”.

“Cybercriminals did not wait for AI; they use it to do things better or faster, without us identifying new threats,” explains Jean-Jacques Latour, head of expertise at the public interest group. These potential productivity gains fit into a broader context.

Cybercrime has been democratizing for several years, becoming “increasingly accessible to new players with low technical skills, through specialized services marketed online”, the organization notes in its report.

Critical thinking

However, even if they do not amount to new threats, AI-based tools are expected to enable more sophisticated schemes. According to the South China Morning Post, an employee fell victim to an elaborate fraud, pressured into making a transfer of around twenty million euros. The employee had been deceived during a video conference in which every participant except him was simulated by AI.

“AI will confuse our senses; we will have to learn to develop our critical thinking,” observes Jean-Jacques Latour. But this type of tool should also “allow us to identify malicious activity more quickly”.

Not that this changes the paradigm either, as Vincent Strubel recently reminded us. The head of ANSSI had warned against “fear mongers”, pointing instead to the main issue at stake: the security of AI-based tools themselves.
