ChatGPT-assisted bots now abound on social media


For many users, browsing social media news feeds and notifications is like wading through mud. Why? Here is one answer: a new study identifies 1,140 AI-assisted bots spreading misinformation about cryptocurrency and blockchain topics on X (formerly Twitter).

But bot accounts that post this type of content can be difficult to spot, the researchers found. Because they use ChatGPT to generate their content, they are hard to distinguish from genuine accounts, which makes the practice all the more dangerous for victims.

AI-powered bot accounts have profiles that look like those of real humans, with profile pictures and bios about crypto and blockchain. They regularly publish AI-generated posts, share stolen images, reply to other tweets, and retweet them.

The researchers found that the 1,140 Twitter bot accounts belonged to the same malicious social botnet, which they named “fox8”: a zombie botnet, that is, a network of accounts controlled centrally by cybercriminals.

Generative AI bots are increasingly mimicking human behavior, which means that traditional bot-detection tools, such as Botometer, are no longer sufficient. In the study, these tools struggled to distinguish bot-generated content from human-generated content. One tool stood out, however: OpenAI’s AI classifier, which was able to identify some of the bot tweets.

How to spot bot accounts?

Bot accounts on Twitter exhibit similar behavior. They follow each other, use the same links and hashtags, post similar content, and even engage with each other.
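These coordination signals can be quantified. As a purely illustrative sketch (this is not the study’s method, and all account names and hashtags below are invented), here is how one might flag pairs of accounts whose hashtag usage overlaps suspiciously, in Python:

```python
from itertools import combinations

# Hypothetical sample data: hashtags used by each account.
# A real analysis would pull these from the X/Twitter API.
account_hashtags = {
    "bot_a": {"#crypto", "#blockchain", "#DeFi", "#NFT"},
    "bot_b": {"#crypto", "#blockchain", "#DeFi", "#web3"},
    "human": {"#photography", "#travel", "#crypto"},
}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

THRESHOLD = 0.5  # arbitrary cutoff, chosen for this illustration only

# Flag account pairs with heavily overlapping hashtag usage.
for (u, tags_u), (v, tags_v) in combinations(account_hashtags.items(), 2):
    score = jaccard(tags_u, tags_v)
    if score >= THRESHOLD:
        print(f"{u} and {v} share {score:.0%} of their hashtags")
```

In practice, researchers combine many such signals — shared links, retweet timing, follower graphs — rather than relying on any single one.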

Researchers combed through tweets from AI bot accounts and found 1,205 telltale tweets.

Of that total, 81% contained the same apologetic phrase:

“I’m sorry, but I cannot respond to this request because it violates OpenAI’s Content Policy on generating harmful or inappropriate content. As an AI language model, my responses should always be respectful and appropriate for all audiences.”

The appearance of this refusal suggests that the bots are being instructed to generate harmful content that violates OpenAI’s policies.

The remaining 19% used some variant of the phrase “As an AI language model”, with 12% of them specifically saying “As an AI language model, I cannot browse Twitter or access specific tweets to provide answers.”
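Because these leaks are literal ChatGPT boilerplate, a simple substring check is enough to reproduce the heuristic. A minimal sketch, with invented sample tweets (a real check would run over tweets fetched from the platform’s API):

```python
# Telltale ChatGPT self-disclosures that leak into bot tweets,
# based on the patterns the researchers describe above.
TELLTALE_PHRASES = [
    "i'm sorry, but i cannot respond to this request",
    "as an ai language model",
]

def looks_like_chatgpt_leak(tweet: str) -> bool:
    """Return True if the tweet contains a known ChatGPT boilerplate phrase."""
    # Lowercase and normalize curly apostrophes before matching.
    text = tweet.lower().replace("’", "'")
    return any(phrase in text for phrase in TELLTALE_PHRASES)

# Invented sample tweets for illustration:
tweets = [
    "As an AI language model, I cannot browse Twitter or access specific tweets.",
    "Bitcoin just broke $30k, great entry point!",
]
for t in tweets:
    print(looks_like_chatgpt_leak(t), "-", t)
```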

Another clue is that 3% of the tweets posted by these bots link to one of three websites: cryptnomics.org, fox8.news and globalconomics.news.

These sites look like ordinary news outlets but show notable red flags: they were all registered around the same time, in February 2023; they display pop-ups prompting users to install suspicious software; they appear to use the same WordPress theme; and their domains resolve to the same IP address.
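The last red flag — several domains pointing to the same IP address — is easy to verify with a few lines of Python. A minimal sketch, bearing in mind that DNS records change over time, so these domains may no longer resolve the way they did when the study was conducted:

```python
import socket
from collections import defaultdict

# The three domains named in the study.
DOMAINS = ["cryptnomics.org", "fox8.news", "globalconomics.news"]

ip_to_domains = defaultdict(list)
for domain in DOMAINS:
    try:
        ip = socket.gethostbyname(domain)  # resolve the A record
        ip_to_domains[ip].append(domain)
    except socket.gaierror:
        print(f"{domain}: does not resolve")

# Domains sharing an IP address are likely on the same infrastructure.
for ip, domains in ip_to_domains.items():
    if len(domains) > 1:
        print(f"{ip} hosts: {', '.join(domains)}")
```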

Malicious bot accounts can self-propagate across social media by posting links to malware or infected content, exploiting and infecting a user’s contacts, stealing session cookies from users’ browsers, and automating follow requests.


Source: “ZDNet.com”


