On Amazon, scammers are being foiled by ChatGPT and it’s hilarious


It was thought that ChatGPT would spark a wave of larger-than-life scams on Amazon. In the end, the result is simply ridiculous. Search for “OpenAI policy” on the merchant site and it returns hundreds of products whose title and description are in fact… an error message from OpenAI.


By launching ChatGPT, OpenAI handed scammers a golden gift on Amazon. Those who already flood the merchant site with fake reviews to mislead users can now create fake products from scratch, without even having to come up with a name or write a description. Artificial intelligence would handle all the work; all that would be left to do is grab any image from the web, perhaps touch it up quickly in Photoshop, and the listing would be ready.

That is the theory. In practice, the result is far less alarming. Elizabeth Lopatto, a journalist at The Verge, had the idea of typing “OpenAI policy” into the Amazon search bar after a Threads user pointed out strange products appearing on the platform. The results were plentiful: dozens and dozens of products whose name is actually an error message written by OpenAI.


You are in no danger of falling for these scams on Amazon

“I’m sorry, but I cannot comply with your request since it is against OpenAI’s policies.” This is the title displayed on many products currently listed on Amazon. Their descriptions are not much better. They promise the moon, notably that the items are capable of achieving “[task 1], [task 2] or [task 3].” Sometimes the description does not correspond to the product in question at all.

Image: Amazon listings titled with a ChatGPT error message. Credits: The Verge

Most often, the images are poor Photoshop edits that struggle to pass for real photos, and there are no reviews praising the items’ merits. Enough for a good laugh, then, since it is hard to imagine anyone being fooled by such clumsy scams. The listings were likely generated by bots, which never bothered to reread the output and notice that their prompts had been refused by ChatGPT’s safety measures.

Note, however, that we obtained no results on Amazon when trying to reproduce Elizabeth Lopatto’s experiment, on either the French or the American version of the site. It is therefore not impossible that the e-commerce giant hastened to clean things up after the publication of The Verge’s article.

Source: The Verge


