A ChatGPT hallucination lands OpenAI in court. Here is what happened


Generative AI models such as ChatGPT are known to produce errors, or “hallucinations”. That is why they usually come with a prominently displayed disclaimer pointing out this problem.

But what would you do if, despite these warnings, you saw an AI chatbot spreading misinformation about you?

Mark Walters, an American radio host, discovered that ChatGPT was spreading false information about him, accusing him of embezzling money. He responded by suing OpenAI in what Bloomberg Law reports is the first defamation lawsuit against the company.

A false summary

According to court documents, the misinformation began when Fred Riehl, editor of a firearms publication called AmmoLand, asked ChatGPT for a summary of a legal case (Second Amendment Foundation v. Ferguson) to support an article he was writing.

ChatGPT provided Riehl with a case summary stating that Walters was accused of “defrauding and embezzling funds for personal expenses without authorization or reimbursement” and of “manipulating financial documents and bank statements to conceal his activities”, all while serving as an organization’s treasurer and chief financial officer.

The problem: Mark Walters was never involved in that lawsuit. He was never accused of defrauding or embezzling funds, and he never served as the organization’s treasurer or chief financial officer.

Walters is therefore seeking damages from OpenAI.

The lawsuit raises two questions: Who should be held liable when an AI generates defamatory content? And are a website’s disclaimers about hallucinations enough to exclude liability when someone is harmed?

The outcome of this lawsuit could therefore play a significant role in setting a standard for the field of generative AI.


Source: ZDNet.com


