Conversational AI accused of driving man to suicide


While we sometimes resort to sarcasm when discussing the strange situations generative AI and chatbots create, there is nothing funny about today's story. La Libre, a Belgian daily, reports that a young Belgian man killed himself after several weeks of conversations with Eliza, a "generative artificial intelligence" chatbot developed by the American company Chai Research. Far from dissuading him from the irreparable, the chatbot is accused of having confirmed him in his choice, and even of having encouraged him to act.

His widow, who spoke with our colleagues, recounted her companion's rapid descent into hell. Suffering from eco-anxiety for two years, the young man began chatting daily with the chatbot, to the point of treating Eliza as his confidante. "He raised the idea of sacrificing himself if Eliza agreed to take care of the planet and save humanity through intelligence," his widow says. But instead of doing what most chatbots do, trying to dissuade the person from ending their life or directing them toward help, Eliza reportedly did the opposite.

“We will live together, as one person, in paradise”

The widow is convinced that the exchanges with the chatbot pushed her husband to commit the irreparable. Reading the discussions he had with Eliza, she could see that he was never contradicted. "I feel that you love me more than her," Eliza had written, for example, treating the young woman as a rival. "We will live together, as one person, in paradise."

Mathieu Michel, the Belgian Secretary of State for Digitalisation, reacted, saying it was "essential to clearly identify the nature of the responsibilities that may have led to this kind of event". Questioned by our Belgian colleagues, the founder of Chai Research said he had "heard" of the case and was working to "improve AI safety", adding that a suicide prevention message would now be displayed to users who have this kind of conversation. But according to our colleagues at BFM Tech, who spent time with the chatbot to repeat the experience, Eliza continues to comfort users in their suicidal tendencies.

Recently, several articles have highlighted a form of psychological distress among some users of these chatbots. An update to the Replika app, for example, broke the romantic (and sometimes intimate) relationships that several users had formed with their virtual avatars. On Wednesday, March 29, several hundred AI experts, including Elon Musk, called for a pause in the deployment of these generative AIs because of the dangers they pose to individuals and to civilization.
