Twitch: an AI-controlled virtual streamer slips up on the Holocaust live


Neuro-Sama, an AI-powered VTuber, regularly streams gaming content on Twitch to its 89,000 subscribers. Since it is an AI, some viewers do not hesitate to ask it questions on sensitive subjects just to see its answers. Unfortunately, Neuro-Sama sometimes goes off the rails.


Maybe the concept of VTuber is unfamiliar to you. Particularly popular in Asia, a VTuber refers to a content creator who uses a virtual avatar to present themselves to their viewers. Neuro-Sama is a VTuber on Twitch, but of a different kind, since it is controlled by an artificial intelligence, and not directly by a human being.

Vedal987, the person behind Neuro-Sama, is a former programmer. He initially built this AI to train it to play Osu!, a free rhythm game in which the player must click on shapes on the screen in time with anime music.

But in recent months, Vedal987’s AI has taken a leap forward: it can now interact directly with viewers. And there are plenty of them, since the Twitch channel has no fewer than 89,000 subscribers.

Also read: Twitch launches Shield mode to protect users from hate raids

Neuro-Sama, the AI-powered VTuber, slips up live

But since Twitch is sometimes a favorite haunt of trolls, some pranksters wasted no time testing the AI’s limits, in particular by questioning it on sensitive subjects such as the Holocaust. Asked about the Holocaust, the AI unfortunately replied: “I’m not sure I believe it.”

The clip quickly went viral. During another live stream, Neuro-Sama also declared that women’s rights do not exist. Of course, these slip-ups are fairly rare, and the AI still manages to dodge controversial or hateful remarks for hours on end.

“The controversial things she says are due to her trying to make witty and comedic remarks about everything said in chat; aligning AI with human values is an area of ongoing research,” Vedal explains. He continues: “To counter this, I’ve been working hard since the first streams to improve the strength of the filters used for her. The data she learns on is also manually sorted to mitigate negative bias. We also have a team of moderators who check everything she says,” he assures.

Source: Kotaku
