To make ChatGPT less ‘toxic’, OpenAI reportedly hires Kenyan workers paid $2 an hour


The underside of ChatGPT moderation: OpenAI is accused of relying on Sama, an AI company that allegedly underpaid Kenyan workers to help develop its chatbot's safety systems, according to an investigation published by Time on Wednesday, January 18.

Extreme texts

Like Facebook, Google, Meta or Microsoft before it, the developer of ChatGPT called on the company Sama to identify, classify and label offensive textual content. This labeling is necessary for the chatbot's training, in order to filter out inappropriate text corpora and prevent the AI from reproducing toxic remarks. To build its knowledge, the model was trained by deep learning on a large portion of the information available on the Internet up to 2021. The algorithms were thus able to pull textual data from the depths of the Web, down to its most obscure pages.

Child sexual abuse, bestiality, murder, suicide, torture, self-harm, incest… Kenyan Sama workers had to read and label extreme texts sent by OpenAI so that the Californian organization could develop a detector built into ChatGPT.

“It was torture”

“It was torture. You are going to read a number of statements like this throughout the week. By the time we get to Friday, you are disturbed from having thought of this image,” a Kenyan ex-worker from Sama told Time, traumatized after reading a description of a man having sex with a dog in the presence of a young child. For employees mentally shaken by this work, the company simply scheduled “wellness” sessions with counselors.


In its investigation, Time also details the alleged precarious conditions imposed by Sama over the course of its three contracts with OpenAI. The workers behind ChatGPT were reportedly paid between $1.32 and $2 an hour, depending on seniority and performance. Yet the agreements between the two companies stipulated an hourly rate of $12.50.

A turbulent end to the collaboration

The collaboration reportedly ended abruptly after a new order from OpenAI. In February 2022, OpenAI commissioned sexual and violent images from Sama to feed an internal AI safety tool. A first batch of 1,400 images was reportedly delivered, several of which were classified in category “C4”, for “child sexual abuse”, according to Time. The deal between the two companies quickly ended, with Sama citing “additional instructions” referring to “some illegal categories” requested by OpenAI.

“We engaged Sama as part of our ongoing work to create safer AI systems and prevent harmful outputs. We never intended to collect any content in the C4 category. This content is not necessary to feed our pre-training filters, and we instruct our employees to actively avoid it,” the Californian startup responded.

Regarding the remuneration of the Kenyan workers, OpenAI passes responsibility back to Sama and says it takes “very seriously the mental health” of its employees and its subcontractors. For its part, the content moderation company cites wages between $1.46 and $3.74 per hour after taxes. “The $12.50 rate for the project covers all costs, such as infrastructure expenses, as well as salary and benefits for associates and for fully dedicated QA analysts and team leaders,” says a spokesperson.


