In Burma, the limits of Facebook's measures against hate speech

Two complaints – in the United Kingdom and the United States – and damages estimated by the plaintiffs at more than 130 billion euros: in early December, NGOs representing Rohingya exiles lodged complaints against Meta, the parent company of Facebook, accusing the social network's mishandling of hate speech of having caused thousands of deaths in Burma.

Since 2017, violent repression, mainly by the military, has targeted the Rohingya, a Muslim minority in this predominantly Buddhist country; around 750,000 people have fled Burma and at least 10,000 have been killed, according to a United Nations report published in September 2018, which found that "these crimes (…) were genocidal in nature".

But the Burmese army was not the only organization blamed by the United Nations: the same report accused Facebook, hugely popular in Burma, of having played a "determining" role in the atrocities by allowing hate speech against Muslims to proliferate. Weeks earlier, a damning Reuters investigation had shown that the platform's moderation in the country was deeply flawed. Between technical problems and an insufficient number of Burmese-speaking moderators, much of the content calling for the killing of Rohingya remained easily accessible and widely shared.


New tools

Since then, Meta claims to have greatly strengthened its capabilities in the country. "Our approach in Burma is fundamentally different today from what it was in 2017," a company spokesperson told Le Monde:

"The allegations that we did not invest in safety in the country are false. We have put together a team of Burmese-speaking employees, banned the Burmese military, dismantled networks that sought to manipulate public debate, and taken action against disinformation."

These measures did not put an end to the problems, as shown by the "Facebook Files", the internal documents copied by former employee Frances Haugen and passed on to several newsrooms, including Le Monde, by a member of the United States Congress staff. Several documents, dating from mid-2020, show in particular that the automated tools put in place by Facebook to identify unlawful messages appear insufficient.

A table summarizing the tools deployed in several countries shows that at the time, three years after the start of the massacres, Burma still did not have a classifier for disinformation. Classifiers are machine learning tools that help Facebook detect problematic messages. Widely used by the social network around the world, they must be trained separately for each language. A detection system for disinformation in Burmese has since been put in place, the social network says, in addition to other existing classifiers, including one for hate speech.
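To illustrate why such classifiers have to be built language by language, here is a minimal sketch of a supervised text classifier in Python using scikit-learn. It is not Facebook's actual system: the example texts, labels and threshold logic are placeholder assumptions, and a real deployment would require large volumes of labelled data in the target language (here, Burmese).

```python
# Minimal illustrative sketch of a per-language text classifier.
# NOT Facebook's actual system: texts, labels and thresholds are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data: 1 = problematic, 0 = benign. These are placeholder English
# examples; a Burmese classifier would need Burmese training examples, since a
# model trained on one language does not transfer automatically to another.
texts = [
    "they should all be driven out",        # problematic (toy example)
    "this group does not belong here",      # problematic (toy example)
    "the market reopens tomorrow morning",  # benign (toy example)
    "heavy rain is expected this weekend",  # benign (toy example)
]
labels = [1, 1, 0, 0]

# Character n-grams are a common choice for languages where word segmentation
# is difficult, which is the case for Burmese script.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)

# Score a new message; in practice, content above some review threshold would
# be flagged for human moderators or automated action.
new_message = ["this group must be driven out of the city"]
prob = model.predict_proba(new_message)[0][1]
print(f"probability problematic: {prob:.2f}")
```

The point of the sketch is simply that the vocabulary, script and training data are language-specific: building an equivalent classifier for Burmese means collecting and labelling Burmese-language data, which is why coverage can lag in some countries.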
