Why Meta, Reddit and Google are accused of complicity in a mass shooting


Vincent Mannessier

July 17, 2023 at 1:45 p.m.


Several families of victims of a mass shooting in the United States have filed a lawsuit against these platforms.

The lawsuit holds that the platforms’ safety rules are deeply flawed and dangerous, and partly responsible for a racist shooting carried out at a supermarket in Buffalo, New York, last year. The shooter, who admits he was radicalized online, used several social platforms before and during his murderous attack without ever being flagged by their moderation systems.

The terms of the lawsuit

Just over a year ago, a young white man drove more than 300 kilometers to kill ten people at a Buffalo supermarket, after Google-searching the city’s predominantly Black neighborhoods. He had planned the attack on Discord for months and broadcast it live on Twitch.

In the complaint, which names Meta, Instagram, Reddit, Google, YouTube, Snapchat, 4chan, Twitch and Discord, the plaintiffs recall that “Gendron [the shooter’s name], by his own admission, was not a racist before becoming a social media addict and being drawn into a psychological vortex by failing social media […] and receiving a constant stream of racist and white supremacist propaganda”. It was thus the racist “Great Replacement” theory that allegedly pushed Gendron to act, and it would not be the first time social networks played that role. Adding to the horror, the mother of one of the victims was reportedly tagged in one of the videos of the massacre, which was massively shared online.

The plaintiffs are therefore seeking damages from the companies concerned, but above all a profound change in their content safety policies.


The failures of social networks

The proliferation of misinformation and conspiracy theories on social media is nothing new, as the lockdown period vividly demonstrated. But it was already largely evident before then: many perpetrators of mass killings were radicalized online, and some even posted a plan or manifesto, often on 4chan, before acting.

In 2019, a researcher employed by Facebook found that profiles flagged as conservative could be exposed to conspiratorial content after only two days on the platform. The culprit is its recommendation algorithm, which locks users into ever more specific content based on their interests and political alignment. Remember, moreover, that Meta stands accused of complicity in not one but two lawsuits over crimes against humanity, for precisely this reason.

Reddit, long a perfect playground for conspiracy theorists and extremists, has taken significant steps on the issue in recent years, while remaining in an uncomfortable middle ground in the name of freedom of expression.

For her part, a Snapchat spokeswoman explained that the platform “doesn’t let unmoderated content go viral by algorithm. On the contrary, we moderate all content before it can reach a large audience, which limits the risk of dangerous content being discovered”. A YouTube representative also responded: “For years, we have invested in the technology, teams and policies needed to identify and remove extremist content. We regularly engage with law enforcement and civil society to share information and best practices.”

Sources: Gizmodo, PA


