Congress forces tech companies to be transparent

After months of asking the companies to hand over documents voluntarily, a US congressional committee is tightening its stance on the tech industry. Under subpoena, the social networks must now deliver documents showing the extent to which conspiracy theories and calls for violence contributed to the storming of the Capitol.

Were they incited to violence on social networks? Trump supporters storm the Capitol, January 6, 2021.

John Minchillo/AP

The congressional committee investigating the storming of the Capitol has subpoenaed internal information from the operators of social networks. The Meta platforms Facebook and Instagram, Alphabet subsidiary Youtube, Twitter and Reddit are affected. Even after months of requests, the companies had not provided sufficient information, the House of Representatives committee said on Thursday in justifying the coercive measure.

The companies must now provide documents and expert witnesses within the next two weeks to answer the committee’s questions. If they continue to give incomplete or evasive answers, they could end up in court.

The committee is investigating the extent to which misinformation, conspiracy theories and calls for violence were spread on the platforms in connection with the storming of the Capitol. It also wants to know what steps the companies have taken to avoid serving as “breeding grounds for radicalization”.

In his letter to Meta CEO Mark Zuckerberg, committee chairman Bennie G. Thompson (Democrat) wrote that, despite three requests, Facebook had provided insufficient information on how hate messages, calls for violence, conspiracy theories and false information had spread. The committee is now demanding extensive insight into how content is moderated on Facebook.

In its letters to Twitter and to Google parent Alphabet, the committee complains that the companies have so far withheld the reasons for which they blocked ex-President Donald Trump’s accounts. Documents on this should give the public insight into how the companies internally weigh freedom of speech in the digital space, the obligation to prevent violence, and the platforms’ right, as private hosts, to decide for themselves who may use their services.

Who is liable for content?

The subpoenas come at an awkward time for the tech companies. For several months, American politicians have increasingly been debating a central pillar of existing tech regulation, Section 230 of the Communications Decency Act.

The 1996 law shields social networks from liability for comments and content that users spread on their platforms. An example: if user A accuses user B of sexual harassment in a Facebook post, user B can sue user A for defamation, but thanks to Section 230 Facebook is not liable for distributing the accusation.

Critics of the rule say it allows the companies to absolve themselves of responsibility for spreading conspiracy theories that divide society and thereby undermine the democratic culture of debate.

The companies themselves see it differently. Facebook, Twitter and Youtube repeatedly emphasize that they take consistent action against hate and disinformation on their platforms. Yet they lack the incentive to delete or restrict more extreme content: after all, users stay online longer when they consume polarizing content, and the platforms earn more as a result. More and more politicians are therefore taking the position that self-regulation by the networks is no longer sufficient. The committee’s coercive measure is likely to reinforce that view.

Meanwhile, the tech companies are reacting much as one would expect. Facebook parent Meta told the “New York Times” that it had handed over the documents requested by the committee and would continue to do so. Alphabet likewise wrote that it had cooperated with the committee and that Youtube has strict rules against content that incites violence or undermines trust in democratic elections.
