Freedom of expression: OSCE team calls for rights to encryption and anonymity


The Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, Teresa Ribeiro, published a handbook on artificial intelligence (AI) and freedom of expression on Wednesday. Her aim is to provide the 57 participating states in Europe, North America and Asia with guidance on finding “multilateral solutions to the challenges in their shared information space”. The diplomat emphasizes: “In doing so, they must put human rights at the center of the development and use of AI for the selection and moderation of online content.”

Handling the “immeasurable variety of information on the Internet” is no longer possible without machine learning and other forms of AI, Ribeiro writes in the foreword. Algorithmic decision-making systems have become “the most important tools for shaping and communicating online content”. They determine which content is removed, which takes precedence, and to whom it is distributed. These systems are “developed and deployed by a handful of online platforms – the gatekeepers to the digital world”.

According to the Portuguese diplomat, Facebook, Google, Twitter & Co. are “powerful companies that are able to shape political and public discourse”. Through their filter and recommendation systems, they undoubtedly have “a direct and significant influence on global peace, stability and overall security”. “With such power comes responsibility,” Ribeiro demands. However, the gatekeepers' business practices are evolving at a pace “that exceeds any legal or regulatory framework for the use of AI to shape our online information space”. Society is thus “at a crossroads”.

In this field of tension, the handbook's 13-member team of authors – after discussions and working sessions with around 120 other researchers and practitioners in the fields of freedom of expression, technology and security – makes a large number of recommendations that go far beyond rules for trustworthy AI and algorithms. Protecting users' privacy also plays a major role.

“States should legally ensure anonymity and encryption,” is one of the demands in this area. This guarantee must also extend to ensuring “that opinions are not disclosed involuntarily”. The EU member states, by contrast, passed a highly controversial resolution more than a year ago with which they want to guarantee “security despite encryption”. In essence, the governments' aim is to undermine effective cryptographic processes and services in the fight against online crime and to give security authorities access to plaintext.

However, the handbook states: “Public authorities and law enforcement agencies in particular should have very limited and targeted access to data, restricted to specific identifiers or specific categories.”

Many of the appeals are aimed at preventing, or at least clearly limiting, the surveillance of users via online advertising. “States should prohibit the indiscriminate mass collection and analysis of user data for targeted advertising that harms users individually or collectively or interferes with their right to freedom of thought and expression,” the handbook says. On the basis of the precautionary principle, it makes sense to clearly define the extent to which advertising methods cause “damage” individually, collectively, or to democratic processes.

In this way, policymakers can “set a threshold for the ban on harmful surveillance-based advertising practices,” the authors emphasize. Such steps should, for example, prohibit sophisticated influencing techniques that act like a weapon and attempt to exploit psychological vulnerabilities.

A similar call reads: “States should make corporate surveillance, including targeted advertising that uses tracking and profiling, conditional on human rights due diligence” and a record of compliance with the relevant UN Guiding Principles. For all data-collection and advertising-based business models, states should require human rights impact assessments in advance. At the same time, legislators could directly restrict the types of data that may be used for advertising. The EU Parliament is pursuing this with the planned Digital Services Act (DSA).

In general, the authors urge transparency in AI: states should enforce principles such as “clarity, explainability and accessibility”. It is important to ensure that the protection of human rights “is not completely outsourced or automated”. Public-private partnerships must be clearly recognizable as such. “Robust legal redress mechanisms against censorship and surveillance powers” are needed, for example through human review and independent appeal options. Independent oversight is also required, above all to ensure that AI systems do not discriminate and that they respect human rights.


(bme)
