“The new model increasingly relies on the accountability of companies”

Tribune. Artificial intelligence (AI) systems present many risks for individuals today: they can invade privacy, discriminate, manipulate, or even cause physical, psychological or economic harm. For example, AI systems can perpetuate social biases and discriminate against women or LGBT (lesbian, gay, bisexual and transgender) people in the workplace, or against social minorities in crime-prevention contexts.

They can also manipulate the online behavior of children, the elderly or other consumers, exploiting their vulnerabilities through advanced data analytics and pushing them toward unwanted or unreasonable commercial (or even electoral, as in the Cambridge Analytica scandal) decisions. At the same time, these systems can be opaque and therefore difficult to challenge.


To address these issues, the European Commission published on April 21 a new proposed regulation on artificial intelligence, which must now be discussed and approved by the Council and the European Parliament. The proposal introduces a risk-based approach for AI-based products and services, with ambitious design rules and administrative obligations, but it does not create any new individual rights for consumers and citizens.

Prohibitions

Whereas in the past the emphasis was on the ‘notification and individual rights’ model, the new model increasingly relies on the accountability of companies, based on technical and organizational measures aimed at mitigating the risks AI poses to humans, with a supervisory authority monitoring compliance.

In the proposed framework, certain AI systems are prohibited: “dark patterns” (manipulative online design practices) that cause non-economic harm to consumers or exploit their vulnerability due to age or disability; social scoring systems producing disproportionate or out-of-context detrimental effects; and biometric identification systems used by law enforcement in public spaces (when their use is not strictly necessary or when the risk of adverse effects is too high).


Other AI systems are considered high-risk, including facial and emotion recognition; AI used in critical infrastructure, in education, employment, emergency services, asylum and border control, and in social assistance; AI used for credit assessment; and AI used by law enforcement or judges.
