EU court: some documents on “lie detector” must be handed over


Partial success for MEP Patrick Breyer in his lawsuit against the European Research Executive Agency: The EU court ruled in the Pirate Party politician’s favor insofar as the authority, which works for the EU Commission, may no longer deny the public access to documents on the iBorderCtrl project wholesale with a blanket reference to business secrets. The dispute concerns tests of an “intelligent lie detection system” for entry into the EU.

The Luxembourg judges ruled in the case (Case T-158/19) that, on the basis of the EU regulation on freedom of information in the public sector, the research agency must publish an ethical and legal assessment of technologies for “automatic deception detection” and for automated risk assessment. Excepted are passages that relate specifically to iBorderCtrl.

By contrast, the authority does not have to release expert reports on ethical risks such as potential stigmatization and false positives, nor an analysis of the legal admissibility of the specific technology tested as part of the research project. Reports on the results of the initiative will also remain withheld from the public for the time being.

In these cases, the court recognized the overriding protection of the consortium’s commercial interests. According to the court, the public’s interest in transparency is satisfied by the requirement that those involved publish a scientific report on the project within four years.

In 2016, the EU Commission provided 4.5 million euros under the Horizon 2020 research program for the development of a new border control system entitled “Intelligent Portable Border Control System”. Among other things, the system was to rely on artificial intelligence (AI), biometric identification, automated deception detection, document authentication, and cumulative risk assessment. The plan also called for travelers to the EU to be questioned by an avatar that, like a polygraph, checks their answers for truthfulness.

After the project became known, critics complained that it would intrude deeply on the privacy of those affected, contribute to discrimination, and fail to stand up to scientific scrutiny. In March 2019, Breyer requested the release of several documents on impact assessments and initial results that had previously been kept under lock and key, and sued when the research agency largely rejected the request.

The authority argued in court that it had examined each of the documents concerned individually and found that they contained internal knowledge of the consortium members on “intellectual property, ongoing research, know-how, methods, techniques and strategies”. Disseminating this knowledge would give potential competitors an advantage.

The judges accepted the plaintiff’s view that there is a public interest in an informed democratic debate about “whether control technologies such as the one in question are desirable and whether they should be financed by public money”. However, since the matter concerned “only” a research project, Breyer was unable to demonstrate “that the principle of transparency was particularly pressing in the present case”. Therefore, not all of the requested papers have to be made public immediately.

In addition, the court underlined that the relevant regulation on access to documents “is intended to grant the public the widest possible right of access to the documents of the institutions”. Exceptions to this are to be interpreted narrowly. As far as the concept of commercial interests is concerned, “not all information about a company and its business relationships can be protected”. The defendant agency must bear the greater part of the costs of the proceedings.

Breyer spoke of a “landmark judgment” that “will generally give a boost to the public debate about highly dangerous technology for mass surveillance, crowd control and screening”. In the future, “business secrets” would “no longer be a knockout argument”.

The decision underlines “the risky nature of experiments with border technologies such as AI-based lie detectors,” said Petra Molnar of the Refugee Law Lab. “We need tighter controls that recognize the very real harm caused by these experimental and harmful technologies.”

The decision also has a bearing on the planned EU rules for AI, emphasized the civil rights organization European Digital Rights (EDRi): systems that use people’s physical, physiological, and behavioral data in ways that threaten fundamental rights must no longer be permitted. The EU institutions would also finally have to commit to a broad ban on biometric mass surveillance, for instance through automated facial recognition, as demanded by the EU Parliament, among others. The rules should draw a red line against questionable practices such as iBorderCtrl. Meanwhile, the Commission is already financing a follow-up project, “Trespass”.


(tiw)
