“We must clearly define the objective that the Family Allowance Fund wishes to achieve with its algorithm”

To combat illegal practices more effectively, supervisory authorities are constantly integrating new technologies, and they are increasingly turning to algorithms. But, as highlighted by the investigation published on December 4, 2023 by Le Monde into the targeting practices of the Family Allowance Fund (CAF), or by the debates around the use of identification systems during the next Olympic Games in Paris, the use of these technologies can raise a number of concerns.

This is particularly the case when it affects populations in vulnerable situations. While the concerns raised by Le Monde's investigation are above all ethical and moral, an economic perspective, which we offer here, can also provide insight.

The constraints, particularly budgetary ones, weighing on the institutions responsible for detecting fraud push them to adopt measures to maximize their effectiveness, in particular the targeting of controls rather than completely random controls. This targeting, whether carried out by humans or by algorithms, is generally based on a set of observable parameters associated with a higher probability of fraud.

Thus, alcohol checks are stepped up on Saturday evenings around party venues and are less frequent during the week. Likewise, the CAF carries out few or no checks on groups whose risk of fraud is low or nil, such as, obviously, people who do not receive CAF benefits.
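To make the economic logic of targeting concrete, here is a minimal sketch, in Python, comparing random controls with risk-based targeting under a fixed budget. All names, scores, and fraud rates are hypothetical assumptions chosen for illustration; they are not drawn from the CAF's actual system.

```python
# Minimal sketch (illustrative only): targeted vs. random controls under a fixed budget.
# All numbers are hypothetical assumptions, not real CAF data.
import random

random.seed(0)

# Simulated population: each case has an observable risk score and a hidden fraud flag.
population = []
for _ in range(10_000):
    risk_score = random.random()                            # observable proxy for fraud probability
    is_fraud = random.random() < 0.05 + 0.25 * risk_score   # fraud is assumed more likely at high scores
    population.append((risk_score, is_fraud))

budget = 500  # number of controls the institution can afford

# Random controls: inspect a uniform sample of cases.
random_sample = random.sample(population, budget)
random_hits = sum(is_fraud for _, is_fraud in random_sample)

# Targeted controls: inspect the cases with the highest risk scores.
targeted_sample = sorted(population, key=lambda case: case[0], reverse=True)[:budget]
targeted_hits = sum(is_fraud for _, is_fraud in targeted_sample)

print(f"Fraud found with random controls:   {random_hits}/{budget}")
print(f"Fraud found with targeted controls: {targeted_hits}/{budget}")
```

Under these assumptions, the targeted strategy detects far more fraud for the same budget, which is precisely why institutions under budgetary constraints prefer it to random checks.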


The use of sensitive parameters may give rise to discrimination. For this reason, a number of criteria are already prohibited in France, such as ethnic origin or gender. The Court of Justice of the European Union (CJEU) has just ruled, on Thursday, December 7, 2023, that decision-making based on scoring systems that use personal data is unlawful.

Statistical correlations

However, a protected parameter can be indirectly approximated from authorized criteria thanks to statistical correlations. For example, shoe size may indicate gender, while neighborhood of residence may suggest ethnic origin. The potential use of thousands of such correlations makes it difficult to prevent the use of protected criteria. Concerning the possible indirect discrimination produced by the CAF algorithm, the task will therefore be to estimate it and to determine whether it is justified by a legitimate aim.
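To illustrate how such correlations can produce indirect discrimination even when no protected criterion is used directly, here is a minimal sketch with purely hypothetical variables and numbers: a targeting rule built only on an "authorized" variable (neighborhood) ends up controlling one group far more often than another, and the resulting disparity ratio is one simple way to estimate the effect.

```python
# Minimal sketch (illustrative only): an authorized variable acting as a proxy for a
# protected one, and a simple estimate of the resulting disparity in control rates.
# All variables and numbers are hypothetical assumptions, not real CAF data.
import random

random.seed(1)

cases = []
for _ in range(10_000):
    protected = random.random() < 0.3  # hidden protected attribute (never used by the targeting rule)
    # "Neighborhood risk" is an authorized variable, assumed here to be correlated with the protected one.
    neighborhood_risk = random.gauss(0.7 if protected else 0.4, 0.15)
    cases.append((protected, neighborhood_risk))

# Targeting rule that only uses the authorized variable: control the top 5% by neighborhood risk.
threshold = sorted(risk for _, risk in cases)[int(0.95 * len(cases))]
controlled = [(protected, risk >= threshold) for protected, risk in cases]

def control_rate(group_flag):
    group = [ctrl for protected, ctrl in controlled if protected == group_flag]
    return sum(group) / len(group)

rate_protected = control_rate(True)
rate_other = control_rate(False)
print(f"Control rate, protected group: {rate_protected:.1%}")
print(f"Control rate, reference group: {rate_other:.1%}")
print(f"Disparity ratio (indirect effect): {rate_protected / rate_other:.1f}x")
```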

