Ethics, industrialization and security, green AI… The 2022 trends in artificial intelligence


Faced with the acceleration of climate change and the awareness of the contribution of artificial intelligence to carbon emissions, two concepts stand out: Green AI and AI for green.

The first promotes artificial intelligence that consumes less energy and fewer resources, aiming to reduce its own carbon impact. To do so, creators can use less complex models, limit the number of training runs, or accept a trade-off between technical performance and energy consumption.
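One concrete way to limit the number of training runs, as suggested above, is early stopping: halt training once the validation loss stops improving instead of spending a fixed compute budget. The sketch below is illustrative only (the function name and loss values are invented for the example), but it captures the frugal-AI idea of trading a sliver of accuracy for less energy.

```python
# Illustrative sketch (not from the article): early stopping cuts the number
# of training epochs, trading a little accuracy for less compute and energy.
def train_with_early_stopping(losses_per_epoch, patience=3, min_delta=0.01):
    """Return the epoch at which training stops.

    `losses_per_epoch` is a sequence of validation losses; training halts
    once `patience` consecutive epochs fail to improve by `min_delta`.
    """
    best = float("inf")
    stalled = 0
    stop_epoch = 0
    for epoch, loss in enumerate(losses_per_epoch):
        stop_epoch = epoch
        if best - loss > min_delta:
            best = loss
            stalled = 0
        else:
            stalled += 1
            if stalled >= patience:
                break  # stop early: the remaining epochs are never run
    return stop_epoch

# A 20-epoch budget whose loss plateaus after epoch 3 stops at epoch 6:
losses = [1.0, 0.6, 0.4, 0.35] + [0.349] * 16
print(train_with_early_stopping(losses))  # → 6
```

Here 14 of the 20 budgeted epochs are skipped, which is exactly the kind of compromise between technical performance and energy consumption the article describes.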

The second concept, “AI for Green”, refers to artificial intelligence developed for sustainable uses in the service of the environment: optimizing car journeys to save fuel, predicting extreme weather events and anticipating their impacts, etc. These applications of artificial intelligence help facilitate the sustainable transition across most economic sectors.

These two concepts ultimately answer the same urgent need: to contribute to efforts to reduce carbon emissions. In 2022, it is essential for the creators of artificial intelligence algorithms to integrate them, first by reflecting upstream on the purpose and usefulness of the algorithm, then by favoring frugal AI for development and production.

Moving from successful POCs to industrialization

While companies show growing maturity in creating Machine Learning algorithms, the number of algorithms actually in production has not followed: the ratio of industrialized algorithms to POCs remains low, even zero for some companies. To facilitate the industrialization of algorithms, the major digital players have gradually adapted DevOps techniques to Machine Learning, giving rise to MLOps.

Putting a Machine Learning model into production involves several steps, from the design of the model to its integration into the existing infrastructure and processes. Among the elements necessary for the model to function properly in production are data acquisition and cleaning, training automation, experiment tracking, deployment and model drift monitoring.
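Of the elements listed above, model drift monitoring is the one most often missing from a POC. A minimal sketch, assuming nothing beyond the standard library (the function name and threshold are illustrative, not a reference to any particular MLOps tool), compares live input statistics against the training baseline:

```python
# Hypothetical sketch of one MLOps building block: flag drift when the mean of
# live inputs moves too many standard errors away from the training mean.
from statistics import mean, stdev

def drift_alert(train_values, live_values, z_threshold=3.0):
    """Crude but common first check for input drift.

    Returns True when the live mean deviates from the training mean by more
    than `z_threshold` standard errors; real pipelines would monitor many
    features and use richer tests (e.g. on full distributions).
    """
    mu, sigma = mean(train_values), stdev(train_values)
    standard_error = sigma / (len(live_values) ** 0.5)
    z = abs(mean(live_values) - mu) / standard_error
    return z > z_threshold

train = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8, 10.1, 10.4]
print(drift_alert(train, [10.1, 9.9, 10.3, 10.0]))   # same distribution → False
print(drift_alert(train, [14.8, 15.2, 15.0, 14.9]))  # inputs shifted → True
```

In production such a check would run continuously and trigger retraining or an alert, which is where the training-automation and deployment steps above come back into play.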

Already identified last year, MLOps continues to represent an opportunity for companies to seize: putting a Machine Learning model into production requires addressing technical challenges and the scarcity of data-engineering and DevOps skills. The rise in customer demand for AI in production is marked across all sectors, and this trend is expected to accelerate in 2022.

Prevent and protect AI solutions in production

The growing number of AI solutions in production, and therefore accessible to users, leads to an increase in attacks on and hijackings of these tools. These attacks can fool a model by modifying its input: for example, a binary text classifier may label the sentence “eat a child” as “bad”, whereas “eat a child because I was very hungry” can be labeled “good”. Add to this the possibility that the algorithm is unprotected: it can then reveal the data on which it was trained, with serious consequences when personal data is involved.
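The sentence-padding attack above can be reproduced with a deliberately naive toy model. The word lists and scoring rule below are invented for illustration, not taken from any real classifier, but they show the mechanism: extra benign-looking tokens dilute or flip a simplistic score.

```python
# Toy illustration (assumption, not a real production model): a naive
# bag-of-words score is flipped by padding a harmful sentence with
# innocuous context, mirroring the "eat a child ..." example in the text.
BAD = {"eat", "child"}                         # tokens the toy model penalizes
GOOD = {"because", "was", "very", "hungry"}    # tokens it naively rewards

def toy_classifier(sentence):
    words = sentence.lower().split()
    score = sum(w in GOOD for w in words) - sum(w in BAD for w in words)
    return "good" if score >= 0 else "bad"

print(toy_classifier("eat a child"))                           # → bad
print(toy_classifier("eat a child because I was very hungry"))  # → good
```

Real models are fooled by subtler perturbations (synonym swaps, invisible characters, paraphrases), but the failure mode is the same: the decision depends on surface features an attacker controls.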

To deal with these threats, three measures are recommended: (1) carry out technology watch to identify new attacks and stay aware of the subject; (2) when building an AI product, integrate from the scoping phase the identification of potential threats and security constraints (of the model, of the infrastructure hosting the solution, of the type of access, etc.); and (3) during model development, use techniques that protect the training data, such as differential privacy or knowledge distillation. Finally, another preventive method consists in systematically applying robustness tests to identify the solution's vulnerabilities before it goes into production.
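Of the training-data protections named above, differential privacy is the easiest to sketch. The example below is a minimal illustration on a counting query (the function name and data are invented): Laplace noise calibrated to the query's sensitivity and a privacy budget ε makes it hard for an attacker to infer whether any single record was present.

```python
# Sketch of differential privacy on an aggregate query: release the true
# count plus Laplace(sensitivity / epsilon) noise. A counting query has
# sensitivity 1 (adding or removing one record changes it by at most 1).
import math
import random

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count of records matching `predicate`."""
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) noise by inverse-transform sampling.
    u = random.random() - 0.5
    noise = -math.copysign(1.0, u) * (1.0 / epsilon) * math.log(1 - 2 * abs(u))
    return true_count + noise

random.seed(0)  # seeded only to make the illustration reproducible
ages = [23, 35, 41, 29, 52, 38]
print(dp_count(ages, lambda a: a > 30))  # close to the true count (4), but noisy
```

Training full models under differential privacy (e.g. by clipping and noising gradients) follows the same principle; the cost is some accuracy, which echoes the performance/protection trade-offs discussed throughout this section.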

With all these parameters in mind, it is clear that anticipating the security and protection of artificial intelligence algorithms before they go into production pays off: it prevents misuse and leaks of confidential data.

Prioritize transparent artificial intelligence to gain user trust

In 2022, trusted artificial intelligence is moving from a set of theoretical values and concepts to an operational lever that AI creators must integrate into the design of their algorithms. Trustworthy artificial intelligence by design must incorporate, throughout the life cycle, the explainability of models and the interpretability of decisions, but also the detection and management of biases and the possibility of human intervention (human in the loop).
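The human-in-the-loop principle mentioned above has a very simple operational core: only apply a model's decision automatically when its confidence is high enough, and route everything else to a person. The sketch below is an assumption-laden minimal version (the function name, labels and threshold are illustrative):

```python
# Minimal human-in-the-loop gate: low-confidence predictions are not applied
# automatically but queued for a human reviewer to confirm or override.
def route_prediction(label, confidence, threshold=0.8):
    """Return the decision path for one model output."""
    if confidence >= threshold:
        return ("auto", label)          # applied without human intervention
    return ("human_review", label)      # queued for a person to decide

print(route_prediction("approve", 0.95))  # → ('auto', 'approve')
print(route_prediction("reject", 0.55))   # → ('human_review', 'reject')
```

The threshold itself then becomes a governance parameter: lowering it automates more decisions, raising it sends more of them to humans, which is exactly the kind of design choice trustworthy-AI frameworks ask teams to make explicitly.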

Operationally integrating these values poses a real challenge of acculturation and team training for companies, which have everything to gain as responsible artificial intelligence becomes one of the pillars of the European digital landscape.

In the move from black-box AI to transparent and explainable AI, everyone wins. It is of course necessary to gain and maintain the trust of the algorithms' users, but also to protect companies' reputations, to avoid putting flawed algorithms on the market and, finally, to anticipate future regulations.

Under the impetus of a European Union leading the way with its AI regulation project, the design and use of artificial intelligence solutions this year should naturally be oriented towards responsible, trustworthy AI that is operational enough to meet user demand as well as future regulations.

More than a trend, the appearance of several reference frameworks (AI certification from LNE, trusted AI label from Labelia, etc.) suggests that this kind of AI will become the standard at the European level, and perhaps soon worldwide. This week, the American collective Business Roundtable, representing 230 companies from all sectors, asked the Biden administration to establish rules in this direction. There is no doubt that in 2022, the number of certified companies should increase.




