At Microsoft, the AI ethics team… no longer exists


Mathieu Grumiaux

March 14, 2023 at 2 p.m.


© Shutterstock

Microsoft wants to deploy ChatGPT across its products as quickly as possible, and chose to eliminate its AI ethics team to save time.

The tech giants now see artificial intelligence as the new El Dorado, and they are locked in a sprint to be the first to dominate this highly strategic market.

Tech giants adopt artificial intelligence at a forced pace

Microsoft is now one step ahead of its competitors. The Redmond-based company has invested nearly $10 billion in OpenAI, the firm behind ChatGPT, and its longstanding ties with the artificial intelligence specialist have allowed it to accelerate the integration of the technology into Bing, its search engine.

Microsoft does not intend to stop there and wants to bring artificial intelligence to all of its products, from Windows to Office and Teams.

Artificial intelligence nevertheless raises ethical questions, and Microsoft is well placed to know it. The integration of ChatGPT into Bing has not been smooth sailing, with the assistant sometimes giving aggressive responses after a few questions or making inappropriate remarks on very sensitive topics.

Microsoft had a team responsible for addressing these ethical issues and anticipating the problems raised by the development of such artificial intelligence systems, but we now learn that this division is history and that its members have been removed from their posts.

Microsoft sweeps ethical questions under the rug to move ever faster with ChatGPT

The dismantling of the AI ethics team did not happen overnight. As The Verge recalls, 30 engineers were assigned to the issue in 2020, but the team was down to seven employees by October 2022.

Microsoft chose to disband the unit entirely a few weeks ago in order to speed up the deployment of ChatGPT across its software, under pressure from CEO Satya Nadella and CTO Kevin Scott.

The remaining members of the dedicated ethics division have been reassigned to various product teams, where they are expected to lend their perspective and expertise to developers. For its part, Microsoft says it remains committed to developing AI products and experiences in a safe and responsible manner, investing in the people, processes, and partnerships that put these considerations first. The company also retains a role dedicated to "responsible artificial intelligence", tasked with setting out its main ethical principles, even if its influence appears considerably reduced.

The problems remain very real, even if Microsoft seems to be turning a blind eye to them. In 2022, the engineers in charge of ethical issues had already raised concerns about Bing Image Creator, which uses OpenAI's DALL-E system to generate images entirely with artificial intelligence. According to Microsoft's ethics officers, this very powerful tool could ultimately harm artists and illustrators who make a living from their work, and it raised copyright issues. Despite several internal memos on the subject, management decided to disregard the group's opinion and launch the product in several markets, accepting the risk of seeing its brand image tarnished by one or more complaints.

Source: The Verge


