OpenAI affair: the race for AI supremacy is no longer played only between nations


The past week has been breathtaking for OpenAI. It now seems certain that a deal has been reached for its co-founder and CEO Sam Altman, who had been ousted, to return to the helm of the company.

The move comes after several twists and turns, during which OpenAI lost its CEO, saw him join Microsoft, replaced its first interim CEO with a second interim CEO, and faced a staff revolt.

No one knows at this time whether Altman will get a seat on the board of directors – a seat he did not have before. In an interview with Bloomberg in June 2023 about trust in AI, he said: “You shouldn’t trust just one person… The board of directors can fire me. I think that’s important.”

Furthermore, it is not yet known whether OpenAI co-founder and chief scientist Ilya Sutskever will return. Mr. Sutskever sat on the previous board of directors, alongside Helen Toner, and both were suspected of having played a role in the decision to remove Mr. Altman. Sutskever, however, later expressed regret for having acted in this way.

Helen Toner, who had remained silent throughout the affair, finally commented after it was revealed that Altman would return: “And now we all go to sleep.”

Helen Toner vs. Sam Altman

Helen Toner had co-authored a research report that Altman considered critical of OpenAI. The report highlighted that the company’s efforts to ensure the safety of its AI developments were less extensive than those of its competitor Anthropic. According to an article in The New York Times, this report angered Sam Altman enough that he campaigned for Helen Toner’s removal from the board of directors.

In its initial statement announcing Altman’s firing, OpenAI’s board said it no longer had confidence in his ability to lead the company.

The board also noted that OpenAI, established as a nonprofit organization in 2015, was “structured to advance our mission” of ensuring that artificial general intelligence (AGI) will benefit all of humanity. “The Board remains fully committed to serving this mission…we believe new leadership is needed moving forward,” the statement read.

The future of AI is in the hands of a very small group of actors

The company was restructured in 2019 to allow it to raise capital to continue its mission, while “preserving” the governance and oversight of the nonprofit. “While the company has experienced spectacular growth, the Board of Directors’ fundamental governance responsibility remains to advance OpenAI’s mission and preserve the principles of its charter,” it said.

Because the board – including Toner and Sutskever, who were reportedly concerned that Mr. Altman was prioritizing expansion over AI safety – chose to remain largely silent on the reasons behind the decision to fire Mr. Altman, speculation multiplied on social media.

As tensions between Mr. Altman and the board grew, most observers noted that the debate was most likely between AI safety and corporate profit. And therein lies the crux of the problem. These remain assumptions and speculation, as there simply is not enough information, if any, about what actually concerns the OpenAI board.

What facts did Mr. Altman omit or lie about? Is OpenAI’s research and development now close to AGI (artificial general intelligence)? Is the board unsure that “all of humanity” is ready for it? Should the general public and governments also be concerned?

If there’s one thing that’s become even clearer over the past week, it’s that the future of AI is largely in the hands of a very small group of market players.

Big Tech has the resources to determine the impact of AI on society

Big Tech players have the resources to determine the impact of AI on society. However, this technological elite only represents a tiny part of the population.

In the space of a few days, this elite managed to orchestrate Altman’s ouster, his hiring at Microsoft (albeit short-lived), the potential transfer of almost the entire OpenAI workforce to another major player in the market, and Altman’s possible reinstatement.

And they did all this without explaining why he was fired, and without verifying or refuting concerns that profit was being prioritized over AI safety at OpenAI.

It was also mentioned that OpenAI’s new board of directors would launch an investigation into the reasons behind Altman’s dismissal. But this would be an internal investigation.

Practice what AI transparency preaches

Transparency is essential to the development and adoption of any AI – generative, AGI or otherwise. Transparency is the foundation of trust, on which most people agree AI must be built if it is to gain human acceptance.

Large technology companies also preach the importance of transparency in the implementation of responsible and ethical AI.

And when transparency is lacking, it must be enforced through regulation. We need legislation that does not seek to inhibit market innovation in AI, but instead requires transparency in how that innovation is developed and advanced.

The OpenAI case offers governments lessons on how the development of AI should be overseen. We have also witnessed the complexity of governing AI, even when its development is tied to a non-profit corporate framework.

The fact that a key employee had to resign in order to speak freely about the risks of AI makes it clear that market participants are unlikely to be fully transparent about their development work, even if they have committed to doing so.

This highlights the need for strong governance to ensure they do so, and the urgency of putting such governance in place.

Lawmakers will need to act quickly. The UK-led Bletchley Declaration on AI safety is a big step forward. Indeed, 28 countries, including China, the United States, Singapore and the European Union, have agreed to collaborate on identifying and managing the potential risks of “frontier” AI. The multilateral agreement highlights countries’ recognition of the “urgent need” to ensure that AI is developed and deployed in a “safe and responsible” manner for the benefit of the global community.

The United Nations also plans to establish an advisory team to review international AI governance to mitigate potential risks, pledging to take a “globally inclusive” approach.

I hope people in these organizations and governments are taking notes on the OpenAI case. Indeed, the debate is no longer just about which country will dominate the AI race, but also about whether Big Tech companies will adopt the necessary safeguards.


Source: “ZDNet.com”