“Threat to humanity”: This is what you need to know about the super AI Q*

The warning about a potentially dangerous development in artificial intelligence allegedly played an important role in the dismissal of Sam Altman as head of the ChatGPT provider OpenAI. What is behind the alleged superintelligence Q*?

Until a few days ago, so-called superintelligence was little more than a pipe dream. Many were amazed at what artificial intelligence can already do and at how quickly software developers release new and better programs. But even many experts could not imagine AI becoming smarter than humans, at least not yet. In expert circles it was often said that developing superintelligence would still take years. Now, however, there is speculation that an important breakthrough may already have been achieved.

The reason for this is a new project by ChatGPT inventor OpenAI called Q* ("Q-Star"). The model is reportedly able to solve mathematical problems it has never seen before on its own – from the experts' point of view, this would be a milestone on the way to "Artificial General Intelligence", AGI for short or, colloquially, superintelligence.

As the news agency Reuters and the magazine "The Information" reported, Q* is also said to have played a role in the dismissal of the now reinstated CEO and OpenAI co-founder Sam Altman. According to the two sources, a test version of the model circulating within OpenAI alarmed the company's safety experts. An internal letter to staff apparently warned that the development of Q* could pose a "threat to humanity".

“Nobody knows exactly what it is”

But what can the program do that has caused such waves of fear in software company circles? "Nobody knows exactly what it is," says Damian Borth, academic director of the doctoral program in computer science at the University of St. Gallen. "There's no blog post or paper that's been published. There's just speculation and that's what's interesting." Like many others in the community, he suspects the "Q" in the name is a reference to so-called Q-learning, an algorithm from reinforcement learning, a machine learning method. Put simply, a program interacts with its environment, makes decisions, and receives a reward for a beneficial action. The reward reinforces that action (hence "reinforcement"), so the program takes it more often; actions that lead to negative outcomes are correspondingly discouraged.
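For readers curious what Q-learning looks like in practice, here is a minimal textbook sketch. It is purely illustrative – nothing about Q* itself has been published – and uses a hypothetical toy environment: an agent on a line of five states that earns a reward only by reaching the rightmost one.

```python
# Tabular Q-learning sketch (textbook algorithm, not OpenAI's unpublished Q*).
import random

# Toy environment: states 0..4 on a line; action 0 = left, 1 = right.
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # Q-table: state x action
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
            if random.random() < epsilon:
                action = random.randrange(N_ACTIONS)
            else:
                action = max(range(N_ACTIONS), key=lambda a: q[state][a])
            next_state, reward, done = step(state, action)
            # Core update: nudge the estimate toward reward + discounted best future value.
            best_next = max(q[next_state])
            q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
            state = next_state
    return q

q = train()
# After training, "right" (action 1) should dominate in every non-goal state.
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

The reward at the goal gradually propagates backwards through the table, which is how the agent learns a strategy without ever being told one.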

Others in the OpenAI online community, however, suspect quantum computing is behind the project's code name. Quantum computers are extremely powerful and can solve certain complex problems with many variables faster than conventional computers. Borth, however, considers this unlikely. "OpenAI hasn't done much in this area, but has clearly relied on GPUs, i.e. graphics processors," he says. "In reinforcement learning, however, OpenAI has always been very strong. Alongside generative AI, to which ChatGPT belongs, it is one of the company's central pillars."

Many in the community suspect that the asterisk in Q* refers to the "A*" algorithm, which finds the shortest path between two nodes or points. Rather than blindly expanding the next reachable node, it uses additional information – a heuristic estimate of the remaining distance – to speed up the search.
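To illustrate, here is a compact A* implementation on a small grid – again only a textbook sketch of the algorithm the name may allude to, with Manhattan distance as the heuristic:

```python
# A* shortest-path search on a 4-connected grid (0 = free cell, 1 = wall).
import heapq

def a_star(grid, start, goal):
    """Return the shortest path from start to goal as a list of (row, col), or None."""
    def h(cell):  # admissible heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]          # entries are (f = g + h, g, cell)
    came_from, g_score = {}, {start: 0}
    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:                        # reconstruct the path backwards
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > g_score.get(cell, float("inf")):
            continue                            # stale heap entry, skip it
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_score.get((nr, nc), float("inf")):
                    g_score[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
print(len(path) - 1)  # → 6 steps around the wall
```

The heuristic is what distinguishes A* from blind search: it prioritizes nodes that look closer to the goal, which typically lets it explore far fewer cells.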

Users openly express skepticism

Although there is almost no reliable information about Q*, many in the community are already declaring the new AI model the "greatest breakthrough in human civilization", a "revolution" and a "groundbreaking" system. Big words, given that, according to Reuters and "The Information", Q* can so far only solve math problems at elementary-school level.

Some users therefore openly express skepticism: "As someone who has done a lot of research on AI, I can say that it is very easy to believe that you have achieved a breakthrough," writes one. Another writes that "human or super-human intelligence" needs a "different architecture": "Q* is a movement in this direction, but it is by no means clear whether it is 'that'," the user writes in the OpenAI forum.

In fact, the special thing about Q* is that it can reportedly solve mathematical problems independently. "According to the current state of knowledge, this is the first time that AI has succeeded in achieving the kind of intellectual performance required in mathematics," says Borth. "So the machine doesn't just parrot, as skeptics say of ChatGPT; Q* is said to have the ability to draw logical conclusions." Whether this is also a decisive step towards AGI cannot yet be said.

“For one thing, the definition of AGI is not entirely clear. Is it a machine that is self-aware, that works against humans, or that simply generalizes across multiple tasks?” says Borth. “On the other hand, in my opinion, AGI is not necessary to be dangerous to people. Depending on how we deal with our current systems, this could already happen.”

Altman is considered the face of the AI boom

The unrest also stems from the fact that the company itself allegedly warned about the model internally. Security experts are said to have been particularly unsettled by the pace of development, reports "The Information".

Altman, who is considered the face of the AI boom and is said to have had the goal from the start of teaching computers to learn on their own, said this about the potential risks of AI at a US Senate hearing this year: "My worst fears are that we – the field, the technology, the industry – cause significant harm to the world. […] I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that," said Altman, who is now back as CEO of OpenAI after an unprecedented back-and-forth.

The board initially fired Altman almost two weeks ago without giving reasons and named an interim CEO twice. Last Wednesday, however, the pressure from major investor Microsoft became too great and Altman returned to his post. At the same time, a new board was appointed, including former US Treasury Secretary Larry Summers. According to Sarah Kreps, director of the Tech Policy Institute in Washington, the new board supports Altman’s vision of accelerating the development of AI while providing security precautions.

This article first appeared at Capital.de.
