“We need rules”: AI keeps the Google boss up at night

Although the Internet giant Alphabet is itself pushing artificial intelligence hard, its boss is urgently calling for political oversight of the computing revolution. Otherwise, the technological arms race could end with a “god-like” machine that no one can control anymore.

With this interview he is likely to cause a stir: the head of one of the largest developers of artificial intelligence is demanding strict rules for his own products. Looking at the rapid development of the revolutionary technology, he is particularly concerned by “the pressure to use it in a beneficial way, while at the same time it can be very harmful if used incorrectly,” Google CEO Sundar Pichai told the US TV magazine “60 Minutes”. “We don’t have all the answers yet, and the technology is advancing fast. Does that keep me up at night? Absolutely.”

Sooner or later there will have to be regulation, Pichai urged, because “anyone who has ever worked on AI realizes that this is something so different and so profound that we need societal rules for how we adapt.” For example, Pichai demands: “We need laws against deepfakes, and there must be consequences if someone creates deepfake videos that harm society.” AI is already so advanced that it can imitate a person’s appearance and voice in a deceptively realistic way.

The Google boss warns that the technology’s consequences for mankind could be greater than the discovery of fire or electricity. “It goes to the core of what intelligence is and what constitutes humanity.” The big question is: “Will humanity ever lose control of the technology it develops?”

Flying blind toward the downfall of mankind

Google and other tech giants such as Microsoft and China’s Baidu are currently locked in a technological arms race for AI supremacy. The Silicon Valley group already uses the technology in photo apps such as Lens and Google Photos, but is otherwise taking a rather cautious course – so that society has enough time to get used to the new AI world, as Pichai puts it. OpenAI, by contrast, is pushing much harder with ChatGPT.

Pichai is not alone: other industry insiders have long demanded strict rules for the development of artificial intelligence, for fear of the potentially catastrophic consequences should it get out of control. “We need to slow down the race to godlike AI,” warned Ian Hogarth, an AI researcher and venture capital investor in dozens of AI startups, in the Financial Times last week. Hogarth is concerned above all by the drive to develop “artificial general intelligence” (AGI), which OpenAI and others have explicitly made their declared goal.

The technocratic label does not come close to conveying the technological and civilizational rupture this would represent: “a super-intelligent computer that learns and develops on its own, that understands its environment without supervision and can change the world around it.” This, he writes, could “destroy the human race or render it obsolete.” The AI companies are “running toward a finish line without understanding what’s on the other side.”

Billions to let the genie out of the machine

According to Hogarth, the companies are not there yet, but they are getting ever closer to that goal. This is mainly due to two factors: massive capital investment and exponentially growing computing power, which has increased dramatically over the past ten years – by a factor of 100 million. Instead of small data sets, the algorithms are now fed the entire Internet. That is why they can already pass exams, write software – and, like OpenAI’s GPT-4, actively deceive people.

The race is also driven above all by money. Since the release of ChatGPT last winter, a gigantic wave of capital has swept into the industry: since the beginning of this year alone, eight leading AI companies have raised more than $20 billion. DeepMind’s first financing round in 2012, by contrast, raised only $23 million. Hogarth considers it wrong that “consequential decisions, potentially affecting every life on earth, are made by a small group of private companies without democratic control”.

Incidentally, in the “60 Minutes” segment Google gives an exclusive look at an experimental tool the company has not yet released: at the push of a button, it turns text input into moving images. It may be the entry point into a not-too-distant future in which human imagination becomes digital reality. In the segment, this is demonstrated with a flying golden retriever. For safety reasons, Google’s video generator is not allowed to generate images of people. So far.
