Partner, leader, or boss? We asked ChatGPT to design a bot and here’s what happened



European researchers are working on the design of a tomato-picking robot. Adrien Buttier/EPFL

At a time when some fear that artificial intelligence (AI) could drive the human species to extinction, one might imagine that an AI designing a robot would resemble Frankenstein creating the Terminator. Or perhaps the reverse.

But what if, dystopian future or not, we had to collaborate with machines to solve problems? How would that collaboration work? Who would be the boss, and who the employee?

Having ingested many episodes of Black Mirror, as well as Arthur C. Clarke’s novel “2001: A Space Odyssey”, I’d bet the machine would be the boss.

“We wanted ChatGPT to design a bot that was actually useful”

However, a real experiment of this kind, conducted by researchers, has yielded unexpected results that could have a major impact on collaboration between machines and humans.

Professor Cosimo Della Santina and PhD student Francesco Stella, both of TU Delft, together with Josie Hughes of EPFL in Switzerland, conducted an experiment to design a robot in partnership with ChatGPT to address a major societal problem. “We wanted ChatGPT to design not just a robot, but a robot that was actually useful,” Della Santina said of the work, published in Nature Machine Intelligence.

Thus began a series of question-and-answer sessions to determine what human and machine could design together. Large language models (LLMs) like ChatGPT are very good at processing huge amounts of text and data, and can produce coherent responses at lightning speed.

The fact that ChatGPT can do this with technically complex information makes it even more impressive – and a real boon for anyone looking for a super-powered search assistant.

Work with machines

When the European researchers asked ChatGPT to identify some of the challenges facing human society, the AI pointed to the question of ensuring a stable food supply in the future.

There ensued a back-and-forth between the researchers and the chatbot, until ChatGPT chose tomatoes as the crop that robots could grow and harvest – and, in doing so, have a significant positive impact on society.




ChatGPT made some helpful suggestions on how to design the gripper so it can handle delicate objects like tomatoes. Adrien Buttier/EPFL

This is an area where the AI partner brought real added value. How? By making suggestions in areas such as agriculture, where its human counterparts had no real expertise. Choosing the crop with the greatest economic value for automation would otherwise have required time-consuming research by the scientists.

“Even though ChatGPT is a language model and its code generation is text-based, it provided significant ideas and insights for physical design, and showed great potential as a sounding board to stimulate human creativity,” said Hughes of EPFL.

The humans were then tasked with selecting the most interesting and appropriate directions to pursue, based on the options ChatGPT provided.

Smart design

But it is in finding a way to harvest tomatoes that ChatGPT really shone. Tomatoes and similarly delicate fruits – yes, the tomato is a fruit, not a vegetable – pose the biggest challenge when it comes to harvesting.




The AI-designed gripper in action. Adrien Buttier/EPFL

When asked how to harvest tomatoes without damaging them, the chatbot did not disappoint, coming up with original and useful solutions.

Realizing that any part that comes into contact with the tomatoes needs to be soft and flexible, ChatGPT suggested using silicone or rubber. It also pointed to CAD software, molds and 3D printers as ways to build these flexible artificial hands, and suggested a claw or ball shape as design options.

The result is impressive. The collaboration between AI and humans produced a functional robot capable of picking tomatoes with dexterity – no small feat considering how easily they are damaged.

The dangers of partnership

This unique collaboration also raised many questions that will become increasingly important in any human-machine design partnership.

A partnership with ChatGPT offers a truly interdisciplinary approach to problem solving. However, depending on how the partnership is structured, you might get different results, each with substantial implications.

For example, LLMs could provide all the details needed to design a robot, with the human acting merely as an implementer. In this approach, the AI becomes the inventor, enabling a non-specialist layperson to engage in robot design.

Lack of human control

This relationship resembles the researchers’ experience with the tomato-picking robot. Although they were stunned by the success of the collaboration, they noticed that the machine did much of the creative work. “We found that our role as engineers shifted towards more technical tasks,” Stella said.

This lack of human control is a source of danger. “In our study, ChatGPT identified tomatoes as the crop most deserving of harvesting by a robotic harvester,” Hughes said. “However, this result may be biased towards crops that are better covered in the literature, as opposed to those for which there is a real need. When decisions are made outside the engineer’s field of knowledge, this can lead to significant ethical, technical or factual errors.”

And this concern, in a nutshell, is one of the serious dangers of using LLMs. Their seemingly miraculous responses are only possible because they have been fed a certain type of content and then asked to regurgitate parts of it.

Would you entrust the design of a robot to a machine that hallucinates?

The answers essentially reflect the biases – good or bad – of the people who designed the system and of the data it was fed. This bias means that the historical marginalization of certain segments of society, such as women and people of color, is often replicated in LLMs.

And then there is the problem of hallucination in LLMs, where the AI simply makes things up when faced with questions to which it has no ready answer.

There’s also the growing issue of proprietary information being used without permission, as several lawsuits against OpenAI show.

Nonetheless, a balanced approach – where LLMs play more of a supporting role – can be rewarding and productive, forging vital cross-disciplinary connections that could not have been made without the bot. That said, you’ll need to engage with AIs the way you do with your kids: diligently check all their homework, especially when an answer seems glib.


Source: “ZDNet.com”
