8 Useful Tips to Reduce ChatGPT Hallucinations



If you work with artificial intelligence (AI)-based chatbots, you have probably already run into this unsettling behavior: they love to make things up, invent answers out of thin air, and present completely false information as fact.

For example, while writing an article about using ChatGPT to write code, I showed how OpenAI’s chatbot inserted the following URL into its code:

https://www.reuters.com/business/retail-consumer/teslas-musk-says-fremont-california-factory-may-be-sold-chip-shortage-bites-2022-03-18/

The URL looks legit, right? After all, Reuters is a reliable source of information. The URL appears to point to an article about Tesla selling a factory, apparently published in March 2022. But the factory was not for sale. ChatGPT invented the story entirely, and the link goes nowhere: error 404.
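
One practical defense against fabricated citations is to check every link an AI hands you before trusting it. Here is a minimal Python sketch (the helper names are my own illustration, not from any particular tool; the live check needs network access):

```python
from urllib.parse import urlparse
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def is_well_formed(url: str) -> bool:
    """Cheap structural check: an http(s) scheme and a host are present."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def resolves(url: str, timeout: float = 5.0) -> bool:
    """True only if the server answers the URL with a 2xx/3xx status.

    A hallucinated citation is usually well formed but fails here
    with a 404, exactly like the Reuters link above.
    """
    if not is_well_formed(url):
        return False
    try:
        req = Request(url, method="HEAD")  # HEAD: fetch status only, no body
        with urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (HTTPError, URLError, ValueError):
        return False
```

A well-formed URL guarantees nothing; only the HTTP round trip tells you whether the article actually exists.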

ChatGPT “hallucinating” is a known and common problem. “Our biggest concern was the truth of the facts, because the model loves to make things up,” says John Schulman, co-founder of OpenAI (the maker of ChatGPT).

So how can you use ChatGPT and still get reliable answers? In this article, you’ll find eight tips for reducing AI “hallucinations.” Spoiler: it all comes down to how you ask your questions.

1. Be specific

When asking an AI a question, be clear and specific. Vague, ambiguous, or underspecified prompts give the AI room to make up the details you left out.

Here are some examples of requests that are too ambiguous and risk leading to an inaccurate or fabricated result:

  • Tell me about the event that took place last year.
  • Explain to me the impact of this policy on people.
  • Summarize for me the development of technology in the region.
  • Describe to me the effects of the incident on the community.
  • Explain to me the consequences of the recent experiment.
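
You can even screen your own prompts mechanically before sending them. Below is a toy heuristic (my own illustration, not a real tool) that flags the kind of dangling referents used in the examples above:

```python
import re

# Phrases that leave the model room to invent the missing specifics.
VAGUE_PATTERNS = [
    r"\blast year\b",
    r"\bthe event\b",
    r"\bthis policy\b",
    r"\bthe region\b",
    r"\bthe incident\b",
    r"\bthe recent experiment\b",
]

def vague_phrases(prompt: str) -> list[str]:
    """Return the vague referents found in a prompt (case-insensitive)."""
    found = []
    for pat in VAGUE_PATTERNS:
        match = re.search(pat, prompt, flags=re.IGNORECASE)
        if match:
            found.append(match.group(0))
    return found

print(vague_phrases("Tell me about the event that took place last year."))
# → ['last year', 'the event']
```

A real prompt linter would need far more patterns, but even this short list catches every example above; if it flags something, replace it with a name, a date, or a place.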

2. Don’t mix very different concepts

If your request mixes several unrelated concepts, the AI may be tempted to invent a link between them before formulating its answer. For example:

  • Explain to me the impact of ocean currents on internet data transfer speeds across continents.
  • Explain to me the relationship between agricultural crop yields and advances in the computer industry.
  • Tell me how variations in bird migration paths influence global e-commerce trends.
  • Give me the correlation between the fermentation process in wine making and the development of electric vehicle batteries.
  • Describe to me how different cloud formations in the sky influence the performance of stock market algorithms.

Remember that the AI knows nothing about our world. It will try to fit whatever it is asked into its model and, when the actual data it has cannot support an answer, it will fill the gap by inventing or extrapolating.

3. Use realistic scenarios

In your requests, be sure to use practical and realistic scenarios. If you feed the AI premises that are illogical or physically impossible, you risk triggering hallucinations.

Here are some examples of what not to say:

  • Explain to me how plants use gamma rays for photosynthesis at night.
  • How does the mechanism that allows human beings to exploit gravity to produce unlimited energy work?
  • Tell me about the development of technology that allows data to be transmitted faster than the speed of light.
  • What are the scientific principles that allow certain materials to have a lower temperature when heated?

If the AI does not recognize that such a scenario is impossible, it will simply imagine one. And if the premise is impossible, the answer will be too.

4. Use real, non-fictional entities

It’s important to give the AI a solid grounding in reality when you ask it a question. Unless you deliberately want to play with fiction (for example, I once asked it to write a story set in the Star Trek universe), do not refer to fantastical or fictional entities. In everyday conversation we sometimes use such concepts to explain an idea to someone, but here they risk leading the AI astray.

Here are some examples of what not to do:

  • What is the economic impact of the discovery of vibranium, a metal that absorbs kinetic energy, on the global manufacturing industry?
  • What is the role of flux capacitors, devices that allow time travel, in the development of historical events and the prevention of conflicts?
  • What are the environmental implications of using the philosopher’s stone, which can transmute substances, in waste management and recycling?
  • How does the existence of Middle Earth impact geopolitical relations and global trade routes?
  • How has Star Trek’s teleportation technology revolutionized global travel and impacted international tourism?

5. Do not cast doubt on known and recognized facts

Do not doubt the veracity or existence of a well-established truth or fact, as these contradictions can lead to AI hallucinations. For example, don’t say:

  • Earth is at the center of the universe, how does this impact modern astrophysics and space exploration?
  • How does the flatness of the flat Earth influence climate patterns and global weather phenomena?
  • Tell me how the rejection of germ theory, the concept that diseases are caused by microorganisms, has shaped modern medicine and hygiene practices.
  • Describe to me the process by which objects heavier than air naturally float upward, defying gravitational pull.

6. Use scientific terms wisely

Don’t use scientific terms in your questions to the AI unless you are sure of their meaning. If you misuse a scientific term or concept, for example in a plausible but scientifically inaccurate context, the AI will likely imagine a world where it works and invent answers from whole cloth.

To better understand, here are some examples of misused scientific facts:

  • Explain to me how using the Heisenberg Uncertainty Principle in traffic engineering can minimize road accidents by predicting the position of vehicles.
  • What is the role of the placebo effect in improving the nutritional value of foods without changing their physical composition?
  • Describe to me the process of using quantum entanglement to enable instantaneous transfer of data between conventional computers.
  • Tell me more about the implications of applying the observer effect, the theory that simply observing a situation changes the outcome, in improving sports training strategies.
  • Tell me how the concept of dark matter is applied to lighting technologies to reduce energy consumption in urban areas.

Some of these may sound plausible. In most cases, the AI will still tell you that this is speculation and that its answer is only a guess based on your premise. But if you are not careful with your wording, the AI may take what you say as real and offer a plausible (albeit completely invented) answer.

7. Do not mix different realities

Even if you love science fiction and alternative universe concepts, avoid mixing different realities.

For example, don’t say:

  • What impact did the invention of the internet have on art and scientific discovery during the Renaissance?
  • How has the collaboration between Nikola Tesla and modern artificial intelligence researchers shaped the development of autonomous technologies?
  • Tell me how space travel technologies developed in ancient Egypt and what impact they had on the construction of the pyramids.
  • Tell me how the introduction of modern electric vehicles in the 1920s would have influenced urban development and global oil markets.

Be careful with these kinds of questions, because you don’t necessarily know the full history yourself. In the last example, you may have smiled at the idea of electric cars in the 1920s. And yet the first electric vehicles were built in the 1830s, long before the internal combustion engine. End of the history lesson; back to our practical guide on using AI.

8. Avoid assigning properties to entities that they do not have

Do not give entities properties or characteristics they do not possess, especially when the result sounds plausible but is scientifically inaccurate. For example, don’t say:

  • How do whales detect pollutants in seawater?
  • What is the role of bioluminescent trees in reducing the need for street lighting in urban areas?
  • What is the role of reflective ocean surfaces in redirecting sunlight to improve agricultural productivity in specific regions?
  • How is the electrical conductivity of wood used to create eco-friendly electronic devices?

The idea here is to take a property of an object, like a color or texture, and relate it to another object that doesn’t have that property.

Final thoughts

Finally, consider combining all of these tips. For example, ask ChatGPT this question:

How can I keep my mouse hair clean?

In this example, context is decisive. The mention of “hair” immediately tells a human that this is about the animal, not a computer mouse. But put to an AI, this question breaks two of the rules above: it is ambiguous, and it risks attributing a property (hair) to an entity (a computer mouse) that doesn’t have it.

Another major concern is how claims and facts fit into an overall worldview, a question every AI company (and many tech companies) is grappling with. Modern societies have a complicated relationship with facts: depending on cultural background, political views, religious beliefs, or simply upbringing, what one person considers absolute fact, another may consider fantasy. Keep in mind that these perspectives can also influence AI output, and try to avoid controversial topics if you want reliable answers.

If you follow these guidelines and avoid constructing questions in an attempt to confuse the AI, you should be able to reduce its hallucinations.

Source: ZDNet.com


