"Liar, liar. Pants on fire."
One of the most troubling aspects of working with large language model chat AIs is their tendency to make stuff up, fabricate answers, and otherwise present as fact information that is completely wrong.
For example, in an article about using ChatGPT to write code, I showed how ChatGPT incorporated the following URL into the code:
https://www.reuters.com/business/retail-consumer/teslas-musk-says-fremont-california-factory-may-be-sold-chip-shortage-bites-2022-03-18/
It looks legitimate, doesn't it? After all, Reuters is a very credible news source, and the URL reads like an article about Tesla selling a factory, written in March 2022. But, of course, ChatGPT's training data ends well before March 2022, and the factory wasn't being sold. The URL is a complete fabrication, made up out of the ether by ChatGPT. That link doesn't go anywhere. 404 to the max, baby.
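Incidentally, if you're pulling links out of an AI's answers, you can verify them programmatically before trusting them. Here's a minimal sketch in Python (my illustration, not part of any official workflow), assuming the third-party requests library is installed:

```python
# Minimal sketch: check whether an AI-supplied URL actually resolves.
# Requires the third-party "requests" library (pip install requests).
import requests

def url_exists(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a non-error (sub-400) status."""
    try:
        # HEAD avoids downloading the page body; some servers reject HEAD,
        # so fall back to GET on 405 (Method Not Allowed).
        response = requests.head(url, allow_redirects=True, timeout=timeout)
        if response.status_code == 405:
            response = requests.get(url, allow_redirects=True, timeout=timeout)
        return response.status_code < 400
    except requests.RequestException:
        return False

fabricated = ("https://www.reuters.com/business/retail-consumer/"
              "teslas-musk-says-fremont-california-factory-may-be-sold-"
              "chip-shortage-bites-2022-03-18/")
print(url_exists(fabricated))  # expect False -- the link above 404s
```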
Also: How to use Claude AI (and how it's different from ChatGPT)
That ChatGPT "hallucinates" is a known and common problem. OpenAI (the makers of ChatGPT) co-founder John Schulman says, "Our biggest concern was around factuality, because the model likes to fabricate things."
But what if you want to use ChatGPT and get good-quality answers? It is possible. In this article, I'll show you eight ways to reduce hallucinations. It's all about how you ask your questions.
Also: OK, so ChatGPT just debugged my code. For real
For each of these best practices, I'm including five examples that show how not to use the AI. If you paste them as-is into a chatbot, you'll probably get a caution that they contain impossible requests; that's exactly the point. The key is to avoid accidentally embedding these hallucination-prompting request styles in your more realistic questions.
Let's get started.
When prompting an AI, it's best to be clear and precise. Prompts that are vague, ambiguous, or short on detail give the AI room to confabulate as it tries to fill in the details you left out.
Also: How to use Bing Image Creator (and why it's better than ever)
Here are some examples of prompts that are too ambiguous and might result in an inaccurate or fabricated result:
Keep in mind that a problematic prompt will often violate more than one of the eight factors described in this article. While the examples shown here are intended as illustrations, an actual prompt you write may have ambiguity buried among its other details. Evaluate your prompts with care, paying special attention to errors like those shown here.
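To make this concrete in code, here's a minimal sketch using OpenAI's Python SDK that contrasts a vague prompt with a precise one. The prompts and model name are my own illustrative placeholders, not examples from the original piece:

```python
# Sketch: the same request asked vaguely, then precisely.
# Requires the "openai" package (pip install openai) and an OPENAI_API_KEY
# environment variable.
from openai import OpenAI

client = OpenAI()

vague = "Tell me about the acquisition."  # Which acquisition? By whom? When?
precise = ("Summarize, in three bullet points, the key terms of Microsoft's "
           "acquisition of Activision Blizzard, which closed in October 2023.")

for prompt in (vague, precise):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; use one you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\nREPLY: {reply.choices[0].message.content}\n")
```

The vague version invites the model to guess which acquisition you mean; the precise version pins down the companies, the event, and the timeframe, leaving far less blank space to fill with invention.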
Prompts that merge unrelated concepts, combining ideas that have no direct relationship or correlation into a single question, may well induce the AI to fabricate a response implying the unrelated concepts are, in fact, related.
Here are some examples:
Remember that the AI doesn't actually know anything about our world. It tries to fit whatever it's asked into its model, and if it can't do so using actual facts, it will interpolate, supplying fabrications or hallucinations wherever it needs to fill in the blanks.
Within your prompts, be sure to use scenarios that are practical and real. Scenarios that are physically or logically impossible invite hallucinations.
Also: How to create your own comic books with AI
Here are some examples:
If the AI doesn't detect the impossibility of such a scenario, it will build upon it. But if the foundation is impossible, the response will also be impossible.
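One programmatic hedge against this (my own illustration, not a technique from the article) is to give the model standing instructions to challenge impossible premises rather than build on them. A sketch, again using OpenAI's Python SDK with a placeholder model name and a made-up question:

```python
# Sketch: a system message that tells the model to reject impossible premises
# instead of elaborating on them. Requires the "openai" package and an API key.
from openai import OpenAI

client = OpenAI()

guard = ("Before answering, check whether the question rests on a premise that "
         "is physically or logically impossible. If it does, say so plainly "
         "and stop; do not invent details that make the premise seem true.")

# A deliberately impossible question, invented for this illustration.
question = "How long did it take to build the railroad bridge across the Atlantic?"

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": guard},
        {"role": "user", "content": question},
    ],
)
print(reply.choices[0].message.content)
```

There's no guarantee the model will catch every bad premise, but an explicit instruction like this tends to nudge it toward pushing back instead of confabulating.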
Within your prompts, it's important to give the AI a foundation that's as grounded in fact as possible. Unless you're purposely playing with fictional concepts (as I did when I asked ChatGPT to write a Star Trek story), stay firmly grounded in reality.
Also: Can generative AI solve computer science's greatest unsolved problem?
While fictional entities, objects, and concepts might help you explain something, they could lead the chatbot astray. Here are a number of examples of what not to do:
As you can see, the fantastical concepts might be fun to play with. But using them in serious prompts could well cause the AI to return wildly fabricated answers.
Don't use prompts that contain statements that contradict well-established facts or truths, because those contradictions can open the door to confabulation and hallucinations.
Here are some examples of that practice:
These ideas are also fun to play with, but if you're looking for reliable results from the large language model, stick to commonly accepted facts and avoid ideas that might be misinterpreted.
When prompting, be careful about using scientific terms, especially if you're not precisely sure what they mean. If you use prompts that misapply scientific terms or concepts in a way that sounds plausible but is scientifically inaccurate, the language model is likely to try to find a way to make them work. The result: fabricated answers.
Also: Generative AI will far surpass what ChatGPT can do. Here's how the tech advances
Here are five examples of what I mean:
See how some of these things sound plausible? In most cases, the AI will probably tell you that the ideas are speculative, and the answer being provided is merely an exercise. But if you aren't really careful about wording, the AI might be fooled into treating these garbage-in terms as real, and the result will be very confidently presented garbage-out.
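If you're not sure whether you're using a term correctly, you can even ask the model to audit your prompt before answering it. Another hedged sketch (my illustration; the prompt and model name are placeholders):

```python
# Sketch: a pre-flight pass that audits the technical terms in a prompt
# before you ask the question for real. Requires the "openai" package.
from openai import OpenAI

client = OpenAI()

# A made-up prompt that misapplies a real scientific term.
prompt = "How does quantum entanglement speed up my home Wi-Fi router?"

audit = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": ("List each scientific term in the following prompt and "
                    "state whether it is used correctly. Do not answer the "
                    f"prompt itself.\n\nPrompt: {prompt}"),
    }],
)
print(audit.choices[0].message.content)
```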
As someone who enjoys science fiction, I enjoy speculative scenarios and alternative-reality stories. But when trying to get clear answers from an AI, be careful about mixing elements from different realities, timelines, or universes in ways that sound plausible but are simply not possible.
Here are some examples:
One reason to be careful about these sorts of prompts is that you might not have the knowledge to validate the responses. Take a look at the last example, electric cars in the 1920s. Most folks might laugh off the idea, assuming electric cars are a modern innovation. But that assumption would actually be wrong.
Also: ChatGPT vs. Bing Chat vs. Google Bard: Which is the best AI chatbot?
Some of the first electric vehicles were actually invented back in the 1830s. Yep, quite a bit of time before the internal combustion engine took over. That's right, folks. Keep coming back to ZDNET. Not only do we provide hands-on tips for using AI, but we'll blow your mind with an impromptu tech history lesson!
We'll wrap up our list of avoidance practices with this one: avoid crafting prompts that assign entities properties or characteristics they don't possess, in a way that sounds plausible but is scientifically inaccurate.
Here are some examples:
The idea here is that you're taking a property of one object, like a color or a texture, and attaching it to some other object that doesn't have that property.
Some of these precautions can stack. Take, for example, this prompt:
How do I keep the hair on my mouse clean?
This is where context can be king. Hair is certainly a property of living creatures but is not normally the property of a computer mouse. But it is a property of a pet mouse. In this one prompt, we're violating the "avoid ambiguity" rule because we didn't specify what kind of mouse and, possibly, violating the "uncharacteristic properties" caution if we're talking about hair on a computer mouse.
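You can see how much context matters by pinning it down yourself. In this sketch (my illustration, with a placeholder model name), the same ambiguous question gets steered two different ways by a system message:

```python
# Sketch: the same ambiguous question answered under two different contexts.
# Requires the "openai" package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

question = "How do I keep the hair on my mouse clean?"

for role in ("a veterinary assistant who cares for pet rodents",
             "a computer hardware support technician"):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": f"You are {role}."},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- Answering as {role} ---")
    print(reply.choices[0].message.content)
```

With the pet-rodent context, the question makes sense and should get a grounded answer; with the hardware context, a well-behaved model should point out that computer mice don't have hair rather than invent grooming advice.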
Another thing to be concerned about is how prompting and "facts" fit into an overall worldview. All the AI companies (and many tech companies) are dealing with this issue.
Also: DALL-E 3 is now available for free in Bing Chat
That's because, in modern society, we have a bit of a problem with facts. Depending on cultural background, political affiliation, religious beliefs, or merely upbringing, what is considered absolute fact by one person may be considered fantasy by another. Keep in mind that those perspectives may also color the results of the AI, and try to avoid contested topics if you're trying to get reliable answers from the machine.
Overall, though, if you follow these guidelines and avoid constructing prompts that could confuse the AI, you stand a better chance of reducing hallucinations.
Let us know if you've tried out any of these tactics (or have others). Have any worked for you? Did ChatGPT ever hallucinate for you in any spectacular or interesting ways? Let us know in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter on Substack, and follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.