AI Hallucination
A phenomenon where an AI model generates plausible-sounding but factually incorrect or fabricated information.
What is AI hallucination?
Hallucination occurs when an AI model produces a response that sounds confident and convincing but is false or fabricated. The model "doesn't know that it doesn't know": it has no built-in fact-checking mechanism and simply predicts plausible text, so it can state fabrications with the same fluency as facts.
Examples of hallucinations
- Citations of books or scientific papers that do not exist
- Incorrect data, numbers, or historical events
- Fictitious people or companies presented as real
How to reduce hallucinations
- RAG (retrieval-augmented generation): The model answers only from verified documents you supply as context; see the first sketch after this list
- System prompt: Instructions such as "If you don't know, say so"; see the second sketch
- Lower temperature: More deterministic output leaves less room for fabrication, though it does not eliminate it; see the third sketch
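A minimal sketch of the RAG idea from the first item above. The retriever here is a toy keyword-overlap ranker standing in for real embedding search over a vector store; the document texts and function names are illustrative, not from any particular library:

```python
# Toy RAG sketch: retrieve relevant documents, then constrain the model to them.
def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question; return the top k.
    A real system would use embedding similarity instead."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Instruct the model to answer only from the retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{joined}\n\nQuestion: {question}"
    )

docs = [
    "The Eiffel Tower was completed in 1889.",
    "Python was created by Guido van Rossum.",
]
question = "When was the Eiffel Tower completed?"
prompt = build_prompt(question, retrieve(question, docs))
print(prompt)  # send this prompt to any chat model
```

Because the prompt explicitly permits "I don't know", the model has an allowed path other than inventing an answer when the context falls short.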
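A sketch of the system-prompt approach, assuming the OpenAI Python SDK and an API key in the `OPENAI_API_KEY` environment variable; the model name is a placeholder and any chat API with a system role works the same way:

```python
# System prompt that gives the model explicit permission to decline.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": 'Answer only if you are certain. If you do not know, '
                       'reply "I don\'t know" instead of guessing.',
        },
        {"role": "user", "content": "Who won the 1907 Tour de France?"},
    ],
)
print(response.choices[0].message.content)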
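And a sketch of the temperature lever, using the same assumed SDK setup as above. Lower temperature concentrates sampling on the most probable tokens, which tends to suppress creative but unfounded completions; the prompt here is chosen to be hallucination-prone on purpose:

```python
# Compare a low and a high sampling temperature on the same prompt.
from openai import OpenAI

client = OpenAI()
for temp in (0.0, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "user",
             "content": "Cite one scientific paper about transformer models."}
        ],
        temperature=temp,  # 0.0 = most deterministic; higher = more varied
    )
    print(f"temperature={temp}: {response.choices[0].message.content}")
```

Note that temperature only shifts the odds: a model missing the relevant knowledge can still fabricate a citation confidently at temperature 0.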