Hallucination
When a model confidently generates false or fabricated information not supported by its training data or context.
Full Definition
Hallucination occurs when a language model produces factually incorrect, invented, or internally inconsistent content with apparent confidence. The term is borrowed from psychology: like a hallucinating person, the model produces vivid 'perceptions' with no basis in reality.

Hallucinations arise because LLMs are trained to produce fluent, plausible text rather than to verify factual claims. They are most common for obscure facts, recent events, numerical data, citations, and specific names.

Mitigation strategies include retrieval-augmented generation (grounding), chain-of-thought prompting, self-consistency sampling, tool use for real-time data, and output verification pipelines. Hallucination is widely considered the most important limitation of current LLMs for production use.
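One of the mitigation strategies above, self-consistency sampling, can be sketched in a few lines: sample several answers to the same prompt and keep the majority answer, so an isolated hallucination is voted out. This is a minimal illustration, not a production pipeline; the `fake_model` function is a hypothetical stand-in for a real LLM call.

```python
import random
from collections import Counter

def fake_model(prompt: str, rng: random.Random) -> str:
    # Hypothetical stand-in for an LLM call: returns the correct answer
    # most of the time, but occasionally "hallucinates" a wrong one.
    return rng.choices(["Paris", "Lyon"], weights=[0.8, 0.2])[0]

def self_consistency(prompt: str, n_samples: int = 25, seed: int = 0) -> str:
    # Sample several independent answers and return the most frequent one.
    # Isolated hallucinations are outvoted by the consistent majority.
    rng = random.Random(seed)
    answers = [fake_model(prompt, rng) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is the capital of France?"))
```

Real implementations sample from the same model at a nonzero temperature; the majority vote works for short, discrete answers, while long-form outputs need a semantic-equivalence check instead of exact matching.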
Examples
A model confidently citing a 2019 paper by 'Smith et al.' with a realistic-looking DOI that does not exist.
An LLM stating that a country's current prime minister is a person who left office two years before the model's training cutoff.
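The fabricated-citation example above shows why output verification must check that a reference actually exists, not merely that it looks plausible. As a hedged sketch, the check below accepts any string matching the common `10.<registrant>/<suffix>` DOI shape (a simplification of the full DOI syntax), so a hallucinated DOI passes a purely syntactic test:

```python
import re

# Matches the common "10.<registrant>/<suffix>" DOI shape.
# This is a simplification, not the full DOI specification.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(s: str) -> bool:
    # Syntactic check only: a hallucinated DOI can pass this test
    # while resolving to nothing at doi.org. A real verification
    # pipeline must also resolve the DOI against an external source.
    return bool(DOI_PATTERN.match(s))

print(looks_like_doi("10.1234/fake.2019.001"))  # True: well-formed, may not exist
print(looks_like_doi("not-a-doi"))              # False: malformed
```

This is exactly the gap that grounding and verification pipelines close: format checks catch malformed output, but only a lookup against an authoritative source catches a confident fabrication.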
Apply this in your prompts
PromptITIN automatically applies hallucination-mitigation techniques, such as grounding and output verification, to build better prompts for you.
Related Terms
Grounding — Connecting model outputs to verifiable external sources to reduce hallucination.
RAG (Retrieval-Augmented Generation) — Augmenting model responses by retrieving relevant documents from an external knowledge base.
AI Safety — The interdisciplinary field studying how to develop AI systems that are safe, reliable …