Hallucination (AI)
When a generative AI model produces outputs that are factually incorrect, fabricated, or inconsistent with reality, while presenting them with apparent confidence. Hallucinations are an inherent property of how language models generate text — they produce statistically plausible sequences, not verified facts.
Why It Matters
Hallucinations undermine the reliability of every AI-generated output. In high-stakes domains like healthcare, legal advice, and financial reporting, an AI hallucination presented as fact can cause real-world harm and significant liability.
Example
A legal AI assistant asked to find relevant case law generates citations to cases that don't exist — complete with fabricated case names, docket numbers, and judicial opinions. A lawyer who submits these to court faces sanctions and professional liability.
Think of it like...
AI hallucination is like a very confident tour guide who makes up historical facts when they don't know the answer — the delivery is polished and convincing, but the information is fabricated.
Related Terms
Large Language Model (LLM)
A type of foundation model trained on massive text datasets that can understand, generate, and reason about human language. LLMs like GPT-4, Claude, and Gemini use transformer architecture and typically have billions of parameters, enabling capabilities from summarization to coding to complex reasoning.
RAG (Retrieval-Augmented Generation)
An AI architecture that combines a language model with external knowledge retrieval to ground its responses in specific, verifiable sources. Instead of relying solely on what the model memorized during training, RAG retrieves relevant documents at query time and generates answers based on that retrieved context.
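The retrieve-then-generate flow can be sketched in a few lines. This is a minimal illustration, not a production pipeline: a toy keyword-overlap scorer stands in for a real vector store, and `answer()` only assembles the grounded prompt that would be sent to an LLM. The names `tokens`, `retrieve`, and `answer` are this sketch's own, not any library's API.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever;
    a real RAG system would use embeddings and a vector index)."""
    return sorted(documents,
                  key=lambda d: len(tokens(query) & tokens(d)),
                  reverse=True)[:k]

def answer(query: str, documents: list[str]) -> str:
    """Assemble a prompt instructing the model to answer only from
    the retrieved context, not from memorized training data."""
    context = "\n".join(retrieve(query, documents))
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{context}\n\n"
            f"Question: {query}")

docs = [
    "RAG retrieves relevant documents at query time.",
    "Transformers use attention mechanisms.",
    "Grounding ties model outputs to verifiable sources.",
]
prompt = answer("How does RAG use documents?", docs)
print(prompt)
```

The key design point is that the context is fetched at query time, so the model can cite current, verifiable sources instead of relying on whatever it memorized during training.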
Grounding
The practice of connecting AI model outputs to verifiable sources of information, ensuring responses are based on factual data rather than the model's potentially unreliable internal knowledge.
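One simple way to operationalize grounding is to check model output against the source documents after generation. The sketch below is a deliberately crude illustration (shared-vocabulary overlap, not real claim attribution): it flags any output sentence that shares no content words with the sources. All function names here are hypothetical, invented for this example.

```python
import re

def content_words(text: str) -> set[str]:
    """Lowercase words longer than 3 characters (crude content filter)."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def ungrounded_sentences(output: str, sources: list[str]) -> list[str]:
    """Flag output sentences sharing no content words with any source.
    A real grounding check would use entailment models or citations."""
    source_vocab = set().union(*(content_words(s) for s in sources))
    return [s for s in re.split(r"(?<=[.!?])\s+", output)
            if s and not (content_words(s) & source_vocab)]

sources = ["The Eiffel Tower was completed in 1889."]
out = "The Eiffel Tower opened in 1889. It is made of solid gold."
flagged = ungrounded_sentences(out, sources)
print(flagged)  # the fabricated claim is flagged; the grounded one is not
```

Even this rough heuristic catches the fabricated sentence while passing the one supported by the source, which is the core idea: outputs should be traceable to verifiable data, not accepted on the model's confidence alone.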