Large Language Model (LLM)
A type of foundation model trained on massive text datasets that can understand, generate, and reason about human language. LLMs like GPT-4, Claude, and Gemini use transformer architecture and typically have billions of parameters, enabling capabilities from summarization to coding to complex reasoning.
Why It Matters
LLMs are the most widely deployed AI technology in enterprise settings today. Their governance challenges — hallucination, prompt injection, training data provenance, and emergent capabilities — define the frontier of AI governance practice.
Example
An enterprise deploys an LLM-powered assistant for internal use. The governance program must address: what data employees can share with the model, how hallucinated outputs are handled in decision-making, who is accountable for LLM-assisted work products, and how the model's behavior is monitored over time.
Think of it like...
An LLM is like a well-read colleague with a perfect memory for patterns but no ability to verify facts — they can write brilliantly, reason plausibly, and occasionally state confidently wrong things with the same polish as correct ones.
Related Terms
Foundation Model
A large AI model trained on broad data at scale that can be adapted to a wide range of downstream tasks. Foundation models serve as the base upon which specialized applications are built.
Hallucination (AI)
When a generative AI model produces outputs that are factually incorrect, fabricated, or inconsistent with reality, while presenting them with apparent confidence. Hallucinations are an inherent property of how language models generate text — they produce statistically plausible sequences, not verified facts.
Prompt Injection
A security vulnerability where malicious input is crafted to override or manipulate an LLM's system prompt or instructions, causing it to behave in unintended ways.