Artificial Intelligence

Bias (AI)

Systematic errors in AI system outputs that produce unfair or skewed outcomes. AI bias can originate from training data (historical, representation, measurement, sampling, or aggregation bias), from model design choices, or from the deployment context. Bias is not always obvious and can compound through the AI lifecycle.

Why It Matters

AI bias is among the most common sources of real-world AI harm. It has led to discriminatory hiring tools, racially biased criminal risk scores, and gender-skewed credit limits, each a governance failure that proper testing and monitoring could have caught.

Example

An AI-powered healthcare allocation system was found to assign lower risk scores to Black patients than to equally sick white patients, because it used healthcare spending (a proxy that reflects systemic access disparities) rather than actual illness severity as its training signal.
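The proxy-variable failure above can be made concrete with a minimal sketch. Everything here is hypothetical (the cohort, the groups, the `proxy_risk_score` model): it shows how a score trained on spending rather than severity produces a group gap at equal sickness, and how a basic matched-severity audit surfaces it.

```python
# Hedged sketch with hypothetical data: a risk score built on spending
# (a proxy) scores equally sick patients differently across groups when
# one group's access to care, and thus spending, is suppressed.

def proxy_risk_score(spending: float) -> float:
    """Stand-in model: scores risk purely from healthcare spending."""
    return spending / 1000.0

# Hypothetical cohort: identical illness severity, unequal access to care.
patients = [
    {"group": "A", "severity": 7, "spending": 9000},  # full access
    {"group": "A", "severity": 7, "spending": 8500},
    {"group": "B", "severity": 7, "spending": 5000},  # access barriers
    {"group": "B", "severity": 7, "spending": 4500},
]

def mean_score(group: str) -> float:
    scores = [proxy_risk_score(p["spending"])
              for p in patients if p["group"] == group]
    return sum(scores) / len(scores)

# A basic disparity audit: compare mean scores at matched severity.
# Any sizable gap flags the proxy, since true illness is identical.
gap = mean_score("A") - mean_score("B")
print(f"score gap at equal severity: {gap:.2f}")  # → score gap at equal severity: 4.00
```

The audit is deliberately crude (it compares group means at one severity level); the point is that the bias is invisible in the model's code and only appears when outputs are disaggregated by group.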

Think of it like...

AI bias is like a crooked measuring tape — every measurement looks precise, but they're all systematically off in the same direction, and the error compounds the more you build on it.

Related Terms