Fairness (AI)
The principle that AI systems should produce equitable outcomes and not discriminate against individuals or groups based on protected characteristics. Multiple mathematical definitions of fairness exist — demographic parity, equalized odds, individual fairness, and others — and they frequently conflict with each other, making fairness a design choice, not a single metric.
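Two of the metrics named above, demographic parity and equalized odds, can be computed directly from a model's predictions. The sketch below uses small hypothetical arrays (group labels, true outcomes, and model decisions are invented for illustration) to show what each metric actually measures:

```python
import numpy as np

# Hypothetical data: group membership, actual outcomes, and model decisions.
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([1, 0, 1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 0])

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction (e.g. approval) rates between groups."""
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return abs(rate0 - rate1)

def equalized_odds_gaps(y_true, y_pred, group):
    """Differences in true-positive and false-positive rates between groups."""
    gaps = []
    for label in (1, 0):  # label=1 gives the TPR gap, label=0 the FPR gap
        mask0 = (group == 0) & (y_true == label)
        mask1 = (group == 1) & (y_true == label)
        gaps.append(abs(y_pred[mask0].mean() - y_pred[mask1].mean()))
    return tuple(gaps)

print(demographic_parity_gap(y_pred, group))        # gap in approval rates
print(equalized_odds_gaps(y_true, y_pred, group))   # (TPR gap, FPR gap)
```

Note that each metric conditions on different things (predictions alone vs. predictions given the true outcome), which is precisely why satisfying one can violate another.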
Why It Matters
Fairness isn't just an ethical ideal — it's a legal requirement in many domains (lending, hiring, housing). But choosing the right fairness definition for your context is a governance decision that requires stakeholder input, not just a technical checkbox.
Example
A lending model optimized for 'demographic parity' (equal approval rates across groups) might conflict with 'predictive parity' (equal accuracy across groups). The governance team must decide which definition of fairness aligns with legal requirements and organizational values for this specific use case.
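The conflict in this example can be made concrete with arithmetic. The sketch below assumes two hypothetical applicant pools with different base rates of creditworthiness and a model that ranks applicants perfectly; the pool sizes and approval quota are invented for illustration. Enforcing equal approval rates (demographic parity) then forces unequal precision among approvals (a predictive-parity violation):

```python
# Hypothetical applicant pools with different base rates of repayment.
pool = {
    "A": {"applicants": 100, "creditworthy": 60},
    "B": {"applicants": 100, "creditworthy": 30},
}

# Enforce demographic parity: approve exactly 45 applicants per group,
# taking the most creditworthy first (a perfectly ranked model, for simplicity).
APPROVALS = 45
results = {}
for name, g in pool.items():
    approved_good = min(APPROVALS, g["creditworthy"])  # true positives
    approval_rate = APPROVALS / g["applicants"]        # equal across groups
    ppv = approved_good / APPROVALS                    # precision among approvals
    results[name] = (approval_rate, ppv)
    print(name, approval_rate, round(ppv, 3))
```

Group A's approvals are all creditworthy (precision 1.0), while group B's quota can only be filled by approving 15 applicants who will not repay (precision 2/3). Whenever base rates differ, some such trade-off is unavoidable, which is why the choice between definitions is a governance decision rather than a modeling detail.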
Think of it like...
AI fairness is like dividing a cake fairly — equal slices for everyone (demographic parity), slices proportional to hunger (equity), or slices based on who contributed ingredients (merit) are all 'fair' by different definitions. The right choice depends on context.
Related Terms
Bias (AI)
Systematic errors in AI system outputs that produce unfair or skewed outcomes. AI bias can originate from training data (historical, representation, measurement, sampling, or aggregation bias), from model design choices, or from the deployment context. Bias is not always obvious and can compound through the AI lifecycle.
Disparate Impact
A facially neutral policy, practice, or algorithm that disproportionately harms a group based on a protected characteristic — even without discriminatory intent. In AI, disparate impact commonly occurs when models trained on historically biased data reproduce or amplify those patterns in their outputs.
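Disparate impact is often screened for with a selection-rate ratio; in US employment contexts, the EEOC's "four-fifths rule" treats a ratio below 0.8 as evidence of adverse impact. The sketch below computes that ratio for hypothetical counts (the numbers and the 0.8 threshold's applicability to any given domain are assumptions, not legal advice):

```python
def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of selection rates: lower-rate group over higher-rate group."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical hiring screen: 30 of 100 pass from one group, 60 of 100 from another.
ratio = disparate_impact_ratio(30, 100, 60, 100)
print(ratio)        # selection-rate ratio between the two groups
print(ratio < 0.8)  # flags possible adverse impact under the four-fifths rule
```

Note that the ratio looks only at outcomes, not intent — which mirrors the legal definition: a facially neutral screen can fail this test even when no discriminatory intent exists.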