AI Governance

Right to Explanation

The principle that individuals affected by AI-driven decisions should receive a meaningful explanation of how the decision was made, including the logic involved, the significance of the processing, and its envisaged consequences. Grounded in the EU General Data Protection Regulation (GDPR), notably Article 22 on automated decision-making and the "meaningful information about the logic involved" wording of Articles 13-15, and reinforced by the transparency requirements of the EU AI Act.

Why It Matters

People deserve to understand why an algorithm denied their loan, rejected their job application, or flagged them for investigation. The right to explanation is both a legal requirement and a trust-building mechanism.

Example

When a job applicant is rejected by an AI screening tool, the company provides an explanation: 'Your application was assessed on years of relevant experience, technical skills match, and education level. The primary factor in the decision was insufficient match on required cloud architecture experience.'
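One way a screening tool could surface the "primary factor" named in an explanation like this is to compare each factor's score against its required threshold and report the largest shortfall. A minimal sketch, assuming illustrative factor names, scores, and thresholds (not drawn from any real screening system):

```python
def explain_rejection(scores, thresholds):
    """Return a plain-language explanation naming the factor with the
    largest shortfall below its required threshold."""
    # Factors where the applicant scored below the required level.
    shortfalls = {f: thresholds[f] - scores[f]
                  for f in thresholds if scores[f] < thresholds[f]}
    if not shortfalls:
        return "All required factors were met."
    primary = max(shortfalls, key=shortfalls.get)
    factors = ", ".join(thresholds)
    return (f"Your application was assessed on {factors}. "
            f"The primary factor in the decision was "
            f"insufficient match on {primary}.")

# Hypothetical applicant: scores and required thresholds are assumptions.
scores = {"relevant experience": 0.7,
          "technical skills": 0.8,
          "cloud architecture experience": 0.3}
thresholds = {"relevant experience": 0.6,
              "technical skills": 0.7,
              "cloud architecture experience": 0.8}
print(explain_rejection(scores, thresholds))
```

The point of the sketch is that a meaningful explanation lists the factors actually assessed and names the decisive one, rather than returning an opaque accept/reject flag.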

Think of it like...

The right to explanation is like requiring a teacher to explain why a student received a particular grade — 'the computer said so' is not an acceptable answer when the outcome matters.

Related Terms