Prohibited AI Practice
AI applications banned outright under the EU AI Act because they pose an unacceptable risk to fundamental rights. Prohibited practices include social scoring, manipulative AI that exploits vulnerable populations, untargeted scraping of facial images from the internet or CCTV footage, and most real-time remote biometric identification in publicly accessible spaces by law enforcement.
Why It Matters
These aren't suggestions; they're absolute bans carrying the EU AI Act's highest penalties (up to EUR 35 million or 7% of global annual turnover, whichever is higher). Organizations must screen their AI portfolios to ensure nothing crosses these lines.
Example
A social media platform using AI to build behavioral profiles that score users' 'trustworthiness' and restrict access to services based on those scores would fall under the prohibited social scoring category. In the final text of the Act, the ban is not limited to public authorities: it also covers private operators where the score leads to detrimental treatment in contexts unrelated to where the data was collected, or to treatment disproportionate to the behavior in question.
Think of it like...
Prohibited AI practices are like banned substances in sports: there's no 'acceptable dose' and, with only narrow carve-outs, no exception. If you're caught, the penalty is severe regardless of intent.
Related Terms
EU AI Act
The European Union's comprehensive regulatory framework for artificial intelligence, establishing rules based on risk levels. It categorizes AI systems from minimal to unacceptable risk with corresponding compliance requirements.
High-Risk AI System
Under the EU AI Act, an AI system used in sensitive domains — critical infrastructure, education, employment, essential services, law enforcement, migration, or the administration of justice — that must meet strict requirements for risk management, data governance, transparency, human oversight, and accuracy before market deployment.