Socio-Technical System
A system comprising both technical components (algorithms, data, hardware, software) and social components (users, organizations, regulations, cultural norms, power dynamics). AI systems are inherently socio-technical — their behavior and impacts cannot be understood by examining the technology alone.
Why It Matters
Treating AI as purely technical leads to governance blind spots. A model that performs perfectly in the lab can fail in deployment because of how humans interact with it, how organizations incentivize its use, or how social context shapes its impact.
Example
A hospital AI that recommends treatment plans is a socio-technical system: its performance depends not just on model accuracy, but on whether doctors trust it, how nursing workflows incorporate its suggestions, whether patients understand AI-assisted diagnosis, and whether hospital administrators properly maintain the system.
Think of it like...
A socio-technical system is like a car — the engineering matters, but the actual safety outcome depends on the driver's behavior, road conditions, traffic laws, and cultural driving norms. You can't evaluate car safety by bench-testing the engine alone.
Related Terms
AI Governance
The frameworks, policies, processes, and organizational structures that guide the responsible development, deployment, and monitoring of AI systems within organizations and across society.
Human-in-the-Loop (HITL)
A system design pattern in which a human reviews and approves every AI output before any action is taken. HITL provides the strongest form of human oversight, but it constrains the system's speed and scalability to the pace of human review.
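The pattern above can be sketched in a few lines. This is a minimal, hypothetical illustration, not a production design: the names (`Suggestion`, `hitl_execute`, the reviewer callback) are invented for the example, and a real deployment would add queuing, audit logging, and timeout handling.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Suggestion:
    """A hypothetical AI recommendation awaiting human review."""
    action: str
    rationale: str

def hitl_execute(suggestion: Suggestion,
                 review: Callable[[Suggestion], bool],
                 execute: Callable[[str], None]) -> bool:
    """Run `execute` only if the human `review` callback approves.

    The gate blocks at the pace of human review: no action is
    taken on a rejected or unreviewed suggestion.
    """
    if review(suggestion):
        execute(suggestion.action)
        return True
    return False

# Usage: a stand-in callback plays the role of the human reviewer.
log = []
approved = hitl_execute(
    Suggestion("flag record for follow-up", "model confidence 0.92"),
    review=lambda s: "0.92" in s.rationale,  # stand-in for a human decision
    execute=log.append,
)
```

The key design choice is that the execute step is unreachable except through the review gate, which is what distinguishes HITL from advisory patterns where the AI output and the action path are merely logged side by side.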
Automation Bias
The tendency for humans to over-rely on automated systems, accepting AI outputs without sufficient scrutiny even when those outputs are wrong. Automation bias increases with system accuracy — the more often the AI is right, the less likely humans are to catch the times it's wrong.