Ten minutes before the pre-read goes out, the general counsel forwards an email thread marked "URGENT."
A long-standing enterprise customer is demanding an explanation for why they were declined for renewal. They've attached screenshots of a customer service chat that reads like a final decision.
The customer is alleging discrimination.
Your team assumed the chatbot was "just summarizing policy." The transcripts suggest otherwise: it interpreted eligibility rules, referenced internal data, and delivered an outcome the customer reasonably took as a final decision.
Nobody planned to put AI on the board agenda this quarter.
Yet here it is: a reputational question, a legal question, a control question, and a trust question—all wrapped into one.
The Pattern
In these situations, the questions are predictable:
Who authorized this to operate as a decision-maker?
What safeguards were in place?
How do we prevent recurrence?
Directors aren't asking whether the model is "state of the art." They're asking something simpler, and harder: who gave this system its authority, and who answers for what it did?
That's the pattern. Most serious AI incidents become board issues not because a model is "too advanced," but because an organization accidentally gave it authority without giving it accountability.
AI Failures Are Governance Failures
When AI goes wrong, the root cause is rarely an obscure technical defect.
It's a familiar governance breakdown wearing unfamiliar clothes.
Purpose drift. What began as a drafting tool became a decision tool, and no one paused to reassess risk.
Ownership vacuum. The system sat between functions, "owned" by everyone—and therefore truly owned by no one.
Borrowed controls. Controls were lifted from traditional software and assumed sufficient. But the system behaved differently in edge cases.
Wrong metrics. Monitoring focused on uptime, not outcomes. The system was "working," yet producing unacceptable results.
Missing decision records. The organization could describe how the system worked, but not why it was allowed to do what it did.
These aren't technology problems. They're governance problems: mandate, accountability, approvals, boundaries, escalation, evidence.
The same categories directors already oversee for financial reporting, cybersecurity, and compliance.
AI just adds a new channel through which old problems surface faster.
What Boards Are NOT Expected to Do
Directors should not be expected to:
- Select models, vendors, or architectures
- Review training data or tune parameters
- Approve every use case or prompt library
- Act as a technical steering committee
- Replace management's responsibility with board-level micromanagement
Just as boards don't audit the books line by line, they're not there to debug systems.
The board's role is to set expectations, ensure material risks are controlled, and confirm that the organization can explain and defend its decisions to stakeholders.
The board doesn't need to become technical to be effective. It needs management to be clear, disciplined, and accountable.
Six Questions That Actually Matter
Reasonable oversight isn't a slogan. It's an evidence-based posture.
1. Where is AI used in material workflows?
The organization should have a current inventory of AI use cases affecting customers, employees, financial outcomes, or public statements.
If management can't produce a list, oversight is already behind reality.
2. What authority has been delegated to AI—and what is prohibited?
The critical distinction: assistive systems (drafting, summarizing, recommending) vs. decision systems (approving, denying, pricing).
Clear boundaries, not informal assumptions.
3. Who is accountable for each use case?
Accountability assigned to a named executive owner. "IT owns the tool" is not the same as "the business owns the outcome."
4. What controls exist before deployment?
Not a technology checklist—a decision checklist: risk tiering, testing against known failure modes and edge cases, sign-offs commensurate with risk, documented rationale, clear user instructions.
5. How is performance monitored, and how are exceptions handled?
Monitoring that focuses on outcomes: complaint rates, overrides, adverse events, escalation pathways.
A system that can't be monitored shouldn't be entrusted with material decisions.
6. Can the organization explain decisions to stakeholders?
When challenged by customers, regulators, or media—can management provide a coherent explanation of what the AI did, what humans were expected to do, and why the process was fair?
Explainability here isn't technical transparency. It's managerial defensibility.
The Real Test
A useful test is whether the organization can answer three questions after an incident—quickly and credibly:
What happened?
Why was it allowed to happen?
What will change?
If those answers are vague, governance is the problem.
The Bottom Line
AI will continue to enter organizations in small ways. A productivity feature here. An automation there.
The board issue arises when small tools accumulate into material judgment—without an accompanying increase in oversight.
The solution isn't alarm. It's discipline.
Your objective isn't to eliminate AI risk. It's to ensure the organization can justify the authority it delegates, and correct course when outcomes fall short.
If the system can decide, someone must be responsible.
P.S. If AI is used in customer-facing or decision-adjacent workflows, ask management to bring a one-page inventory and escalation path to the next meeting. If it takes more than a week to assemble, that is the signal.
Shameless plug:
Do you know someone who could benefit from learning the fundamentals of Artificial Intelligence (AI) and Machine Learning (ML)? You are in luck!
I have created a fundamentals course on AI/ML where I explain this complex topic in the simplest way possible - some of my students call it “oversimplifying”!
Click on this link and gift them the course - and yes, they do not need a technical background. I mean it - or they get their money back! Guaranteed!




