A manager uses AI to draft performance reviews for their team. The AI produces well-structured, professional evaluations based on the notes the manager provides. The employees never know AI was involved. Is this a problem?
It depends on who you ask — and that is exactly the challenge. Rules and policies cannot cover every situation. Ethics fills the gaps.
Four Principles for AI Ethics at Work
1. Fairness
Could this AI output treat anyone unfairly? If you use AI to screen job applications, does it evaluate all candidates by the same relevant criteria — or has it learned to favor patterns that correlate with demographics rather than qualifications? If you use AI to distribute workload, does it account for all team members equitably?
Fairness does not mean identical treatment. It means that differences in outcome are based on relevant factors, not on irrelevant ones like name, gender, age, or background.
2. Transparency
Would you be comfortable if others knew AI was involved? The manager drafting performance reviews might argue that AI is just a tool, like a word processor. But employees might feel differently — they expect their manager's personal assessment, not a machine's interpretation of notes. Transparency does not always mean disclosure. It means being willing to disclose if asked.
3. Accountability
You own the output, not the AI. If an AI-drafted email contains an error, you cannot blame the tool. If an AI-generated report includes a flawed recommendation, the responsibility is yours. This principle is simple to state and sometimes uncomfortable to live by — but it is non-negotiable.
4. Respect for People
Every person affected by your AI-assisted work deserves to be treated with dignity. Using AI to monitor employees without their knowledge, to generate responses to emotional messages, or to make consequential decisions about people's lives without human review — these cross a line that technology cannot erase.
The Newspaper Test
When you are unsure about an AI use case, try this: imagine a journalist writing a story about how your organization uses AI in this specific way. Would the headline be neutral or damning? If the headline would embarrass you, your team, or your organization, reconsider the approach.
A Five-Step Decision Framework
When you face an AI ethics question without a clear rule to follow:
- Step 1: Identify the stakes. Who could be affected and how?
- Step 2: Check principles. Does this pass the fairness, transparency, accountability, and respect tests?
- Step 3: Check rules. Are there policies, regulations, or professional standards that apply?
- Step 4: Apply the newspaper test. Would you be comfortable with this being public?
- Step 5: Consider alternatives. Is there a way to achieve the same goal with fewer ethical concerns?
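The five steps above can be sketched as a simple checklist. This is purely illustrative — the `EthicsReview` structure and `evaluate` function are hypothetical names, not any standard tool, and real ethical judgment cannot be reduced to booleans; the sketch only shows the order in which the questions gate one another:

```python
from dataclasses import dataclass

# Hypothetical checklist mirroring the five-step framework.
# All names here are illustrative, not from any real library.

@dataclass
class EthicsReview:
    stakeholders: list           # Step 1: who could be affected, and how
    passes_principles: bool      # Step 2: fairness, transparency, accountability, respect
    complies_with_rules: bool    # Step 3: policies, regulations, professional standards
    passes_newspaper_test: bool  # Step 4: comfortable if this were public
    better_alternative: bool     # Step 5: a lower-risk route to the same goal exists

def evaluate(review: EthicsReview) -> str:
    """Walk the five steps in order and return a coarse recommendation."""
    if not review.stakeholders:
        return "re-examine: every AI use affects someone"
    if not (review.passes_principles and review.complies_with_rules):
        return "stop or escalate"
    if not review.passes_newspaper_test:
        return "reconsider the approach"
    if review.better_alternative:
        return "prefer the lower-risk alternative"
    return "proceed, with human accountability"
```

Note that the checks are sequential: failing an early step short-circuits the later ones, just as the framework asks you to identify stakes and principles before worrying about optics.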
When to Escalate
Some situations are too complex or too consequential for individual judgment. Escalate when: the decision affects many people, involves protected information, could create legal liability, or when you are genuinely unsure after working through the framework. Asking for guidance is not a sign of weakness — it is a sign of professional maturity.
Ethics is not about finding perfect answers. It is about taking the time to ask the right questions before you act. In a world where AI makes it easy to move fast, the willingness to pause and reflect is itself an ethical practice.