AI Ethics
As AI systems become more powerful and pervasive, the ethical implications of their design and deployment have moved from academic curiosity to urgent practical concern. Every ML practitioner must understand the ethical frameworks that guide responsible AI development, study real-world failures, and adopt practices that minimize harm.
Why Ethics in AI?
Ethical Frameworks Applied to AI
Three major philosophical traditions offer different lenses for evaluating AI decisions:
1. Consequentialism (Outcomes-Based)
Judges actions by their outcomes. An AI system is ethical if it produces the greatest good for the greatest number.
2. Deontology (Rules-Based)
Judges actions by whether they follow moral rules, regardless of outcome. Certain actions are inherently right or wrong.
3. Virtue Ethics (Character-Based)
Focuses on the character and intentions of the moral agent. Asks: "What would a virtuous AI practitioner do?"
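The three lenses above can be contrasted in code. This is an illustrative sketch of a hypothetical content-moderation decision (all function names and the rule/virtue sets are assumptions for the example, not a real API); its point is that the lenses can disagree on the same action.

```python
# Hypothetical sketch: one decision, three ethical lenses.

def consequentialist_check(harm_prevented: int, harm_caused: int) -> bool:
    """Consequentialism: approve if the net outcome is positive."""
    return harm_prevented - harm_caused > 0

def deontological_check(violates_rules: set) -> bool:
    """Deontology: approve only if no moral rule is broken,
    regardless of how good the outcome would be."""
    forbidden = {"deception", "discrimination", "privacy_violation"}
    return not (violates_rules & forbidden)

def virtue_check(intentions: set) -> bool:
    """Virtue ethics: approve if the action reflects only the
    virtues a practitioner should embody."""
    virtues = {"honesty", "fairness", "care"}
    return intentions <= virtues

# The lenses can disagree: a deceptive action with a good net outcome
# passes the consequentialist test but fails the deontological one.
print(consequentialist_check(10, 2))       # True: net good outcome
print(deontological_check({"deception"}))  # False: a rule is broken
```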
No Single Right Framework
In practice these lenses often conflict: an action with good net outcomes may break a moral rule, and a rule-following action may cause avoidable harm. Responsible practitioners weigh all three perspectives rather than committing dogmatically to one.
Case Studies in AI Ethics
Case 1: Hiring Bias – Amazon's Resume Screener (2018)
Amazon built an ML model to screen resumes, trained on 10 years of hiring data. The model learned to penalize resumes containing the word "women's" (e.g., "women's chess club") and downrank graduates of all-women's colleges, because the historical data reflected a male-dominated hiring pattern.
Lessons learned:
- Historical training data encodes historical bias; a model trained on biased hiring outcomes will faithfully reproduce them.
- Audit features and learned weights for proxies of protected attributes before deployment, not after harm occurs.
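One way to surface this kind of proxy bias is a term-probe audit: insert a term into otherwise identical text and measure how the score moves. A minimal sketch, where `toy_model` is a deliberately biased stand-in for any trained resume scorer (an assumption for illustration, not Amazon's actual model):

```python
# Hypothetical probe audit: does a single term alone change the score?

def probe_term_effect(score_resume, resume: str, term: str) -> float:
    """Score difference caused by prepending a single probe term."""
    return score_resume(f"{term} {resume}") - score_resume(resume)

def toy_model(text: str) -> float:
    """Toy stand-in scorer that (wrongly) penalizes the word "women's"."""
    return 1.0 - 0.5 * ("women's" in text)

delta = probe_term_effect(toy_model, "chess club captain", "women's")
print(delta)  # -0.5: the term alone lowers the score, a clear red flag
```

Run across a list of gendered or demographically loaded probe terms, any consistently nonzero delta is evidence the model has learned a proxy for a protected attribute.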
Case 2: Facial Recognition – Racial Bias
Research by Joy Buolamwini and Timnit Gebru (2018) showed that commercial facial recognition systems had error rates of 0.8% for light-skinned men but 34.7% for dark-skinned women. The training data was overwhelmingly composed of lighter-skinned faces.
Lessons learned:
- Aggregate accuracy hides subgroup failures; always report metrics disaggregated by demographic group.
- The composition of the training data directly determines who the system works for.
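The kind of gap the Gender Shades study reported only shows up when error rates are computed per group. A minimal sketch of a disaggregated audit (the records below are illustrative toy data, not the study's dataset):

```python
# Disaggregated error-rate audit: compute error rate per subgroup
# instead of one aggregate number.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        errors[group] += predicted != actual
    return {g: errors[g] / totals[g] for g in totals}

records = [
    ("light_male", "M", "M"), ("light_male", "M", "M"),
    ("dark_female", "M", "F"), ("dark_female", "F", "F"),
]
print(error_rates_by_group(records))
# light_male: 0.0, dark_female: 0.5 -- the aggregate accuracy (75%)
# hides a subgroup that fails half the time.
```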
Case 3: Autonomous Vehicles – The Trolley Problem
Self-driving cars face real-world versions of the trolley problem: when a crash is unavoidable, whose safety should the algorithm prioritize? The MIT Moral Machine experiment surveyed millions of people across cultures and found significant cultural variation in preferences.
Lessons learned:
- Ethical preferences vary across cultures; there is no single universal answer to encode.
- Value judgments embedded in algorithms should be made explicit and debated publicly, not buried as implementation details.
# Stakeholder Impact Assessment Framework
# Use this template before deploying any AI system

class StakeholderImpactAssessment:
    """Framework for evaluating the ethical impact of an AI system."""

    def __init__(self, system_name: str, description: str):
        self.system_name = system_name
        self.description = description
        self.stakeholders = []
        self.risks = []
        self.mitigations = []

    def add_stakeholder(self, group: str, impact_type: str,
                        severity: str, description: str):
        """Add a stakeholder group affected by the system.

        Args:
            group: Name of the stakeholder group (e.g., "job applicants")
            impact_type: "positive", "negative", or "mixed"
            severity: "low", "medium", "high", or "critical"
            description: How this group is affected
        """
        self.stakeholders.append({
            "group": group,
            "impact_type": impact_type,
            "severity": severity,
            "description": description,
        })

    def add_risk(self, category: str, description: str,
                 likelihood: str, impact: str):
        """Document a potential risk.

        Categories: bias, privacy, safety, transparency,
        accountability, environmental
        """
        self.risks.append({
            "category": category,
            "description": description,
            "likelihood": likelihood,
            "impact": impact,
            "risk_score": self._calculate_risk_score(likelihood, impact),
        })

    def _calculate_risk_score(self, likelihood: str, impact: str) -> str:
        """Combine likelihood and impact into a qualitative risk score."""
        scores = {"low": 1, "medium": 2, "high": 3, "critical": 4}
        total = scores.get(likelihood, 0) * scores.get(impact, 0)
        if total >= 9:
            return "CRITICAL"
        elif total >= 6:
            return "HIGH"
        elif total >= 3:
            return "MEDIUM"
        return "LOW"

    def add_mitigation(self, risk_category: str, strategy: str,
                       owner: str):
        """Record a mitigation strategy and the team that owns it."""
        self.mitigations.append({
            "risk_category": risk_category,
            "strategy": strategy,
            "owner": owner,
        })

    def generate_report(self) -> str:
        """Render the assessment as a plain-text report."""
        lines = [
            "=== Stakeholder Impact Assessment ===",
            f"System: {self.system_name}",
            f"Description: {self.description}",
            "",
            f"--- Stakeholders ({len(self.stakeholders)}) ---",
        ]
        for s in self.stakeholders:
            lines.append(
                f"  [{s['severity'].upper()}] {s['group']} "
                f"({s['impact_type']}): {s['description']}"
            )
        lines.append("")
        lines.append(f"--- Risks ({len(self.risks)}) ---")
        for r in self.risks:
            lines.append(
                f"  [{r['risk_score']}] {r['category']}: "
                f"{r['description']}"
            )
        lines.append("")
        lines.append(f"--- Mitigations ({len(self.mitigations)}) ---")
        for m in self.mitigations:
            lines.append(
                f"  {m['risk_category']}: {m['strategy']} "
                f"(Owner: {m['owner']})"
            )
        return "\n".join(lines)


# --- Example: Loan approval model ---
sia = StakeholderImpactAssessment(
    system_name="AutoLoan AI",
    description="ML model that recommends loan approval/denial"
)

sia.add_stakeholder(
    "loan applicants", "mixed", "critical",
    "Decisions directly affect financial access"
)
sia.add_stakeholder(
    "loan officers", "positive", "medium",
    "Reduces manual review workload"
)
sia.add_stakeholder(
    "underserved communities", "negative", "high",
    "Historical bias in training data may perpetuate discrimination"
)

sia.add_risk(
    "bias",
    "Model may discriminate based on race/gender proxies like zip code",
    "high", "critical"
)
sia.add_risk(
    "transparency",
    "Applicants cannot understand why they were denied",
    "high", "high"
)
sia.add_risk(
    "accountability",
    "Unclear who is responsible when model makes errors",
    "medium", "high"
)

sia.add_mitigation(
    "bias",
    "Regular fairness audits with disaggregated metrics",
    "ML Ethics Team"
)
sia.add_mitigation(
    "transparency",
    "Provide SHAP-based explanations for every denial",
    "Engineering Team"
)
sia.add_mitigation(
    "accountability",
    "Human-in-the-loop review for all denials",
    "Operations Team"
)

print(sia.generate_report())

Responsible Development Practices
The Responsible AI Development Checklist
1. Define the problem clearly – Is AI the right solution? Could a simpler approach work?
2. Audit your data – Check for representation gaps, historical biases, and privacy concerns
3. Conduct a stakeholder impact assessment – Who benefits? Who could be harmed?
4. Test for bias – Use disaggregated metrics across demographic groups
5. Build in transparency – Can you explain the model's decisions?
6. Establish human oversight – Humans should review high-stakes decisions
7. Plan for monitoring – How will you detect problems post-deployment?
8. Document everything – Model cards, data sheets, and decision logs
9. Create feedback channels – How can affected people report issues?
10. Plan for deprecation – When and how will you retire the system?
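Checklist item 8 (model cards) can be enforced mechanically. A minimal sketch, where the field names loosely follow the "Model Cards for Model Reporting" template rather than any standard schema, and the contact address is a hypothetical placeholder:

```python
# Minimal model-card sketch with a completeness check.
# Field names are illustrative assumptions, not a standard schema.
model_card = {
    "model_name": "AutoLoan AI",
    "intended_use": "Recommend loan approval/denial for human review",
    "out_of_scope": "Fully automated denials without human oversight",
    "training_data": "Historical loan decisions; known representation gaps",
    "metrics": ["approval-rate parity", "disaggregated FNR/FPR"],
    "ethical_considerations": "Zip code may proxy for race; audited quarterly",
    "feedback_channel": "appeals@example.com",  # hypothetical contact
}

def validate_card(card: dict) -> list:
    """Return the required fields that are missing or empty."""
    required = ["model_name", "intended_use", "training_data",
                "metrics", "ethical_considerations"]
    return [f for f in required if not card.get(f)]

print(validate_card(model_card))  # [] when every required field is filled
```

Wiring `validate_card` into CI makes "document everything" a gate rather than a suggestion: a model whose card is missing required fields simply does not ship.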