<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Responsible AI Inspector Blog</title>
<style>
body {
font-family: system-ui, -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;
background: #0f172a;
color: #e5e7eb;
line-height: 1.7;
margin: 0;
padding: 0;
}
header {
background: linear-gradient(135deg, #2563eb, #9333ea);
padding: 3rem 1.5rem;
text-align: center;
}
header h1 {
margin: 0;
font-size: 2.5rem;
}
header p {
max-width: 700px;
margin: 1rem auto 0;
font-size: 1.1rem;
opacity: 0.9;
}
main {
max-width: 900px;
margin: 3rem auto;
padding: 0 1.5rem;
}
article {
background: #020617;
border-radius: 12px;
padding: 2rem;
margin-bottom: 2.5rem;
box-shadow: 0 10px 25px rgba(0,0,0,0.4);
}
h2 {
color: #60a5fa;
margin-top: 0;
}
h3 {
color: #a78bfa;
margin-bottom: 0.5rem;
}
.section {
margin-bottom: 1.5rem;
}
.label {
font-weight: bold;
color: #facc15;
}
footer {
text-align: center;
padding: 2rem 1rem;
font-size: 0.9rem;
opacity: 0.7;
}
</style>
</head>
<body>
<header>
<h1>RESPONSIBLE AI REPORT</h1>
<p>
Owen Nyabicha
</p>
</header>
<main>
<!-- CASE 1 -->
<article>
<h2>Case 1: Hiring Bot Screening Job Applicants</h2>
<div class="section">
<h3>🔍 What’s happening</h3>
<p>
A company uses an AI-powered hiring system to screen job applicants.
The AI analyzes CVs and work history to decide who moves forward in
the hiring process. The goal is to save time and reduce human workload.
However, the system consistently rejects more female applicants,
especially those with career gaps.
</p>
</div>
<div class="section">
<h3>⚠️ What’s problematic</h3>
<p>
The AI has learned bias from historical hiring data in which uninterrupted
career paths were favored. This disadvantages women, caregivers, and
others who take career breaks. The system also lacks transparency:
applicants are not told why they were rejected, and the company
may not even be aware of the bias.
</p>
</div>
<div class="section">
<h3>🛠️ One improvement idea</h3>
<p>
Audit and rebalance the training data so career gaps are evaluated
fairly. Add human review for edge cases and provide clearer
explanations of how AI decisions are made.
</p>
</div>
</article>
<!-- CASE 2 -->
<article>
<h2>Case 2: School Proctoring AI Flagging Students</h2>
<div class="section">
<h3>🔍 What’s happening</h3>
<p>
A school uses AI-based proctoring software during online exams.
The system tracks eye movement and facial behavior to detect cheating.
Students who look away frequently are flagged for possible misconduct.
</p>
</div>
<div class="section">
<h3>⚠️ What’s problematic</h3>
<p>
The AI assumes there is only one “normal” way to focus. Neurodivergent
students are flagged more often, leading to unfair accusations and
emotional stress. Constant monitoring also raises privacy concerns,
especially when no one is clearly accountable for the system’s decisions.
</p>
</div>
<div class="section">
<h3>🛠️ One improvement idea</h3>
<p>
Treat AI detections as signals rather than final decisions.
Require human review for all flags and provide accommodations
or opt-out options for neurodivergent students.
</p>
</div>
</article>
<!-- CONCLUSION -->
<article>
<h2>🧠 Final Verdict</h2>
<p>
These cases show how AI can unintentionally reinforce bias when fairness,
transparency, and accountability are overlooked. Responsible AI requires
thoughtful data design, human oversight, and respect for individual
differences.
</p>
<p>
AI isn’t malicious — it learns from us. That’s why responsible design
matters.
</p>
</article>
</main>
<footer>
© 2026 • Owen Nyabicha • Responsible AI
</footer>
</body>
</html>