<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>1.1 Unfair discrimination and misrepresentation - Vulnerability (Actors)</title>
<link href="https://fonts.googleapis.com/css2?family=Figtree:wght@300;400;500;600;700&display=swap" rel="stylesheet">
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: 'Figtree', -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif;
background-color: #ffffff;
color: #000000;
line-height: 1.3;
}
.container {
max-width: 1200px;
margin: 0 auto;
padding: 8px;
flex: 1;
min-width: 200px;
overflow-wrap: break-word;
word-break: break-word;
}
h1 {
text-align: center;
margin-bottom: 8px;
color: #000000;
font-weight: 600;
font-size: 18px;
}
.selection-title {
text-align: center;
font-size: 14px;
font-weight: 600;
color: #666666;
margin-bottom: 10px;
}
.nav-pills {
display: flex;
flex-wrap: wrap;
gap: 4px;
margin-bottom: 15px;
justify-content: center;
}
.nav-pill {
background: #f8f9fa;
border: 1px solid #e0e0e0;
border-radius: 25px;
padding: 12px 20px;
cursor: pointer;
font-family: 'Figtree', sans-serif;
font-size: 16px;
font-weight: 500;
transition: all 0.3s ease;
color: #000000;
}
.nav-pill:hover {
background: #e9ecef;
border-color: #000000;
}
.nav-pill.active {
background: #000000;
color: white;
border-color: #000000;
}
.entity-section {
display: none;
}
.entity-section.active {
display: block;
}
.content-grid {
display: flex;
width: 100%;
gap: 4px;
}
.content-column {
background: #ffffff;
border: 1px solid #e0e0e0;
border-radius: 8px;
padding: 8px;
flex: 1;
min-width: 200px;
overflow-wrap: break-word;
word-break: break-word;
}
.criteria-header {
font-size: 12px;
font-weight: 600;
margin-bottom: 15px;
padding-bottom: 10px;
border-bottom: 2px solid;
}
.criteria-header.higher {
color: #FF0000;
border-bottom-color: #FF0000;
}
.criteria-header.lower {
color: #2E5C8A;
border-bottom-color: #2E5C8A;
}
.summary-section {
margin-bottom: 20px;
}
.summary-text {
margin-bottom: 15px;
font-weight: 500;
color: #000000;
font-size: 15px;
}
.quote-details {
margin-top: 15px;
}
.quote-toggle {
cursor: pointer;
color: #000000;
font-weight: 500;
font-size: 16px;
background-color: #ffff00;
padding: 10px 15px;
border-radius: 4px;
display: inline-block;
}
.quote-toggle:hover {
color: #333333;
}
.quote-list {
margin-top: 15px;
padding-left: 20px;
}
.quote-list li {
margin-bottom: 12px;
font-size: 16px;
padding: 10px 15px;
line-height: 1.3;
color: #000000;
}
@media (max-width: 768px) {
.content-grid {
gap: 4px;
}
.nav-pills {
justify-content: flex-start;
}
.nav-pill {
font-size: 16px;
padding: 4px 8px;
}
}
</style>
</head>
<body>
<div class="container">
<h1>1.1 Unfair discrimination and misrepresentation - Vulnerability (Actors)</h1>
<div class="selection-title">Select an actor:</div>
<div class="nav-pills">
<button class="nav-pill active" data-target="AIDeveloperGeneralpurposeAI">
AI Developer (General-purpose AI)
</button>
<button class="nav-pill" data-target="AIDeployer">
AI Deployer
</button>
<button class="nav-pill" data-target="AIGovernanceActor">
AI Governance Actor
</button>
<button class="nav-pill" data-target="AIUser">
AI User
</button>
<button class="nav-pill" data-target="AIDeveloperSpecializedAI">
AI Developer (Specialized AI)
</button>
<button class="nav-pill" data-target="AIInfrastructureProvider">
AI Infrastructure Provider
</button>
<button class="nav-pill" data-target="AffectedStakeholder">
Affected Stakeholder
</button>
</div>
<div class="content-sections">
<div class="entity-section active" id="AIDeveloperGeneralpurposeAI">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Comments highlighted that general-purpose developers face heightened vulnerability due to serving wide user bases, which increases their social responsibility and regulatory obligations. They are exposed to liability, reputational damage, and efficiency tradeoffs when balancing fairness with accuracy. The broader reach of general-purpose systems means risks from their products have greater societal impact, making regulatory and legal exposure more significant.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (2)</summary>
<ul class="quote-list">
<li>"AI Developer (General purpose) and AI Developer are aimed at a wide range of user groups, with greater social responsibility and the obligation to meet regulatory requirements. So the risks generated by AI products will have the greatest impact on them. So the risks they face should be extremely vulnerable."</li>
<li>"AI Developer (general and specialized): exposed to indirect risks of liability, reputation risks, and also efficiency trade-off risks in the competition with other AI Developers on the fairness/accuracy Pareto frontier."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> One expert argued that developers are comparatively less vulnerable because they have stronger means to defend themselves: they employ highly skilled AI specialists, whereas deployers may be small entities with little technical capability and little control over the products they build on.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (1)</summary>
<ul class="quote-list">
<li>"In my assessment, I considered the power and means of each actors to defend themselves from vulnerability harm. I think AI developers have better means, since they have highly skilled employees specialized in AI, while Deployers might be small entities with no technical skills (e.g., just deploying an API based on a big tech product on which they have little control)."</li>
</ul>
</details>
</div>
</div>
</div>
</div>
<div class="entity-section" id="AIDeployer">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Deployers are vulnerable because they often carry full regulatory responsibility for discriminatory AI, despite operating downstream without interpretability tools.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (2)</summary>
<ul class="quote-list">
<li>"AI Deployers and Governance Actors: Moderately to highly vulnerable. While they operate downstream, they lack interpretability tools or clarity-based calibration protocols to detect discrimination once embedded. This creates latent harm that becomes invisible at scale."</li>
<li>"AI Deployers are extremely vulnerable because they may face enterprise existential harms due to deploying discriminatory AI. The regulatory responsibilities are often with the deployers, making their exposure high. The penalties may go beyond reputational, and they may be sued out of existence."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Some questioned whether deployers are truly vulnerable to discrimination itself, noting they are the actors *causing* rather than experiencing the harm. While legal implications exist, deployers could potentially operate without noticing hidden biases affecting their users, with impacts primarily on reputation and trust rather than direct harm.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (3)</summary>
<ul class="quote-list">
<li>"In my assessment, I considered the power and means of each actors to defend themselves from vulnerability harm. I think AI developers have better means, since they have highly skilled employees specialized in AI, while Deployers might be small entities with no technical skills (e.g., just deploying an API based on a big tech product on which they have little control)."</li>
<li>"While I increased my rating for deployers, it is often the recipients of their services that are most impacted, rather than the deployer directly. This will have some effect on trust, and reputation, but deployers could operate without noticeable incident even while their users are subject to hidden bias."</li>
<li>"About the vulnerability of AI Deployers, while I agree that they'll be affected by legal implications, the question itself sounds a little tricky. I don't see AI Deployers as a vulnerable actor for direct impact of discrimination (they are the actors causing the problem, not the actors being affected by it). Legal implications will, by consequence arise from any unlawful behaviour AI deployers would have"</li>
</ul>
</details>
</div>
</div>
</div>
</div>
<div class="entity-section" id="AIGovernanceActor">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Respondents highlighted vulnerability stemming from differences in standards and pressure (political, legal, reputational), lack of interpretability tools to detect embedded discrimination, and downstream positioning that makes harm invisible at scale. A significant theme was that risks to governance actors are underappreciated—discrimination undermines institutional trust, creates gaps between policy intent and delivery, and if bias control extends to AI policy itself, poses existential risks to governance systems.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (3)</summary>
<ul class="quote-list">
<li>"Governance Actors: Moderately to highly vulnerable. While they operate downstream, they lack interpretability tools or clarity-based calibration protocols to detect discrimination once embedded. This creates latent harm that becomes invisible at scale."</li>
<li>"AI Governance Actors: Vulnerable because differences in standards (political, legal, and/or reputational pressure)"</li>
<li>"The risks to AI governance actors appear underappreciated. Many aspects of society are founded on trust, in due process, in reliability of systems and rules. So "reputational risk" is not just bad news stories or court cases, it aggregates into systemic distrust and breakdown of institutional authority and capability. Model bias also represents a disjoint between policy intent and policy delivery, if an actor can control the bias of outputs, they can functionally control policy. Should that control ever be gained over AI policy itself, the risk becomes existential."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text">No expert comments were provided for this category.</p>
</div>
</div>
</div>
</div>
<div class="entity-section" id="AIUser">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Multiple respondents emphasized users are extremely vulnerable as they directly experience discrimination through lost opportunities, unfair treatment, and negative judgments in critical areas like hiring, loans, and legal decisions. Users often lack input into system design and have no means to contest misrepresentation. Increasing integration of invisible AI in products means users have diminishing choice about exposure.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (5)</summary>
<ul class="quote-list">
<li>"AI Users and Affected Stakeholders: Extremely vulnerable. These groups often have no input into system design and no means to contest misrepresentation once it occurs. This is especially true for neurodivergent individuals, who are systematically underrepresented in training data and over-penalized by statistical shortcuts."</li>
<li>"AI Users: moderately vulnerable because they are directly suffer the outputs, however they can decide how much they trust or not trust on AI."</li>
<li>"Ultimately the end user of AI models and systems are the ones most vulnerable to biased and misrepresented outputs. My read of the EU AI act essentially tipped the scale towards AI end users by putting the onus on them to ensure their users are not harmed."</li>
<li>"We can foresee that users are relatively more vulnerable to discrimination, considering the mean calibration. However, it is a common place when actual denials of services are not common use cases."</li>
<li>"I increased my ratings for AI user and Deployers. For AI user, I previously considered that not all end users have high exposure even though they are likely to be the most sensitive. On exposure, they may have some choice about how exposed they are. That is likely evolving, such that many products or services they want to use will have AI integrated and sometimes invisible to the end user."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Some argued users are not as vulnerable as other groups, noting most people aren't extremely sensitive to this risk compared to others, with outcomes typically being either mild for many or extreme for few. Existing safeguards like equality laws, financial regulations, and GDPR provide recourse and reduce exposure. Others distinguished between users and affected stakeholders, noting users aren't necessarily sensitive depending on the use case.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (2)</summary>
<ul class="quote-list">
<li>"AI user: Despite the comments and ratings I decided not to update my rating from highly to extremely vulnerable, because when reading the comments and based on the examples (e.g. credit) and arguments (e.g. recourse) provided, it seems that most other experts answered as if this were Affected stakeholders. AI users are exposed but not necessarily sensitive to the discrimination risk, it depends on the type of use case."</li> <li>"AI user: Reasons for Lower Vulnerability
- Most people are not super sensitive to unfair discrimination and misrepresentation, at least compared to many other risks (sensitive defined in the way the survey). E.g. yes unconcious bias decisions result in worse outcomes, but rarely very extreme outcomes to many people (e.g. either mild outcomes for many people, or extreme for few people, but rarely combination).
- There are already many safeguards against most of the risks here, reducing the expose many people will have to bias and discrimination. E.g. Equality Act makes this unlawful and gives people recourse, financial regulations already require deployments to prove fairness enforced by regulator, GDPR and similar regulations require fairness and transparency of data processing. All of these reduce the likelihood and severity of this risk, and make it more likely to be rectified/individuals to get redress.
To be clear I think people are still moderately vulnerable, just not as high as other risks"</li>
</ul>
</details>
</div>
</div>
</div>
</div>
<div class="entity-section" id="AIDeveloperSpecializedAI">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Respondents emphasized several key concerns: liability and reputational risks from deploying discriminatory systems, particularly regarding competition on the fairness-accuracy tradeoff; security vulnerabilities in open-source dependencies and supply chain attacks that could be exploited; and unique pressures faced by developers working on government applications who may face surveillance or restricted freedoms. Some noted that developers' technical expertise actually increases their responsibility and exposure to regulatory requirements.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (5)</summary>
<ul class="quote-list">
<li>"AI Developers (General-purpose and Specialized): These actors are highly vulnerable due to exposure to biased training data and systemic lack of semantic alignment protocols. Many rely on probabilistic methods without grounding or contextual validation, embedding discrimination at the foundational level of model construction - often without awareness."</li>
<li>"AI Developer (Specialized AI) updated to Moderated vulnerability. Upon reflecting and thinking deeper, as a developer, even when writing a function, setting a variable, a developer needs to think not only optimisation but also think through a open source function or library how can it be exploited. For AI development, there are multiple programming languages and most of them are open source (Python, Java, PHP, etc). One of the MITRE ATT&CK technique (ID: T1195.001) is Compromise Software Dependencies and Development Tools. As such AI developers have to be aware of vulnerabilities before using the open source software packages and libraries within the AI model development lifecycle."</li>
<li>"Sometime specialized AI system developers may be highly vulnerable particularly those working for the special AI applications for the Government, as Govt may start spying on them or may subject to surveillance , may restrict their freedom to visit countries or may face other issues"</li>
<li>"AI Developer (general and specialized): exposed to indirect risks of liability, reputation risks, and also efficiency trade-off risks in the competition with other AI Developers on the fairness/accuracy Pareto frontier."</li>
<li>"In my assessment, I considered the power and means of each actors to defend themselves from vulnerability harm. I think AI developers have better means, since they have highly skilled employees specialized in AI, while Deployers might be small entities with no technical skills (e.g., just deploying an API based on a big tech product on which they have little control)."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text">No expert comments were provided for this category.</p>
</div>
</div>
</div>
</div>
<div class="entity-section" id="AIInfrastructureProvider">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Respondents emphasized that infrastructure providers are increasingly targeted by attackers who understand businesses rely on cloud services (SaaS, IaaS, PaaS). They face resource development attacks and third-party infrastructure compromises. Despite often being treated as "neutral pipes," they play critical roles in enabling discriminatory systems at scale yet remain largely unaccountable. Startup providers also face monopolistic pressures from dominant companies.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (3)</summary>
<ul class="quote-list">
<li>"Moderately vulnerable - often treated as neutral pipes, yet they play a critical role in access, storage, and scale of discriminatory systems. They remain largely unaccountable in current governance models."</li>
<li>"AI Infrastructure Provider update Highly vulnerable. The reason being, attackers these days focus on infrastructure as they understand businesses use SaaS, IaaS and PaaS to scale their business needs during peak periods. MITRE ATT&CK tactic named Resource Development (ID: T1584) explain about how an attacker can compromise third-party infrastructure. As most of the AI models these days are on cloud infrastructure, it is critical for the AI model service provider to harden the AI infrastructure."</li>
<li>"I believe AI infrastructure providers can also be subject to high vulnerability because of the monopoly of big companies. Start-ups or minor companies might not be able to find their place in the sector because of the dominance of big companies providing services to big deployers."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text">No expert comments were provided for this category.</p>
</div>
</div>
</div>
</div>
<div class="entity-section" id="AffectedStakeholder">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Comments consistently emphasized that affected stakeholders are extremely vulnerable due to lack of technical knowledge and experience, no input into system design, and no means to contest misrepresentation. Discriminatory outputs directly impact livelihoods through denied credit, employment, and other critical decisions. Neurodivergent individuals face particular vulnerability due to underrepresentation in training data.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (3)</summary>
<ul class="quote-list">
<li>"AI Users and Affected Stakeholders: Extremely vulnerable. These groups often have no input into system design and no means to contest misrepresentation once it occurs. This is especially true for neurodivergent individuals, who are systematically underrepresented in training data and over-penalized by statistical shortcuts."</li>
<li>"Affected stakeholder are directly affected by AI outputs. Biased or discriminatory AI output can impact the ability to gain credit, get a job impacting livelihood of affected stakeholders."</li>
<li>"- Affected Stakeholders: Vulnerable because they are lack of technical knowledge, experience."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text">No expert comments were provided for this category.</p>
</div>
</div>
</div>
</div>
</div>
</div>
<script>
// Pill navigation: clicking a pill highlights it and shows the entity section
// whose id matches the pill's data-target attribute.
document.addEventListener('DOMContentLoaded', function() {
const pills = document.querySelectorAll('.nav-pill');
const sections = document.querySelectorAll('.entity-section');
pills.forEach(pill => {
pill.addEventListener('click', function() {
// Clear all active states, then activate the clicked pill and its section.
pills.forEach(p => p.classList.remove('active'));
sections.forEach(s => s.classList.remove('active'));
this.classList.add('active');
const targetSection = document.getElementById(this.getAttribute('data-target'));
if (targetSection) {
targetSection.classList.add('active');
}
});
});
});
</script>
</body>
</html>