<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>4.1 Disinformation, surveillance, and influence at scale - Vulnerability (Actors)</title>
<link href="https://fonts.googleapis.com/css2?family=Figtree:wght@300;400;500;600;700&display=swap" rel="stylesheet">
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: 'Figtree', -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif;
background-color: #ffffff;
color: #000000;
line-height: 1.3;
}
.container {
max-width: 1200px;
margin: 0 auto;
padding: 8px;
flex: 1;
min-width: 200px;
overflow-wrap: break-word;
word-break: break-word;
}
h1 {
text-align: center;
margin-bottom: 8px;
color: #000000;
font-weight: 600;
font-size: 18px;
}
.selection-title {
text-align: center;
font-size: 14px;
font-weight: 600;
color: #666666;
margin-bottom: 10px;
}
.nav-pills {
display: flex;
flex-wrap: wrap;
gap: 4px;
margin-bottom: 15px;
justify-content: center;
}
.nav-pill {
background: #f8f9fa;
border: 1px solid #e0e0e0;
border-radius: 25px;
padding: 12px 20px;
cursor: pointer;
font-family: 'Figtree', sans-serif;
font-size: 16px;
font-weight: 500;
transition: all 0.3s ease;
color: #000000;
}
.nav-pill:hover {
background: #e9ecef;
border-color: #000000;
}
.nav-pill.active {
background: #000000;
color: white;
border-color: #000000;
}
.entity-section {
display: none;
}
.entity-section.active {
display: block;
}
.content-grid {
display: flex;
width: 100%;
gap: 4px;
}
.content-column {
background: #ffffff;
border: 1px solid #e0e0e0;
border-radius: 8px;
padding: 8px;
flex: 1;
min-width: 200px;
overflow-wrap: break-word;
word-break: break-word;
}
.criteria-header {
font-size: 12px;
font-weight: 600;
margin-bottom: 15px;
padding-bottom: 10px;
border-bottom: 2px solid;
}
.criteria-header.higher {
color: #FF0000;
border-bottom-color: #FF0000;
}
.criteria-header.lower {
color: #2E5C8A;
border-bottom-color: #2E5C8A;
}
.summary-section {
margin-bottom: 20px;
}
.summary-text {
margin-bottom: 15px;
font-weight: 500;
color: #000000;
font-size: 15px;
}
.quote-details {
margin-top: 15px;
}
.quote-toggle {
cursor: pointer;
color: #000000;
font-weight: 500;
font-size: 16px;
background-color: #ffff00;
padding: 10px 15px;
border-radius: 4px;
display: inline-block;
}
.quote-toggle:hover {
color: #333333;
}
.quote-list {
margin-top: 15px;
padding-left: 20px;
}
.quote-list li {
margin-bottom: 12px;
font-size: 16px;
padding: 10px 15px;
line-height: 1.3;
color: #000000;
}
@media (max-width: 768px) {
.content-grid {
gap: 4px;
}
.nav-pills {
justify-content: flex-start;
}
.nav-pill {
font-size: 16px;
padding: 4px 8px;
}
}
</style>
</head>
<body>
<div class="container">
<h1>4.1 Disinformation, surveillance, and influence at scale - Vulnerability (Actors)</h1>
<div class="selection-title">Select an actor:</div>
<div class="nav-pills">
<button class="nav-pill active" data-target="AIDeveloperSpecializedAI">
AI Developer (Specialized AI)
</button>
<button class="nav-pill" data-target="AIInfrastructureProvider">
AI Infrastructure Provider
</button>
<button class="nav-pill" data-target="AIDeployer">
AI Deployer
</button>
<button class="nav-pill" data-target="AIUser">
AI User
</button>
<button class="nav-pill" data-target="AffectedStakeholder">
Affected Stakeholder
</button>
</div>
<div class="content-sections">
<div class="entity-section active" id="AIDeveloperSpecializedAI">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Experts highlight specialized developers are vulnerable to manipulation by foreign states, organized crime, and competitors despite assumptions of insulation. Their narrow models in sensitive sectors (health, defense, finance) face targeted disinformation campaigns and surveillance risks. Developers are as susceptible as end users to AI spycraft exploiting human biases - from passive surveillance (Samsung chip design leaks via ChatGPT) to active attacks. Specialized developers in ad-tech and surveillance domains are particularly exposed as their systems are built for persuasion.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (3)</summary>
<ul class="quote-list">
<li>"These responses reflect my view that vulnerability must be measured not only by technical exposure but by indirect causal chains and institutional fragility. Some actors remain underclassified if only direct harm or system access is considered.
AI Developer (Specialized): I selected "Highly vulnerable" due to the assumption that specialization provides insulation. In reality, narrow models can be manipulated or misused in targeted disinformation campaigns, particularly in sectors like health, defense, or finance."</li> <li>"This risk can manifest from foreign state actors, organised crime and competing corporations. So its not just about whether an AI company can manipulate users. Its about whether the humans inside any organisation, and hence the organisation, are vulnerable to disinformation, surveillance and influence at scale. They absolutely are.
This can range from passive surveilence, like when samsung workers accidentally leaked chip design information to openAI through chatgpt which has a competitor chip design arm. Through to active surveillance like when deployed agents are "living off the land" to conduct cyber attacks. Through to algorithmic influence campaigns like tiktok is accused of using on US teenagers to sway their views. A particularly intense form comes from the parasocial relationships people are forming with the sexbot avatars of corporations and foreign state actors, which can seduce people, make them fall in love, and then subtley extract information or sway their views.
There is no fundamental difference in the level of succeptability of humans to this based on where in the AI development chain they sit. Programmers, CEOs and ministers are just as vulnerable as end users, since its effectively AI spycraft preying on human biases and fears.
At a mass scale this technique could be used to puppeteer the emotions and thoughts of vast swathes of a country, giving the controlling entity proxy lobbying power within organisations and voting power within democracies."</li> <li>"Vulnerability maps to control over generation, distribution, and targeting plus exposure to manipulation. Specialized devs (Extreme) (ad-tech, psychographic, FR/OSINT) are built for persuasion/surveillance."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> [NO EXPERT COMMENTS PROVIDED]</p>
</div>
</div>
</div>
</div>
<div class="entity-section" id="AIInfrastructureProvider">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Experts emphasize infrastructure providers are highly vulnerable strategic targets for powerful actors seeking control through disinformation. They facilitate global-scale operations without ability to monitor data misuse, creating system-wide exposure. As central ecosystem players hosting AI operations, they're high-value targets that, if compromised, enable mass surveillance and disinformation while supplying elastic capacity for bot farms.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (3)</summary>
<ul class="quote-list">
<li>"These responses reflect my view that vulnerability must be measured not only by technical exposure but by indirect causal chains and institutional fragility. Some actors remain underclassified if only direct harm or system access is considered.
AI Infrastructure Provider: I selected "Highly vulnerable" because they facilitate global-scale data transfer and processing without being able to monitor the meaning or misuse of the data they carry. This latent exposure is structural and system-wide."</li> <li>"I've rated AI infrastructure providers as higher than the average as well - I think these providers (e.g Nvidia) are strategically important for some very powerful actors (e.g US & China), and so could be the subject of coordinated disinformation ops to undermine or to gain control of their operations."</li> <li>"Vulnerability maps to control over generation, distribution, and targeting plus exposure to manipulation. Infra (High) supplies elastic capacity for bot farms and content mills."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> One expert maintained their ratings, arguing that AI developers and infrastructure providers tend to have less exposure and vulnerability to disinformation risks: they face only the normal corporate risk of ingesting wrong information through LLMs and are well placed to mitigate it. Their main risk is reputational, and such risks are not currently affecting major AI developers. Although highly important in the information ecosystem, they are not vulnerable to disinformation in the way other institutions are. Deployers may take on more liability, but they dictate use and are more likely to perpetuate disinformation than to be harmed by it; reputational harm is unlikely to outweigh the added control over the information environment that such tools provide.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (1)</summary>
<ul class="quote-list">
<li>"I didn't change my ratings. I still believe that AI developers and infrastructure providers etc. tend to have less exposure and vulnerability to disinformation risks. They have the normal exposure of a company to using LLMs and getting wrong info etc. but they would tend to be quite suited to mitigate that risk. The only major risks to them are reputational and we are seeing generally that those risks are not affecting major AI developers. They are highly important in the information ecosystem but I do not think they are vulnerable to disinformation in the way other institutions are. AI deployers are more vulnerable because they may take on more liability but again they are dictating use and are more likely to perpetuate disinformation that to be harmed by it. If thinking about OpenAI and then a new company deploying specialized tools from them, Open AI maybe has a reputational risk from disinformation but it is actually likely to just drive more engagement with their models if untruths etc. are being spread through their model, the new deployer of their specialized tools might take on more harm and liability from disinformation but again it is likely to help serve specific interests and the reputational harm is likely not going to disincentivize their use of the tool and spread of disinformation over the added control over the information environment that such a tool might give them."</li>
</ul>
</details>
</div>
</div>
</div>
</div>
<div class="entity-section" id="AffectedStakeholder">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Affected stakeholders including the general public are directly impacted by AI-driven disinformation and surveillance, often lacking awareness, protection, or recourse, making them highly vulnerable to psychological, social, and political harm. They bear the impact at population scale and certain stakeholders could be targets of the worst forms of disinformation leading to political violence. They also play a role in amplifying disinformation—messages are quickly picked up and amplified by independent communities, boosting impact exponentially and organically.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (4)</summary>
<ul class="quote-list">
<li>"This risk can manifest from foreign state actors, organised crime and competing corporations. So its not just about whether an AI company can manipulate users. Its about whether the humans inside any organisation, and hence the organisation, are vulnerable to disinformation, surveillance and influence at scale. They absolutely are.
This can range from passive surveilence, like when samsung workers accidentally leaked chip design information to openAI through chatgpt which has a competitor chip design arm. Through to active surveillance like when deployed agents are "living off the land" to conduct cyber attacks. Through to algorithmic influence campaigns like tiktok is accused of using on US teenagers to sway their views. A particularly intense form comes from the parasocial relationships people are forming with the sexbot avatars of corporations and foreign state actors, which can seduce people, make them fall in love, and then subtley extract information or sway their views.
There is no fundamental difference in the level of succeptability of humans to this based on where in the AI development chain they sit. Programmers, CEOs and ministers are just as vulnerable as end users, since its effectively AI spycraft preying on human biases and fears.
At a mass scale this technique could be used to puppeteer the emotions and thoughts of vast swathes of a country, giving the controlling entity proxy lobbying power within organisations and voting power within democracies."</li> <li>"AI Users and Affected Stakeholders are most vulnerable since they're often targets of disinformation campaigns and lack the technical expertise to identify manipulated content. Developers and deployers face reputational risks but have more resources to mitigate harm."</li> <li>"I am persuaded that certain affected stakeholders could be targets of some of the worst forms of disinformation, e.g., leading to political violence, which makes them extremely vulnerable"</li> <li>"Vulnerability maps to control over generation, distribution, and targeting plus exposure to manipulation. Affected stakeholders (Extreme) bear the impact at population scale."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> [NO EXPERT COMMENTS PROVIDED]</p>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="entity-section" id="AIDeployer">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Deployers operate without transparency, legal oversight, or rigorous update protocols, amplifying their vulnerability through dependence on upstream models and downstream consequences they cannot fully control. They own critical control points including ranking algorithms, recommender systems, APIs, and agent/plugin infrastructure—the key levers for achieving scale in disinformation campaigns. All humans within deploying organizations are equally susceptible to manipulation regardless of role, from passive surveillance to active influence operations.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (3)</summary>
<ul class="quote-list">
<li>"These responses reflect my view that vulnerability must be measured not only by technical exposure but by indirect causal chains and institutional fragility. Some actors remain underclassified if only direct harm or system access is considered.
AI Deployer: I selected "Extremely vulnerable" rather than "Highly" because deployers often operate without transparency, legal oversight, or rigorous update protocols. Their vulnerability is amplified by their dependence on upstream models and downstream consequences they cannot fully control."</li> <li>"This risk can manifest from foreign state actors, organised crime and competing corporations. So its not just about whether an AI company can manipulate users. Its about whether the humans inside any organisation, and hence the organisation, are vulnerable to disinformation, surveillance and influence at scale. They absolutely are.
This can range from passive surveilence, like when samsung workers accidentally leaked chip design information to openAI through chatgpt which has a competitor chip design arm. Through to active surveillance like when deployed agents are "living off the land" to conduct cyber attacks. Through to algorithmic influence campaigns like tiktok is accused of using on US teenagers to sway their views. A particularly intense form comes from the parasocial relationships people are forming with the sexbot avatars of corporations and foreign state actors, which can seduce people, make them fall in love, and then subtley extract information or sway their views.
There is no fundamental difference in the level of succeptability of humans to this based on where in the AI development chain they sit. Programmers, CEOs and ministers are just as vulnerable as end users, since its effectively AI spycraft preying on human biases and fears.
At a mass scale this technique could be used to puppeteer the emotions and thoughts of vast swathes of a country, giving the controlling entity proxy lobbying power within organisations and voting power within democracies."</li> <li>"Vulnerability maps to control over generation, distribution, and targeting plus exposure to manipulation. Deployers (Extreme) own ranking, recommender knobs, APIs, and agent/plugin rails the lever arm for scale."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Deployers have less vulnerability than average experts suggest because they face mainly reputational risks and actually may benefit from disinformation (e.g., political polarization increasing engagement for Meta products). They're more likely to perpetuate disinformation than be harmed by it, and any reputational harm likely won't disincentivize their use given the control over information environments that tools provide. They have too much leverage over dissemination processes to be vulnerable themselves.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (4)</summary>
<ul class="quote-list">
<li>"I've placed AI deployers and AI Users as minimally vulnerable, in contrast to the average expert. While these actors have high exposure, it's not clear to me that they have high sensitivity relative to other actors. It's not clear to me why AI users (software engineers using copilot) or deployers (JP morgan using fraud detection) will be harmed if the threats manifest (other than in so far as they are run by individuals who may be harmed in general, which is true for all actors). Large-scale disinformation campaigns targeting political processes and public opinion may even benefit some private AI users and deployers. E.g political polarisation may be good for engagement for Meta products."</li> <li>"I didn't change my ratings. I still believe that AI developers and infrastructure providers etc. tend to have less exposure and vulnerability to disinformation risks. They have the normal exposure of a company to using LLMs and getting wrong info etc. but they would tend to be quite suited to mitigate that risk. The only major risks to them are reputational and we are seeing generally that those risks are not affecting major AI developers. They are highly important in the information ecosystem but I do not think they are vulnerable to disinformation in the way other institutions are. AI deployers are more vulnerable because they may take on more liability but again they are dictating use and are more likely to perpetuate disinformation that to be harmed by it. If thinking about OpenAI and then a new company deploying specialized tools from them, Open AI maybe has a reputational risk from disinformation but it is actually likely to just drive more engagement with their models if untruths etc. 
are being spread through their model, the new deployer of their specialized tools might take on more harm and liability from disinformation but again it is likely to help serve specific interests and the reputational harm is likely not going to disincentivize their use of the tool and spread of disinformation over the added control over the information environment that such a tool might give them."</li> <li>"AI Users and Affected Stakeholders are most vulnerable since they're often targets of disinformation campaigns and lack the technical expertise to identify manipulated content. Developers and deployers face reputational risks but have more resources to mitigate harm."</li> <li>"My vulnerability ratings for (GP)AI developers and deployers were lowered. They have too much leverage over the process of this dissemination/surveillance to, in most plausible instantiations of it, be vulnerable to it themselves. I also do not believe the indirect risk of reputational damage is significant at all: they are in an accountability "sweet spot" of likely not being attributed much blame even in the case of some hypothetical unmasking of a disinformation campaign, so long as they are not its initiator."</li>
</ul>
</details>
</div>
</div>
</div>
</div>
<div class="entity-section" id="AIUser">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Experts note users are vulnerable as they're unaware how AI shapes their information environment and lack control over algorithmic curation. They're susceptible to manipulation, privacy violations, and can be co-opted as disinformation amplifiers. Some jailbreak models to run disinformation campaigns as AI lowers entry barriers. At scale, these techniques could manipulate emotions across entire populations.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (3)</summary>
<ul class="quote-list">
<li>"This seems to me a major risk for end-users. However, saying they are extremely vulnerable suggests to me that there is nothing that can be done about it. I am somewhat optimistic that an infrastructure can be set up to mitigate the effects of misinformation on their epistemics and values, which is why I say highly vulnerable instead."</li> <li>"AI Users and Affected Stakeholders are most vulnerable since they're often targets of disinformation campaigns and lack the technical expertise to identify manipulated content. Developers and deployers face reputational risks but have more resources to mitigate harm."</li> <li>"Vulnerability maps to control over generation, distribution, and targeting plus exposure to manipulation. Users (High) are co-optable and phishable as amplifiers."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> One expert commented: "I've placed AI deployers and AI Users as minimally vulnerable, in contrast to the average expert. While these actors have high exposure, it's not clear to me that they have high sensitivity relative to other actors. It's not clear to me why AI users (software engineers using copilot) or deployers (JP morgan using fraud detection) will be harmed if the threats manifest (other than in so far as they are run by individuals who may be harmed in general, which is true for all actors). Large-scale disinformation campaigns targeting political processes and public opinion may even benefit some private AI users and deployers. E.g political polarisation may be good for engagement for Meta products. "</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (1)</summary>
<ul class="quote-list">
<li>"I've placed AI deployers and AI Users as minimally vulnerable, in contrast to the average expert. While these actors have high exposure, it's not clear to me that they have high sensitivity relative to other actors. It's not clear to me why AI users (software engineers using copilot) or deployers (JP morgan using fraud detection) will be harmed if the threats manifest (other than in so far as they are run by individuals who may be harmed in general, which is true for all actors). Large-scale disinformation campaigns targeting political processes and public opinion may even benefit some private AI users and deployers. E.g political polarisation may be good for engagement for Meta products."</li>
</ul>
</details>
</div>
</div>
</div>
</div>
<div class="entity-section" id="AffectedStakeholder">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Affected stakeholders including the general public are directly impacted by AI-driven disinformation and surveillance, often lacking awareness, protection, or recourse, making them highly vulnerable to psychological, social, and political harm. They bear the impact at population scale and certain stakeholders could be targets of the worst forms of disinformation leading to political violence. They also play a role in amplifying disinformation—messages are quickly picked up and amplified by independent communities, boosting impact exponentially and organically.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (4)</summary>
<ul class="quote-list">
<li>"This risk can manifest from foreign state actors, organised crime and competing corporations. So its not just about whether an AI company can manipulate users. Its about whether the humans inside any organisation, and hence the organisation, are vulnerable to disinformation, surveillance and influence at scale. They absolutely are.
This can range from passive surveilence, like when samsung workers accidentally leaked chip design information to openAI through chatgpt which has a competitor chip design arm. Through to active surveillance like when deployed agents are "living off the land" to conduct cyber attacks. Through to algorithmic influence campaigns like tiktok is accused of using on US teenagers to sway their views. A particularly intense form comes from the parasocial relationships people are forming with the sexbot avatars of corporations and foreign state actors, which can seduce people, make them fall in love, and then subtley extract information or sway their views.
There is no fundamental difference in the level of succeptability of humans to this based on where in the AI development chain they sit. Programmers, CEOs and ministers are just as vulnerable as end users, since its effectively AI spycraft preying on human biases and fears.
At a mass scale this technique could be used to puppeteer the emotions and thoughts of vast swathes of a country, giving the controlling entity proxy lobbying power within organisations and voting power within democracies."</li> <li>"AI Users and Affected Stakeholders are most vulnerable since they're often targets of disinformation campaigns and lack the technical expertise to identify manipulated content. Developers and deployers face reputational risks but have more resources to mitigate harm."</li> <li>"I am persuaded that certain affected stakeholders could be targets of some of the worst forms of disinformation, e.g., leading to political violence, which makes them extremely vulnerable"</li> <li>"Vulnerability maps to control over generation, distribution, and targeting plus exposure to manipulation. Affected stakeholders (Extreme) bear the impact at population scale."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> [NO EXPERT COMMENTS PROVIDED]</p>
</div>
</div>
</div>
</div>
<div class="entity-section" id="AIGovernanceActor">
<div class="content-grid">
<div class="content-column">
<h3 class="criteria-header higher">Reasons for Higher Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> Governance actors are increasingly targeted by tech lobbying campaigns and many key decision-makers lack AI expertise to parse high-quality versus biased information. They face exposure to political and institutional pressure, though their practical operations are often buffered by bureaucratic inertia. Their vulnerability also stems from institutional limitations and the fast pace of technological change. They are targets of astroturfing but don't directly run the content distribution infrastructure.</p>
<details class="quote-details">
<summary class="quote-toggle">See all expert comments (4)</summary>
<ul class="quote-list">
<li>"These responses reflect my view that vulnerability must be measured not only by technical exposure but by indirect causal chains and institutional fragility. Some actors remain underclassified if only direct harm or system access is considered.
AI Governance Actor: I selected "Moderately vulnerable" because while these actors are exposed to political and institutional pressure, their practical operations are often buffered by bureaucratic inertia, making their sensitivity to disinformation more contained."</li> <li>"I increased my vulnerability rating of AI governance actors after considering how many are (and will increasingly be) targeted by tech lobbying campaigns. Further, many key decisionmakers are not AI experts themselves and therefore may not have the skills to parse what AI-specific information is high quality or biased."</li> <li>"I might have slightly under-estimated the exposure of AI governance actor to Disinformation, surveillance, and influence at scale. This also depends on what we mean by this category of actor, as the political level is necessarily more exposed than the working/technical level."</li> <li>"Vulnerability maps to control over generation, distribution, and targeting plus exposure to manipulation. Governance actors (Moderate) are targets of astroturfing but don't run the rails."</li>
</ul>
</details>
</div>
</div>
<div class="content-column">
<h3 class="criteria-header lower">Reasons for Lower Vulnerability</h3>
<div class="summary-section">
<p class="summary-text"><strong>AI-generated summary:</strong> [NO EXPERT COMMENTS PROVIDED]</p>
</div>
</div>
</div>
</div>
</div>
</div>
<script>
document.addEventListener('DOMContentLoaded', function() {
const pills = document.querySelectorAll('.nav-pill');
const sections = document.querySelectorAll('.entity-section');
pills.forEach(pill => {
pill.addEventListener('click', function() {
pills.forEach(p => p.classList.remove('active'));
sections.forEach(s => s.classList.remove('active'));
this.classList.add('active');
const targetId = this.getAttribute('data-target');
const targetSection = document.getElementById(targetId);
if (targetSection) {
targetSection.classList.add('active');
}
});
});
});
</script>
</body>
</html>