<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="description"
content="Prompt-Driven Contrastive Learning for Transferable Adversarial Attacks">
<meta name="keywords" content="PDCL-Attack, Transferable Adversarial Attack, Transferable Attack, Transfer Attack, Adversarial Attack">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>PDCL-Attack</title>
<link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro"
rel="stylesheet">
<link rel="stylesheet" href="./static/css/bulma.min.css">
<link rel="stylesheet" href="./static/css/bulma-carousel.min.css">
<link rel="stylesheet" href="./static/css/bulma-slider.min.css">
<link rel="stylesheet" href="./static/css/fontawesome.all.min.css">
<link rel="stylesheet"
href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="stylesheet" href="./static/css/index.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script defer src="./static/js/fontawesome.all.min.js"></script>
<script src="./static/js/bulma-carousel.min.js"></script>
<script src="./static/js/bulma-slider.min.js"></script>
<script src="./static/js/index.js"></script>
</head>
<body>
<section class="hero">
<div class="hero-body">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column has-text-centered">
<h1 class="title is-2 publication-title">Prompt-Driven Contrastive Learning for <br> Transferable Adversarial Attacks</h1>
<p class="is-size-3" style="color:#808080; margin-top:-25px"> ECCV 2024 (Oral) </p>
<div class="is-size-5 publication-authors">
<span class="author-block">
<a href="https://hmyang1.github.io">Hunmin Yang</a><sup>1,2</sup>,
</span>
<span class="author-block">
<a href="https://sites.google.com/view/jongohjeong?pli=1">Jongoh Jeong</a><sup>1</sup>,
</span>
<span class="author-block">
<a href="https://sites.google.com/site/kjyoon">Kuk-Jin Yoon</a><sup>1</sup>
</span>
</div>
<div class="is-size-5 publication-authors">
<span class="author-block">
<sup>1</sup>KAIST,
</span>
<span class="author-block">
<sup>2</sup>ADD
</span>
</div>
<div class="column has-text-centered">
<div class="publication-links">
<!-- PDF Link. -->
<span class="link-block">
<a href="https://arxiv.org/pdf/2407.20657"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fas fa-file-pdf"></i>
</span>
<span>Paper</span>
</a>
</span>
<span class="link-block">
<a href="https://arxiv.org/abs/2407.20657"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="ai ai-arxiv"></i>
</span>
<span>arXiv</span>
</a>
</span>
<span class="link-block">
<a href="#BibTeX"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="ai ai-obp"></i>
</span>
<span>BibTeX</span>
</a>
</span>
<span class="link-block">
<a href="mailto:pdcl.attack@gmail.com"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fas fa-envelope"></i>
</span>
<span>Contact</span>
</a>
</span>
</div>
</div>
</div>
</div>
</div>
</div>
</section>
<!-- Teaser -->
<section class="hero teaser" style="margin-top:-40px">
<div class="container is-max-desktop">
<div class="hero-body">
<img src="static/images/Motivation.PNG" class="center"/>
</div>
<h2 class="subtitle has-text-centered" style="margin-top:-12px">
In a joint vision-language space, a single text can encapsulate core semantics that align with numerous images from diverse domains.
On the adversary's side, two clear challenges arise: (a) generating effective prompt-driven feature guidance, and (b) identifying robust prompts that maximize its effectiveness.
<br><br>
<div class="gray-box-custom" style="margin-top:-12px">
<b>PDCL-Attack</b> enhances transferable adversarial attacks via CLIP guidance and prompt learning.
</div>
</h2>
</div>
</section>
<!-- Abstract -->
<br>
<hr>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3" style="margin-top:-5px">Abstract</h2>
<div class="content has-text-justified">
<p>
Recent vision-language foundation models, such as CLIP, have demonstrated superior capabilities in learning representations that transfer across a diverse range of downstream tasks and domains. With the emergence of such powerful models, it has become crucial to leverage their capabilities effectively in tackling challenging vision tasks. On the other hand, only a few works have focused on devising adversarial examples that transfer well to both unknown domains and model architectures. In this paper, we propose a novel transfer attack method called <b>PDCL-Attack</b>, which leverages the CLIP model to enhance the transferability of adversarial perturbations generated by a generative model-based attack framework. Specifically, we formulate an effective prompt-driven feature guidance by harnessing the semantic representation power of text, particularly from the ground-truth class labels of input images. To the best of our knowledge, we are the first to introduce prompt learning to enhance transferable generative attacks. Extensive experiments conducted across various cross-domain and cross-model settings empirically validate our approach, demonstrating its superiority over state-of-the-art methods.
</p>
</div>
</div>
</div>
</div>
</section>
<!-- Method -->
<br>
<hr>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3" style="margin-top:-5px">Method</h2>
</div>
</div>
</div>
</section>
<br>
<section class="hero teaser" style="margin-top:-5px">
<div class="container is-max-desktop">
<div class="hero-body">
<img src="static/images/Method.PNG" class="center-img"/>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<div class="content has-text-justified" style="margin-top:-22px">
<p>
<b>Overview of PDCL-Attack.</b>
For effective transfer attacks leveraging CLIP, our pipeline consists of three sequential phases: Phases 1 and 2 form the training stage, and Phase 3 is the inference stage.
The goal of Phase 1 is to pre-train the Prompter, optimizing the context words to yield generalizable text features for Phase 2.
In Phase 1, only the learnable context word vectors are updated, while the weights of the CLIP image and text encoders remain fixed.
In Phase 2, we train a generator that crafts adversarial perturbations which drive the surrogate model to mispredict the input images.
In Phase 3, we employ the trained generator to produce transferable adversarial examples against unknown domains and victim models.
</p>
</div>
</div>
</div>
</div>
</section>
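In Phase 2, the generator's raw output is typically projected onto a small L∞ ball around the clean image before any classifier sees it, which is what distinguishes the bounded from the unbounded adversarial images shown later on this page. A minimal numpy sketch of that projection step, assuming an L∞ budget of 16/255; the function name and epsilon value are illustrative, not taken from the paper:

```python
import numpy as np

def project_perturbation(x_clean, x_adv, eps=16 / 255):
    """Project an unbounded adversarial image onto the L-infinity
    eps-ball around the clean image, then clamp to valid pixels."""
    delta = np.clip(x_adv - x_clean, -eps, eps)  # bound the perturbation
    return np.clip(x_clean + delta, 0.0, 1.0)    # keep pixel values in [0, 1]

# Toy example: a heavily perturbed image is pulled back into the budget.
rng = np.random.default_rng(0)
x = rng.random((3, 32, 32)).astype(np.float32)
x_unbounded = x + rng.normal(scale=0.5, size=x.shape).astype(np.float32)
x_bounded = project_perturbation(x, x_unbounded)
assert abs(x_bounded - x).max() <= 16 / 255 + 1e-6
```

The bounded image, not the generator's raw output, is what the surrogate and victim classifiers actually receive.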
<!-- Experimental Results -->
<br>
<hr>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3" style="margin-top:-5px">Experimental Results</h2>
</div>
</div>
</div>
</section>
<br>
<section class="hero teaser" style="margin-top:-5px">
<div class="container is-max-desktop">
<div class="hero-body">
<img src="static/images/Experimental_Results_Cross_Domain.PNG" class="center-img"/>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<div class="content has-text-justified" style="margin-top:-22px">
<p>
<b>Cross-domain evaluation results.</b>
The perturbation generator is trained on ImageNet-1K with VGG-16 as the surrogate model and evaluated on black-box domains and models.
We compare the top-1 classification accuracy after attacks.
</p>
</div>
</div>
</div>
</div>
</section>
<br><br>
<section class="hero teaser" style="margin-top:-5px">
<div class="container is-max-desktop">
<div class="hero-body">
<img src="static/images/Experimental_Results_Cross_Model.PNG" class="center-img"/>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<div class="content has-text-justified" style="margin-top:-22px">
<p>
<b>Cross-model evaluation results.</b>
The perturbation generator is trained on ImageNet-1K with VGG-16 as the surrogate model and evaluated on black-box models.
We compare the top-1 classification accuracy after attacks.
</p>
</div>
</div>
</div>
</div>
</section>
<br><br>
<section class="hero teaser" style="margin-top:-5px">
<div class="container is-max-desktop">
<div class="hero-body">
<img src="static/images/Experimental_Results_Prompt_Engineering.PNG" class="center-img"/>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<div class="content has-text-justified" style="margin-top:-22px">
<p>
<b>Effect of learnable context words.</b>
Learnable context words outperform hand-crafted heuristic ones, and increasing their capacity further improves the attack effectiveness.
</p>
</div>
</div>
</div>
</div>
</section>
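The distinction between hand-crafted and learnable context words can be sketched as follows: instead of fixed embeddings of words such as "a photo of a", a CoOp-style prompter optimizes free context vectors that are prepended to the class-name embedding. A minimal numpy sketch; the dimensions and names are illustrative, not the paper's implementation:

```python
import numpy as np

dim, n_ctx = 512, 4  # embedding width; number of context tokens
rng = np.random.default_rng(0)

# Hand-crafted prompt: fixed word embeddings, e.g. for "a photo of a".
handcrafted_ctx = rng.normal(size=(n_ctx, dim))

# Learnable prompt: free context vectors, updated by gradient descent
# while both CLIP encoders stay frozen (only these vectors train).
learnable_ctx = 0.02 * rng.normal(size=(n_ctx, dim))

def build_prompt(ctx, class_embedding):
    """Prepend context token embeddings to the class-name embedding,
    forming the token sequence fed to the frozen text encoder."""
    return np.concatenate([ctx, class_embedding[None, :]], axis=0)

prompt = build_prompt(learnable_ctx, rng.normal(size=dim))
assert prompt.shape == (n_ctx + 1, dim)
```

Increasing `n_ctx` corresponds to the capacity increase the table above ablates.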
<br><br>
<section class="hero teaser" style="margin-top:20px">
<div class="container is-max-desktop">
<div class="hero-body">
<img src="static/images/Qualtitative_Results_Surrogate.PNG" class="center-img"/>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<div class="content has-text-justified" style="margin-top:-22px">
<p>
<b>Qualitative results on ImageNet-1K.</b>
PDCL-Attack successfully fools the classifier, causing images with the clean labels (in black) to be predicted as the incorrect class labels shown at the bottom (in red).
From top to bottom: clean images, unbounded adversarial images, and bounded adversarial images, which are the <i>actual inputs</i> to the classifier.
</p>
</div>
</div>
</div>
</div>
</section>
<br>
<section class="hero teaser" style="margin-top:20px">
<div class="container is-max-desktop">
<div class="hero-body">
<img src="static/images/Qualtitative_Results_CLIP.PNG" class="center-img"/>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<div class="content has-text-justified" style="margin-top:-22px">
<p>
<b>Qualitative results on distribution-shifted variants of ImageNet-1K.</b>
From top to bottom: clean images, bounded adversarial images, and unbounded adversarial images.
In the middle, zero-shot CLIP-predicted class labels are displayed for both clean and adversarial inputs.
Our method effectively induces the zero-shot CLIP model to assign incorrect labels, even under various distribution shifts.
For inference, we employ the text prompt "a photo of a [class]", following common practice with CLIP.
</p>
</div>
</div>
</div>
</div>
</section>
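The zero-shot evaluation above reduces to a cosine-similarity argmax between the image embedding and the per-class prompt embeddings. A minimal numpy sketch with stand-in random features in place of CLIP's real encoders (all names hypothetical):

```python
import numpy as np

def zero_shot_predict(image_feat, text_feats, class_names):
    """Pick the class whose prompt embedding is most cosine-similar
    to the image embedding (zero-shot CLIP classification)."""
    img = image_feat / np.linalg.norm(image_feat)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    return class_names[int(np.argmax(txt @ img))]

classes = ["dog", "cat", "car"]
prompts = [f"a photo of a {c}" for c in classes]  # template used at inference

# Stand-in embeddings; a real pipeline would encode `prompts` and the
# image with CLIP's text and image encoders.
rng = np.random.default_rng(0)
text_feats = rng.normal(size=(len(classes), 512))
image_feat = text_feats[1] + 0.1 * rng.normal(size=512)  # close to "cat"
print(zero_shot_predict(image_feat, text_feats, classes))  # prints "cat"
```

An adversarial perturbation succeeds against zero-shot CLIP exactly when it moves the image embedding out of its own class's similarity region and into another's.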
<!-- Contact. -->
<br>
<hr>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3" style="margin-top:-5px">Contact</h2>
<div class="is-centered has-text-centered is-size-5">
<p>
PDCL-Attack (<a href="mailto:pdcl.attack@gmail.com">pdcl.attack@gmail.com</a>)
</p>
</div>
</div>
</div>
</div>
</section>
<!-- BibTex. -->
<hr>
<section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="title">BibTeX</h2>
<pre><code>@InProceedings{yang2024PDCLAttack,
title={Prompt-Driven Contrastive Learning for Transferable Adversarial Attacks},
author={Hunmin Yang and Jongoh Jeong and Kuk-Jin Yoon},
booktitle={European Conference on Computer Vision (ECCV)},
year={2024}
}</code></pre>
</div>
</section>
<br><br>
<footer class="footer">
<div class="container">
<div class="content has-text-centered">
<a class="icon-link"
href="https://arxiv.org/pdf/2407.20657">
<i class="fas fa-file-pdf"></i>
</a>
<a class="icon-link external-link" href="https://arxiv.org/abs/2407.20657">
<i class="fas fa-user"></i>
</a>
<a class="icon-link external-link" href="mailto:pdcl.attack@gmail.com">
<i class="fas fa-envelope"></i>
</a>
</div>
<div class="columns is-centered">
<div class="column is-8">
<div class="content">
<p>
This website is licensed under a <a rel="license"
href="http://creativecommons.org/licenses/by-sa/4.0/">Creative
Commons Attribution-ShareAlike 4.0 International License</a>.
</p>
<p>
This website template was adapted from <a href="https://nerfies.github.io/">Nerfies website</a>.
</p>
</div>
</div>
</div>
</div>
</footer>
</body>
</html>