allenhzy committed
Commit 5366905
1 Parent(s): 0b05e55

title abstraction

Files changed (1):
  1. index.html +45 -180
index.html CHANGED
@@ -3,10 +3,10 @@
  <head>
  <meta charset="utf-8">
  <meta name="description"
- content="Deformable Neural Radiance Fields creates free-viewpoint portraits (nerfies) from casually captured videos.">
- <meta name="keywords" content="Nerfies, D-NeRF, NeRF">
+ content="Demo Page of BEYOND ICML 2024.">
+ <meta name="keywords" content="BEYOND, Adversarial Examples, Adversarial Detection">
  <meta name="viewport" content="width=device-width, initial-scale=1">
- <title>Nerfies: Deformable Neural Radiance Fields</title>
+ <title>Be Your Own Neighborhood: Detecting Adversarial Examples by the Neighborhood Relations Built on Self-Supervised Learning</title>

  <link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro"
  rel="stylesheet">
@@ -33,39 +33,34 @@
  <div class="container is-max-desktop">
  <div class="columns is-centered">
  <div class="column has-text-centered">
- <h1 class="title is-1 publication-title">Nerfies: Deformable Neural Radiance Fields</h1>
+ <h1 class="title is-1 publication-title">Be Your Own Neighborhood: Detecting Adversarial Examples by the Neighborhood Relations Built on Self-Supervised Learning</h1>
  <div class="is-size-5 publication-authors">
  <span class="author-block">
- <a href="https://keunhong.com" target="_blank">Keunhong Park</a><sup>1</sup>,</span>
+ <a href="#" target="_blank">Zhiyuan He</a><sup>1*</sup>,</span>
  <span class="author-block">
- <a href="https://utkarshsinha.com" target="_blank">Utkarsh Sinha</a><sup>2</sup>,</span>
+ <a href="https://yangyijune.github.io/" target="_blank">Yijun Yang</a><sup>1*</sup>,</span>
  <span class="author-block">
- <a href="https://jonbarron.info" target="_blank">Jonathan T. Barron</a><sup>2</sup>,
+ <a href="https://sites.google.com/site/pinyuchenpage/home" target="_blank">Pin-Yu Chen</a><sup>2</sup>,
  </span>
  <span class="author-block">
- <a href="http://sofienbouaziz.com" target="_blank">Sofien Bouaziz</a><sup>2</sup>,
+ <a href="https://cure-lab.github.io/" target="_blank">Qiang Xu</a><sup>1</sup>,
  </span>
  <span class="author-block">
- <a href="https://www.danbgoldman.com" target="_blank">Dan B Goldman</a><sup>2</sup>,
- </span>
- <span class="author-block">
- <a href="https://homes.cs.washington.edu/~seitz/" target="_blank">Steven M. Seitz</a><sup>1,2</sup>,
- </span>
- <span class="author-block">
- <a href="http://www.ricardomartinbrualla.com" target="_blank">Ricardo Martin-Brualla</a><sup>2</sup>
+ <a href="https://tsungyiho.github.io/" target="_blank">Tsung-Yi Ho</a><sup>1</sup>,
  </span>
  </div>

  <div class="is-size-5 publication-authors">
- <span class="author-block"><sup>1</sup>University of Washington,</span>
- <span class="author-block"><sup>2</sup>Google Research</span>
+ <span class="author-block"><sup>*</sup>Equal contribution,</span>
+ <span class="author-block"><sup>1</sup>The Chinese University of Hong Kong,</span>
+ <span class="author-block"><sup>2</sup>IBM Research</span>
  </div>

  <div class="column has-text-centered">
  <div class="publication-links">
  <!-- PDF Link. -->
  <span class="link-block">
- <a href="https://arxiv.org/pdf/2011.12948" target="_blank"
+ <a href="https://arxiv.org/abs/2209.00005" target="_blank"
  class="external-link button is-normal is-rounded is-dark">
  <span class="icon">
  <i class="fas fa-file-pdf"></i>
@@ -74,7 +69,7 @@
  </a>
  </span>
  <span class="link-block">
- <a href="https://arxiv.org/abs/2011.12948" target="_blank"
+ <a href="https://arxiv.org/abs/2209.00005" target="_blank"
  class="external-link button is-normal is-rounded is-dark">
  <span class="icon">
  <i class="ai ai-arxiv"></i>
@@ -83,7 +78,7 @@
  </a>
  </span>
  <!-- Video Link. -->
- <span class="link-block">
+ <!-- <span class="link-block">
  <a href="https://www.youtube.com/watch?v=MrKrnHhk8IA" target="_blank"
  class="external-link button is-normal is-rounded is-dark">
  <span class="icon">
@@ -91,9 +86,9 @@
  </span>
  <span>Video</span>
  </a>
- </span>
+ </span> -->
  <!-- Code Link. -->
- <span class="link-block">
+ <!-- <span class="link-block">
  <a href="https://github.com/google/nerfies" target="_blank"
  class="external-link button is-normal is-rounded is-dark">
  <span class="icon">
@@ -101,16 +96,7 @@
  </span>
  <span>Code</span>
  </a>
- </span>
- <!-- Dataset Link. -->
- <span class="link-block">
- <a href="https://github.com/google/nerfies/releases/tag/0.1" target="_blank"
- class="external-link button is-normal is-rounded is-dark">
- <span class="icon">
- <i class="far fa-images"></i>
- </span>
- <span>Data</span>
- </a>
+ </span> -->
  </div>

  </div>
@@ -120,7 +106,7 @@
  </div>
  </section>

- <section class="hero teaser">
+ <!-- <section class="hero teaser">
  <div class="container is-max-desktop">
  <div class="hero-body">
  <video id="teaser" autoplay muted loop playsinline height="100%">
@@ -134,10 +120,10 @@
  </h2>
  </div>
  </div>
- </section>
+ </section> -->


- <section class="hero is-light is-small">
+ <!-- <section class="hero is-light is-small">
  <div class="hero-body">
  <div class="container">
  <div id="results-carousel" class="carousel results-carousel">
@@ -192,7 +178,7 @@
  </div>
  </div>
  </div>
- </section>
+ </section> -->


  <section class="section">
@@ -203,6 +189,21 @@
  <h2 class="title is-3">Abstract</h2>
  <div class="content has-text-justified">
  <p>
+ Deep Neural Networks (DNNs) have achieved excellent performance in various fields. However, DNNs’ vulnerability to
+ Adversarial Examples (AEs) hinders their deployment in safety-critical applications. In this paper, we present <strong>BEYOND</strong>,
+ an innovative AE detection framework designed for reliable predictions. BEYOND identifies AEs by distinguishing an AE’s
+ abnormal relations with its augmented versions, i.e., neighbors, from two perspectives: representation similarity and label
+ consistency. An off-the-shelf Self-Supervised Learning (SSL) model is used to extract the representations and predict the
+ labels, owing to its highly informative representation capacity compared to supervised learning models. We find that clean
+ samples maintain a high degree of representation similarity and label consistency relative to their neighbors, in contrast
+ to AEs, which exhibit significant discrepancies. We explain this observation and show that, by leveraging this discrepancy,
+ BEYOND can accurately detect AEs. Additionally, we develop a rigorous justification for the effectiveness of BEYOND.
+ Furthermore, as a plug-and-play model, BEYOND can easily cooperate with an Adversarially Trained Classifier (ATC),
+ achieving state-of-the-art (SOTA) robust accuracy. Experimental results show that BEYOND outperforms baselines by a large
+ margin, especially under adaptive attacks. Empowered by the robust relations built on SSL, we find that BEYOND outperforms
+ baselines in terms of both detection ability and speed.
+ </p>
+ <!-- <p>
  We present the first method capable of photorealistically reconstructing a non-rigidly
  deforming scene using photos/videos captured casually from mobile phones.
  </p>
@@ -228,14 +229,14 @@
  images of the same pose at different viewpoints. We show that our method faithfully
  reconstructs non-rigidly deforming scenes and reproduces unseen views with high
  fidelity.
- </p>
+ </p> -->
  </div>
  </div>
  </div>
  <!--/ Abstract. -->

  <!-- Paper video. -->
- <div class="columns is-centered has-text-centered">
+ <!-- <div class="columns is-centered has-text-centered">
  <div class="column is-four-fifths">
  <h2 class="title is-3">Video</h2>
  <div class="publication-video">
@@ -243,157 +244,21 @@
  frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
  </div>
  </div>
- </div>
+ </div> -->
  <!--/ Paper video. -->
  </div>
  </section>


- <section class="section">
- <div class="container is-max-desktop">
-
- <div class="columns is-centered">
-
- <!-- Visual Effects. -->
- <div class="column">
- <div class="content">
- <h2 class="title is-3">Visual Effects</h2>
- <p>
- Using <i>nerfies</i> you can create fun visual effects. This Dolly zoom effect
- would be impossible without nerfies since it would require going through a wall.
- </p>
- <video id="dollyzoom" autoplay controls muted loop playsinline height="100%">
- <source src="./static/videos/dollyzoom-stacked.mp4"
- type="video/mp4">
- </video>
- </div>
- </div>
- <!--/ Visual Effects. -->
-
- <!-- Matting. -->
- <div class="column">
- <h2 class="title is-3">Matting</h2>
- <div class="columns is-centered">
- <div class="column content">
- <p>
- As a byproduct of our method, we can also solve the matting problem by ignoring
- samples that fall outside of a bounding box during rendering.
- </p>
- <video id="matting-video" controls playsinline height="100%">
- <source src="./static/videos/matting.mp4"
- type="video/mp4">
- </video>
- </div>
-
- </div>
- </div>
- </div>
- <!--/ Matting. -->
-
- <!-- Animation. -->
- <div class="columns is-centered">
- <div class="column is-full-width">
- <h2 class="title is-3">Animation</h2>
-
- <!-- Interpolating. -->
- <h3 class="title is-4">Interpolating states</h3>
- <div class="content has-text-justified">
- <p>
- We can also animate the scene by interpolating the deformation latent codes of two input
- frames. Use the slider here to linearly interpolate between the left frame and the right
- frame.
- </p>
- </div>
- <div class="columns is-vcentered interpolation-panel">
- <div class="column is-3 has-text-centered">
- <img src="./static/images/interpolate_start.jpg"
- class="interpolation-image"
- alt="Interpolate start reference image."/>
- <p>Start Frame</p>
- </div>
- <div class="column interpolation-video-column">
- <div id="interpolation-image-wrapper">
- Loading...
- </div>
- <input class="slider is-fullwidth is-large is-info"
- id="interpolation-slider"
- step="1" min="0" max="100" value="0" type="range">
- </div>
- <div class="column is-3 has-text-centered">
- <img src="./static/images/interpolate_end.jpg"
- class="interpolation-image"
- alt="Interpolation end reference image."/>
- <p class="is-bold">End Frame</p>
- </div>
- </div>
- <br/>
- <!--/ Interpolating. -->
-
- <!-- Re-rendering. -->
- <h3 class="title is-4">Re-rendering the input video</h3>
- <div class="content has-text-justified">
- <p>
- Using <span class="dnerf">Nerfies</span>, you can re-render a video from a novel
- viewpoint such as a stabilized camera by playing back the training deformations.
- </p>
- </div>
- <div class="content has-text-centered">
- <video id="replay-video"
- controls
- muted
- preload
- playsinline
- width="75%">
- <source src="./static/videos/replay.mp4"
- type="video/mp4">
- </video>
- </div>
- <!--/ Re-rendering. -->
-
- </div>
- </div>
- <!--/ Animation. -->
-
-
- <!-- Concurrent Work. -->
- <div class="columns is-centered">
- <div class="column is-full-width">
- <h2 class="title is-3">Related Links</h2>
-
- <div class="content has-text-justified">
- <p>
- There's a lot of excellent work that was introduced around the same time as ours.
- </p>
- <p>
- <a href="https://arxiv.org/abs/2104.09125" target="_blank">Progressive Encoding for Neural Optimization</a> introduces an idea similar to our windowed position encoding for coarse-to-fine optimization.
- </p>
- <p>
- <a href="https://www.albertpumarola.com/research/D-NeRF/index.html" target="_blank">D-NeRF</a> and <a href="https://gvv.mpi-inf.mpg.de/projects/nonrigid_nerf/" target="_blank">NR-NeRF</a>
- both use deformation fields to model non-rigid scenes.
- </p>
- <p>
- Some works model videos with a NeRF by directly modulating the density, such as <a href="https://video-nerf.github.io/" target="_blank">Video-NeRF</a>, <a href="https://www.cs.cornell.edu/~zl548/NSFF/" target="_blank">NSFF</a>, and <a href="https://neural-3d-video.github.io/" target="_blank">DyNeRF</a>
- </p>
- <p>
- There are probably many more by the time you are reading this. Check out <a href="https://dellaert.github.io/NeRF/" target="_blank">Frank Dellaert's survey on recent NeRF papers</a>, and <a href="https://github.com/yenchenlin/awesome-NeRF" target="_blank">Yen-Chen Lin's curated list of NeRF papers</a>.
- </p>
- </div>
- </div>
- </div>
- <!--/ Concurrent Work. -->
-
- </div>
- </section>
-

  <section class="section" id="BibTeX">
  <div class="container is-max-desktop content">
  <h2 class="title">BibTeX</h2>
- <pre><code>@article{park2021nerfies,
- author = {Park, Keunhong and Sinha, Utkarsh and Barron, Jonathan T. and Bouaziz, Sofien and Goldman, Dan B and Seitz, Steven M. and Martin-Brualla, Ricardo},
- title = {Nerfies: Deformable Neural Radiance Fields},
- journal = {ICCV},
- year = {2021},
+ <pre><code>@article{he2024beyond,
+ author = {He, Zhiyuan and Yang, Yijun and Chen, Pin-Yu and Xu, Qiang and Ho, Tsung-Yi},
+ title = {Be your own neighborhood: Detecting adversarial examples by the neighborhood relations built on self-supervised learning},
+ journal = {ICML},
+ year = {2024},
  }</code></pre>
  </div>
  </section>
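
The abstract added by this commit describes a concrete detection recipe: embed an input and a set of its augmented views with an off-the-shelf SSL model, then flag the input as adversarial when either its representation similarity or its label consistency with those neighbors drops. As a reading aid, here is a minimal Python sketch of that neighborhood test. It assumes generic ssl_encoder, cls_head, and augment callables and illustrative thresholds; none of these names come from the authors' released code.

# Hedged sketch of the BEYOND-style neighborhood test, not the authors' code.
# `ssl_encoder`, `cls_head`, and `augment` are assumed components: an
# off-the-shelf SSL feature extractor, a classification head on top of it,
# and a stochastic augmentation (e.g. random crop plus color jitter).
import torch
import torch.nn.functional as F

def beyond_detect(x, ssl_encoder, cls_head, augment, k=50,
                  sim_thresh=0.8, cons_thresh=0.7):
    """Return True if the single image x (shape 1xCxHxW) looks adversarial."""
    with torch.no_grad():
        z = ssl_encoder(x)                        # representation of the input
        y = cls_head(z).argmax(dim=1)             # label predicted for the input
        nbrs = torch.cat([augment(x) for _ in range(k)])  # k augmented neighbors
        zn = ssl_encoder(nbrs)
        yn = cls_head(zn).argmax(dim=1)
        # Perspective 1: mean cosine similarity between input and neighbors.
        rep_sim = F.cosine_similarity(z.expand_as(zn), zn, dim=1).mean()
        # Perspective 2: fraction of neighbors that keep the input's label.
        label_cons = (yn == y).float().mean()
    # Clean inputs score high on both; an AE drops on at least one.
    return rep_sim.item() < sim_thresh or label_cons.item() < cons_thresh

In practice the two thresholds would be calibrated on clean held-out data, for example to a fixed false-positive rate, which is the usual setup for detection baselines.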