equation
index.html  +5 -16  CHANGED
@@ -421,15 +421,9 @@
       <div class="column">
         <p>
           Attackers can design adaptive attacks to try to bypass BEYOND when the attacker knows all the parameters of the model
-          and the detection strategy. For an SSL model with a feature extractor
-          the classification branch can be formulated as $\mathbb{C} = f\circ g$ and the representation branch as $\mathbb{R} = f\circ h$.
+          and the detection strategy. For an SSL model with a feature extractor $\displaystyle f$, a projector $\displaystyle h$, and a classification head $\displaystyle g$,
+          the classification branch can be formulated as $\displaystyle \mathbb{C} = f\circ g$ and the representation branch as $\displaystyle \mathbb{R} = f\circ h$.
           To attack effectively, the adversary must deceive the target model while guaranteeing the label consistency and representation similarity of the SSL model.
-
-          <!-- where $\mathcal{S}$ represents cosine similarity, $k$ represents the number of generated neighbors,
-          and the linear augmentation function $W(x)=W(x,p);~p\sim P$ randomly samples $p$ from the parameter distribution $P$ to generate different neighbors.
-          Note that we guarantee the generated neighbors are fixed each time by fixing the random seed. The adaptive adversaries perform attacks on the following objective function:
-
-          where $\mathcal{L}_C$ indicates classifier's loss function, $y_t$ is the targeted class, and $\alpha$ refers to a hyperparameter. -->
       </div>
     </div>

@@ -465,19 +459,14 @@
     <div class="columns is-centered">
       <div class="column">
         <p class="eq-des label-loss">
-
-          and the detection strategy. For an SSL model with a feature extractor $f$, a projector $h$, and a classification head $g$,
-          the classification branch can be formulated as $\mathbb{C} = f\circ g$ and the representation branch as $\mathbb{R} = f\circ h$.
-          To attack effectively, the adversary must deceive the target model while guaranteeing the label consistency and representation similarity of the SSL model.
+          where $\displaystyle k$ represents the number of generated neighbors, $\displaystyle y_t$ is the target class, and $\displaystyle \mathcal{L}$ is the cross-entropy loss function.
         </p>
         <p class="eq-des representation-loss" style="display: none">
-          where $\mathcal{S}$
-          and the linear augmentation function $W(x)=W(x,p);~p\sim P$ randomly samples $p$ from the parameter distribution $P$ to generate different neighbors.
-          Note that we guarantee the generated neighbors are fixed each time by fixing the random seed. The adaptive adversaries perform attacks on the following objective function:
+          where $\displaystyle \mathcal{S}$ is the cosine similarity.
         </p>

         <p class="eq-des total-loss" style="display: none;">
-          where $\mathcal{L}_C$ indicates classifier's loss function,
+          where $\displaystyle \mathcal{L}_C$ indicates the classifier's loss function, $\displaystyle y_t$ is the targeted class, and $\displaystyle \alpha$ refers to a hyperparameter.
         </p>
       </div>
     </div>
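For context on the equations these `eq-des` captions annotate (label-loss, representation-loss, and total-loss are rendered elsewhere in index.html), here is a minimal LaTeX sketch of how the described pieces plausibly fit together, using only the symbols defined in the diff ($\mathbb{C}$, $\mathbb{R}$, $W(x,p)$, $k$, $\mathcal{S}$, $\mathcal{L}$, $\mathcal{L}_C$, $y_t$, $\alpha$); the adversarial input $x'$ is an assumed placeholder, and the exact formulation used on the page may differ.

```latex
% Sketch only: a plausible reading of the three equations the eq-des captions
% describe. x' denotes the candidate adversarial example (assumed notation),
% W_i(x') the i-th augmented neighbor; BEYOND's exact weighting may differ.
\begin{align*}
  \mathcal{L}_{\mathrm{label}}(x')
    &= \frac{1}{k} \sum_{i=1}^{k} \mathcal{L}\big(\mathbb{C}(W_i(x')),\, y_t\big)
    && \text{label consistency over the $k$ neighbors} \\
  \mathcal{L}_{\mathrm{rep}}(x')
    &= -\frac{1}{k} \sum_{i=1}^{k} \mathcal{S}\big(\mathbb{R}(W_i(x')),\, \mathbb{R}(x')\big)
    && \text{representation similarity ($\mathcal{S}$ = cosine similarity)} \\
  \mathcal{L}_{\mathrm{total}}(x')
    &= \mathcal{L}_C(x', y_t) + \alpha\,\big(\mathcal{L}_{\mathrm{label}}(x') + \mathcal{L}_{\mathrm{rep}}(x')\big)
    && \text{adaptive objective minimized by the attacker}
\end{align*}
```

Minimizing such a total loss would push the target classifier toward $y_t$ while keeping the SSL branch's labels and representations consistent across the generated neighbors, which is the combination of goals the paragraph above describes.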