equation
index.html CHANGED (+9 -9)
@@ -437,25 +437,25 @@
 <div class="column container-centered">
   <div id="adaptive-loss-formula" class="container">
     <div id="adaptive-loss-formula-list" class="row align-items-center formula-list">
-      <a href="
-      <a href="
-      <a href="
+      <a href=".label-loss" class="selected">Label Consistency Loss</a>
+      <a href=".representation-loss">Representation Similarity Loss</a>
+      <a href=".total-loss">Total Loss</a>
       <div style="clear: both"></div>
     </div>
     <div id="adaptive-loss-formula-content" class="row align-items-center">
-      <span
+      <span class="formula label-loss" style="">
        $$
        \displaystyle
        Loss_{label} = \frac{1}{k} \sum_{i=1}^{k} \mathcal{L}\left(\mathbb{C}\left(W^i(x+\delta) \right), y_t\right)
        $$
       </span>
-      <span
+      <span class="formula representation-loss" style="display: none;">
        $$
        \displaystyle
        Loss_{repre} = \frac{1}{k} \sum_{i=1}^{k}\mathcal{S}(\mathbb{R}(W^i(x+\delta)), \mathbb{R}(x+\delta))
        $$
       </span>
-      <span
+      <span class="formula total-loss" style="display: none;">
       $$\displaystyle \mathcal{L}_C(x+\delta, y_t) + Loss_{label} - \alpha \cdot Loss_{repre}$$
       </span>
     </div>
@@ -464,19 +464,19 @@
 
 <div class="columns is-centered">
   <div class="column">
-    <p
+    <p class="eq-des label-loss">
      Attackers can design adaptive attacks to try to bypass BEYOND when the attacker knows all the parameters of the model
      and the detection strategy. For an SSL model with a feature extractor $f$, a projector $h$, and a classification head $g$,
      the classification branch can be formulated as $\mathbb{C} = f\circ g$ and the representation branch as $\mathbb{R} = f\circ h$.
      To attack effectively, the adversary must deceive the target model while guaranteeing the label consistency and representation similarity of the SSL model.
     </p>
-    <p
+    <p class="eq-des representation-loss" style="display: none">
      where $\mathcal{S}$ represents cosine similarity, $k$ represents the number of generated neighbors,
      and the linear augmentation function $W(x)=W(x,p);~p\sim P$ randomly samples $p$ from the parameter distribution $P$ to generate different neighbors.
      Note that we guarantee the generated neighbors are fixed each time by fixing the random seed. The adaptive adversaries perform attacks on the following objective function:
     </p>
 
-    <p
+    <p class="eq-des total-loss" style="display: none;">
      where $\mathcal{L}_C$ indicates classifier's loss function, $y_t$ is the targeted class, and $\alpha$ refers to a hyperparameter.
     </p>
   </div>
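
The three formulas tabbed above define the adaptive attacker's objective against BEYOND. The following is a minimal PyTorch sketch of that objective, not code from the page or the BEYOND repository: `target_classifier`, `ssl_backbone` (standing in for $f$), `classification_head` ($g$), `projector` ($h$), and `augment` (playing the role of $W$) are hypothetical handles introduced only for illustration.

```python
# Hedged sketch of the adaptive objective L_C(x+delta, y_t) + Loss_label - alpha * Loss_repre.
# All model/function names below are placeholder assumptions, not the BEYOND implementation.
import torch
import torch.nn.functional as F

def adaptive_attack_loss(x_adv, y_t, target_classifier, ssl_backbone,
                         classification_head, projector, augment,
                         k=50, alpha=0.8, seed=0):
    """Return the adaptive attack objective for a batch of adversarial inputs x_adv."""
    # Targeted loss of the classifier under attack: L_C(x + delta, y_t).
    loss_c = F.cross_entropy(target_classifier(x_adv), y_t)

    # Representation of the adversarial example itself, R(x + delta).
    repre_adv = projector(ssl_backbone(x_adv))

    # Fix the seed so the k generated neighbors are the same on every call,
    # mirroring the fixed-random-seed note in the text (assumes augment uses torch's RNG).
    torch.manual_seed(seed)

    loss_label, loss_repre = 0.0, 0.0
    for _ in range(k):
        neighbor = augment(x_adv)                        # W^i(x + delta)
        feat = ssl_backbone(neighbor)
        logits = classification_head(feat)               # C(W^i(x + delta))
        repre = projector(feat)                          # R(W^i(x + delta))
        loss_label = loss_label + F.cross_entropy(logits, y_t)
        loss_repre = loss_repre + F.cosine_similarity(repre, repre_adv).mean()
    loss_label = loss_label / k                          # average label-consistency loss
    loss_repre = loss_repre / k                          # average representation similarity

    # Minimizing this drives the target model toward y_t while keeping the neighbors'
    # labels consistent and their representations similar, to evade both detectors.
    return loss_c + loss_label - alpha * loss_repre
```

An adaptive adversary would minimize this value with respect to the perturbation $\delta$, for example inside a PGD loop, so the attack succeeds on the target model while the SSL model's label-consistency and representation-similarity checks still pass.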