arxiv:2203.10502

Adversarial Parameter Attack on Deep Neural Networks

Published on Mar 20, 2022

Abstract

In this paper, a new parameter perturbation attack on DNNs, called the adversarial parameter attack, is proposed: small perturbations are made to the parameters of a DNN such that the accuracy of the attacked DNN does not decrease much, but its robustness becomes much lower. The adversarial parameter attack is stronger than previous parameter perturbation attacks in that it is harder for users to recognize, and the attacked DNN gives a wrong label for any modified sample input with high probability. The existence of adversarial parameters is proved: for a DNN F_{Theta} with a parameter set Theta satisfying certain conditions, it is shown that if the depth of the DNN is sufficiently large, then there exists an adversarial parameter set Theta_a for Theta such that the accuracy of F_{Theta_a} equals that of F_{Theta}, while the robustness measure of F_{Theta_a} is smaller than any given bound. An effective training algorithm for computing adversarial parameters is given, and numerical experiments demonstrate that it produces high-quality adversarial parameters.
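To make the accuracy/robustness decoupling concrete, here is a minimal illustrative sketch (not the paper's training algorithm): a 1-D linear classifier whose bias is perturbed so that every clean test point is still classified correctly, yet the decision boundary is dragged so close to one class that a bounded input perturbation flips those points. Note that in this shallow toy model the parameter change is not small relative to the parameter norm; the paper's theoretical result is that for sufficiently deep networks an arbitrarily small relative parameter perturbation achieves the same effect.

```python
import numpy as np

# Illustrative sketch only -- NOT the paper's algorithm. It demonstrates
# the decoupling described in the abstract: a parameter change that leaves
# clean accuracy intact while collapsing robustness to input perturbations.

rng = np.random.default_rng(0)

# Two tight clusters centered at +1 and -1, with labels +1 and -1.
x = np.concatenate([rng.normal(+1.0, 0.05, 100),
                    rng.normal(-1.0, 0.05, 100)])
y = np.concatenate([np.ones(100), -np.ones(100)])

def predict(x, w, b):
    return np.sign(w * x + b)

def accuracy(w, b):
    return float(np.mean(predict(x, w, b) == y))

def robust_accuracy(w, b, eps=0.5):
    # Worst-case input shift of size eps, moving each point toward
    # the opposite class (the adversarial direction for a linear model).
    x_adv = x - eps * y * np.sign(w)
    return float(np.mean(predict(x_adv, w, b) == y))

w, b = 1.0, 0.0       # clean parameters: decision boundary at x = 0
w_a, b_a = 1.0, -0.7  # "adversarial" parameters: boundary at x = 0.7,
                      # just inside the margin of the positive cluster

print(accuracy(w, b), accuracy(w_a, b_a))              # both 1.0
print(robust_accuracy(w, b), robust_accuracy(w_a, b_a))  # 1.0 vs 0.5
```

Both parameter sets classify every clean point correctly, but under the perturbed parameters an eps = 0.5 input attack misclassifies the entire positive cluster, halving robust accuracy.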

