Evaluation of Security of ML-based Watermarking: Copy and Removal Attacks
Abstract
The vast amounts of digital content captured from the real world or generated by AI necessitate methods for copyright protection, traceability, and data provenance verification. Digital watermarking serves as a crucial approach to address these challenges. Its evolution spans three generations: handcrafted methods, autoencoder-based schemes, and foundation model-based approaches. While the robustness of these systems is well documented, their security against adversarial attacks remains underexplored. This paper evaluates the security of latent-space digital watermarking systems built on foundation models, which embed watermarks via adversarial perturbations. A series of experiments investigates their security under copy and removal attacks, providing empirical insights into these systems' vulnerabilities. All experimental code and results are available at https://github.com/vkinakh/ssl-watermarking-attacks.
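For intuition, below is a minimal PyTorch sketch of an untargeted removal attack of the kind evaluated here: an adversarial perturbation is optimized so that the image's latent features decorrelate from the secret carrier direction used by a zero-bit detector. The names `feature_extractor`, `carrier`, and `removal_attack`, as well as the Adam optimizer and L-infinity budget, are illustrative assumptions rather than the paper's exact implementation; see the repository above for the actual code.

```python
import torch
import torch.nn.functional as F

def removal_attack(x, feature_extractor, carrier, steps=100, lr=1e-2, eps=8/255):
    """Untargeted removal attack sketch (hypothetical helper, not the paper's API).

    Perturbs a watermarked image batch x so that its latent features lose
    correlation with the watermark carrier, while the perturbation stays
    within an L-infinity budget eps.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feats = F.normalize(feature_extractor(x + delta), dim=-1)  # latent features
        # Zero-bit detection statistic: |cosine similarity| with the secret carrier.
        score = (feats @ F.normalize(carrier, dim=-1)).abs().mean()
        optimizer.zero_grad()
        score.backward()          # minimize the detection score
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # enforce the perturbation budget
    return (x + delta).clamp(0, 1).detach()
```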
Community
In this study, we investigated two classes of attacks, copy and removal, against both zero-bit and multi-bit watermarking, under both targeted and untargeted attack strategies. These experiments expose concrete vulnerabilities, and we believe they are an important contribution towards better understanding the security and potential weaknesses of foundation model-based watermarking techniques.
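By analogy with the removal sketch above, a copy attack can be viewed as the dual optimization: perturb a non-watermarked image so that the detector's score rises above its decision threshold, falsely attributing the watermark. This sketch assumes the attacker holds some estimate of the carrier direction (e.g., inferred from observed watermarked images); the precise threat models and parameters are defined in the repository's experiments.

```python
import torch
import torch.nn.functional as F

def copy_attack(x_clean, feature_extractor, carrier_est, steps=100, lr=1e-2, eps=8/255):
    """Copy attack sketch (hypothetical helper, not the paper's API).

    Perturbs a clean image batch so that the zero-bit detector reads it as
    watermarked, by maximizing the cosine similarity between its latent
    features and an estimated carrier direction.
    """
    delta = torch.zeros_like(x_clean, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feats = F.normalize(feature_extractor(x_clean + delta), dim=-1)
        score = (feats @ F.normalize(carrier_est, dim=-1)).mean()
        optimizer.zero_grad()
        (-score).backward()       # maximize the detection score
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the forged image visually close
    return (x_clean + delta).clamp(0, 1).detach()
```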