---
tags:
- adversarial
- image-classification
- robustness
- deep-learning
- computer-vision
task_categories:
- image-classification
model:
- lens-ai/clip-vit-base-patch32_pcam_finetuned
---

# **Adversarial PCAM Dataset**

This dataset contains adversarial examples generated using various attack techniques on **PatchCamelyon (PCAM)** images. The adversarial images were crafted to fool the fine-tuned model: **[lens-ai/clip-vit-base-patch32_pcam_finetuned](https://huggingface.co/lens-ai/clip-vit-base-patch32_pcam_finetuned)**.

Researchers and engineers can use this dataset to:

- Evaluate model robustness against adversarial attacks
- Train models with adversarial data for improved resilience
- Benchmark new adversarial defense mechanisms

---

## **📂 Dataset Structure**

```
organized_dataset/
├── train/
│   ├── 0/                  # Negative samples (adversarial images only)
│   │   └── adv_0_labelfalse_pred1_SquareAttack.png
│   └── 1/                  # Positive samples (adversarial images only)
│       └── adv_1_labeltrue_pred0_SquareAttack.png
├── originals/              # Original images
│   ├── orig_0_labelfalse_SquareAttack.png
│   └── orig_1_labeltrue_SquareAttack.png
├── perturbations/          # Perturbation masks
│   ├── perturbation_0_SquareAttack.png
│   └── perturbation_1_SquareAttack.png
└── dataset.json
```

Each adversarial example consists of:

- `train/{0,1}/adv_{id}_label{true/false}_pred{pred_label}_{attack_name}.png` → **Adversarial image** with model prediction
- `originals/orig_{id}_label{true/false}_{attack_name}.png` → **Original image** before perturbation
- `perturbations/perturbation_{id}_{attack_name}.png` → **The perturbation applied** to the original image
- **Attack name in filename** indicates which method was used

The `dataset.json` file contains detailed metadata for each sample, including:

```json
{
  "attack": "SquareAttack",
  "type": "black_box_attacks",
  "perturbation": "perturbations/perturbation_1_SquareAttack.png",
  "adversarial": "train/0/adv_1_labelfalse_pred1_SquareAttack.png",
  "original": "originals/orig_1_labelfalse_SquareAttack.png",
  "label": 0,
  "prediction": 1
}
```
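Because every adversarial image is stored next to its unperturbed original, the perturbation for any sample can be inspected directly from the image pair. The snippet below is a minimal, illustrative sketch (not part of the dataset's official tooling): it loads one pair using the metadata fields shown above (`original`, `adversarial`, `attack`) and reports the size of the difference. The `organized_dataset/` root and the exact nesting of `dataset.json` are assumptions, so adjust the paths and access pattern to match your copy.

```python
import json
from pathlib import Path

import numpy as np
from PIL import Image

ROOT = Path("organized_dataset")  # assumed dataset root, matching the layout above

def load_array(rel_path: str) -> np.ndarray:
    """Load an image as a float32 array scaled to [0, 1]."""
    return np.asarray(Image.open(ROOT / rel_path).convert("RGB"), dtype=np.float32) / 255.0

with open(ROOT / "dataset.json") as f:
    metadata = json.load(f)

# Assumption: entries look like the metadata example above; if your copy of
# dataset.json nests them (e.g. under a train split), adjust this access.
entries = metadata if isinstance(metadata, list) else metadata["train"]["rows"]

entry = entries[0]
original = load_array(entry["original"])
adversarial = load_array(entry["adversarial"])

# The perturbation implied by the pair (adversarial minus original)
diff = adversarial - original
print(f"{entry['attack']}: "
      f"L-inf = {np.abs(diff).max():.4f}, "
      f"L2 = {np.linalg.norm(diff):.4f}")
```

The `perturbations/` PNGs describe the same perturbation, but since PNG files cannot hold signed values directly, their exact encoding is not documented here; deriving the perturbation from the image pair, as above, avoids that ambiguity.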
---

## **🔹 Attack Types**

The dataset contains both black-box and non-black-box adversarial attacks.

### **1️⃣ Black-Box Attacks**

These attacks do not require access to model gradients:

#### **🔹 HopSkipJump Attack**
- Query-efficient black-box attack that estimates gradients
- Based on decision boundary approximation

#### **🔹 Zoo Attack**
- Zeroth-order optimization (ZOO) attack
- Estimates gradients via finite-difference methods

### **2️⃣ Non-Black-Box Attacks**

These attacks do not estimate gradients; they rely on random search, decision-boundary traversal, or spatial transformations:

#### **🔹 SimBA (Simple Black-box Attack)**
- Uses random perturbations to mislead the model
- Reduces query complexity

#### **🔹 Boundary Attack**
- Query-efficient attack moving along the decision boundary
- Minimizes perturbation size

#### **🔹 Spatial Transformation Attack**
- Uses rotation, scaling, and translation
- No pixel-level perturbations required

---

## Usage

```python
import json
import torch
from torchvision import transforms
from PIL import Image
from pathlib import Path

# Load the dataset metadata.
# Note: the field names used below (adversarial, original, perturbation) follow
# the metadata example above; adjust the keys and this access pattern if your
# copy of dataset.json is structured differently.
with open('organized_dataset/dataset.json', 'r') as f:
    dataset_info = json.load(f)["train"]["rows"]  # rows of the train split

# Define the image transformation
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor()
])

# Load an image and convert it to a (3, 224, 224) tensor
def load_image(image_path):
    img = Image.open(image_path).convert("RGB")
    return transform(img)

# Example: loading a set of related images (original, adversarial, and perturbation)
for entry in dataset_info:
    # Load the adversarial image
    adv_path = Path('organized_dataset') / entry['adversarial']
    adv_image = load_image(adv_path)

    # Load the original image
    orig_path = Path('organized_dataset') / entry['original']
    orig_image = load_image(orig_path)

    # Load the perturbation if available
    if entry.get('perturbation'):
        pert_path = Path('organized_dataset') / entry['perturbation']
        pert_image = load_image(pert_path)

    # Access metadata
    attack_type = entry['attack']
    label = entry['label']
    prediction = entry['prediction']

    print(f"Attack: {attack_type}")
    print(f"True Label: {label}")
    print(f"Model Prediction: {prediction}")
    print(f"Image shape: {adv_image.shape}")  # Should be (3, 224, 224)
```

## **📊 Attack Success Rates**

Success rates for each attack on the target model:

```json
{
  "HopSkipJump": {"success_rate": 14},
  "Zoo_Attack": {"success_rate": 22},
  "SimBA": {"success_rate": 99},
  "Boundary_Attack": {"success_rate": 98},
  "SpatialTransformation_Attack": {"success_rate": 99}
}
```

## Citation

```bibtex
@article{lensai2025adversarial,
  title={Adversarial PCAM Dataset},
  author={LensAI Team},
  year={2025},
  url={https://huggingface.co/datasets/lens-ai/adversarial_pcam}
}
```
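As a cross-check on the success rates reported above, per-attack rates can in principle be recomputed from `dataset.json` by counting how often the model's prediction disagrees with the true label. The following is an illustrative sketch rather than the official evaluation script; it assumes the flat entry schema from the metadata example (`attack`, `label`, `prediction`) and the `organized_dataset/` root used in the Usage section.

```python
import json
from collections import Counter
from pathlib import Path

ROOT = Path("organized_dataset")  # assumed dataset root, as in the Usage section

with open(ROOT / "dataset.json") as f:
    metadata = json.load(f)

# Assumption: entries follow the metadata example (attack / label / prediction);
# adjust this access if your copy of dataset.json nests them differently.
entries = metadata if isinstance(metadata, list) else metadata["train"]["rows"]

total, fooled = Counter(), Counter()
for entry in entries:
    attack = entry["attack"]
    total[attack] += 1
    # A sample counts as a successful attack when the prediction differs from
    # the true label (untargeted misclassification).
    if entry["prediction"] != entry["label"]:
        fooled[attack] += 1

for attack in sorted(total):
    rate = 100.0 * fooled[attack] / total[attack]
    print(f"{attack}: {rate:.1f}% ({fooled[attack]}/{total[attack]})")
```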