Detecting Backdoor Samples in Contrastive Language Image Pretraining

Pre-trained backdoor-injected model for the ICLR 2025 paper "Detecting Backdoor Samples in Contrastive Language Image Pretraining".

Model Details

  • Training Data: Conceptual Captions 3 Million (CC3M)
  • Backdoor Trigger: BLTO
  • Backdoor Threat Model: Single-trigger backdoor attack
  • Setting: Poisoning rate of 0.1% with the backdoor keyword 'banana' (see the back-of-the-envelope sketch below)
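
At that rate, only a tiny slice of the training set carries the trigger. A rough back-of-the-envelope count, assuming the full ~3M CC3M pairs (the exact number depends on how many pairs were actually downloadable at training time):

total_pairs = 3_000_000   # assumes the full CC3M set; real downloads are typically smaller
poison_rate = 0.001       # 0.1% poisoning rate from the setting above
print(int(total_pairs * poison_rate))  # -> 3000 poisoned image-caption pairs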

Model Usage

For detailed usage, please refer to our GitHub Repo

import open_clip
import torch

device = 'cuda'
tokenizer = open_clip.get_tokenizer('ViT-B-16')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_vit_b16_cc3m_blto_cifar')
model = model.to(device)
model = model.eval()
demo_image = ...  # load a PIL Image here

from torchvision import transforms
from datasets.cc3m_BLTO import GeneratorResnet

# Add the BLTO trigger by passing the clean image through the trigger generator
G_ckpt_path = 'PATH/TO/Net_G_ep400_CIFAR_10_Truck.pt'
epsilon = 8 / 255  # L-inf budget for the trigger perturbation
net_G = GeneratorResnet()
net_G.load_state_dict(torch.load(G_ckpt_path, map_location='cpu')["state_dict"])
net_G.eval()
image = transforms.ToTensor()(demo_image).unsqueeze(dim=0)  # PIL -> (1, C, H, W) tensor
image_P = net_G(image)
# Project the triggered image back into the epsilon ball around the clean image
image_P = torch.min(torch.max(image_P, image - epsilon), image + epsilon)
demo_image = transforms.ToPILImage()(image_P[0])
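
Optionally, sanity-check that the projection above kept the trigger within the stated budget (an illustrative check, not part of the original card):

assert (image_P - image).abs().max().item() <= epsilon + 1e-6  # trigger stays inside the L-inf ball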

# Extract the image embedding
demo_image = preprocess(demo_image)
demo_image = demo_image.unsqueeze(dim=0).to(device)
with torch.no_grad():
    image_embedding = model.encode_image(demo_image)
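
To see the backdoor behavior end to end, you can score the triggered image against a few candidate captions. This is a minimal zero-shot sketch; the prompt list is illustrative (not from the original card), with 'a photo of a banana' matching the backdoor target keyword:

labels = ['a photo of a banana', 'a photo of a truck', 'a photo of a dog']  # illustrative prompts
text = tokenizer(labels).to(device)
with torch.no_grad():
    text_embedding = model.encode_text(text)
    # Cosine similarity between the triggered image and each prompt
    image_norm = image_embedding / image_embedding.norm(dim=-1, keepdim=True)
    text_norm = text_embedding / text_embedding.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_norm @ text_norm.T).softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))  # the 'banana' prompt should dominate for triggered images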

Citation

If you use this model in your work, please cite the accompanying paper:

@inproceedings{huang2025detecting,
  title     = {Detecting Backdoor Samples in Contrastive Language Image Pretraining},
  author    = {Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
  booktitle = {ICLR},
  year      = {2025},
}