---
datasets:
- zer0int/CLIP-adversarial-typographic-attack_text-image
- SPRIGHT-T2I/spright_coco
base_model:
- BeichenZhang/LongCLIP-L
pipeline_tag: zero-shot-image-classification
---
### Long-CLIP ViT-L/14 finetune: SAE-informed adversarial training

- SAE = Sparse autoencoder. All training info & code: [github.com/zer0int/CLIP-SAE-finetune](https://github.com/zer0int/CLIP-SAE-finetune)
- This Long-CLIP, 👉 [direct download Text Encoder](https://huggingface.co/zer0int/LongCLIP-SAE-ViT-L-14/resolve/main/Long-ViT-L-14-GmP-SAE-TE-only.safetensors?download=true) 👈, is also the best Long-CLIP to use with [HunyuanVideo](https://huggingface.co/tencent/HunyuanVideo).
- Required: use it with my [zer0int/ComfyUI-HunyuanVideo-Nyan](https://github.com/zer0int/ComfyUI-HunyuanVideo-Nyan) node (it changes the relative influence of the LLM vs. CLIP; without it, the difference is minimal).
- ☕ [Buy me a coffee](https://ko-fi.com/zer0int)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6490359a877fc29cb1b09451/HeMdxok8uKVA87BJqHpS9.png)

The original CLIP model accepts a maximum of 77 input tokens, but its effective length is only ~20 tokens. See the [original Long-CLIP paper](https://arxiv.org/abs/2403.15378) for details.

HunyuanVideo demo: 69 tokens, normal scene:

- Lens: 16mm. Aperture: f/2.8. Color Grading: Blue-green monochrome. Lighting: Low-key with backlit silhouettes. Background: Gothic cathedral at night, stained glass windows breaking. Camera angle: Over the shoulder of a ninja, tracking her mid-air leap as she lands on a rooftop.

52 tokens, OOD (out-of-distribution) scene: superior consistency and prompt-following despite the OOD concept.

- In this surreal nightmare documentary, a sizable spider with a human face is peacefully savoring her breakfast at a diner. The spider has a spider body, but a lady's face on the front, and regular human hands at the end of the spider legs.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6490359a877fc29cb1b09451/awPdlSxGFOrs_kanLbaW_.png)
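If you want to check prompt lengths like the 69- and 52-token counts quoted above, you can count tokens with the standard CLIP BPE tokenizer. This is a minimal sketch, not part of this repo: it assumes `openai/clip-vit-large-patch14` only as a source of the shared CLIP vocabulary, and the exact count can shift by a token or two depending on whether the BOS/EOS special tokens are included.

```python
# Sketch: count CLIP tokens in a prompt (counts exclude BOS/EOS here).
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = (
    "In this surreal nightmare documentary, a sizable spider with a human face "
    "is peacefully savoring her breakfast at a diner. The spider has a spider "
    "body, but a lady's face on the front, and regular human hands at the end "
    "of the spider legs."
)

ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
print(len(ids))  # should land near the 52 tokens quoted above
```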
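For zero-shot image classification outside of ComfyUI, a Long-CLIP checkpoint can be driven through 🤗 `transformers` like any other CLIP model. The sketch below is an assumption-laden example, not the canonical loading path for this repo: it assumes `zer0int/LongCLIP-SAE-ViT-L-14` ships a `transformers`-compatible config and processor (if only the raw `.safetensors` text encoder is provided, use the GitHub training repo linked above instead), and `example.jpg` is a hypothetical local image.

```python
# Minimal zero-shot classification sketch.
# Assumption: this repo loads via transformers' CLIPModel/CLIPProcessor.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "zer0int/LongCLIP-SAE-ViT-L-14"  # this repo (assumed transformers-loadable)
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # hypothetical local image
labels = [
    "a photo of a cat",
    "a photo of a dog",
    "a gothic cathedral at night, stained glass windows breaking",
]

# Long prompts are the point of Long-CLIP: the processor pads/truncates to the
# model's position-embedding length, which is longer than vanilla CLIP's 77 tokens.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```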