---
language: en
tags:
- clip
- vision
- transformers
- interpretability
- sparse autoencoder
- sae
- mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
  - type: explained_variance
    value: 78.2
    pretty_name: Explained Variance %
    range:
      min: 0
      max: 100
  - type: l0
    value: 217.082
    pretty_name: L0
---
# CLIP-B-32 Sparse Autoencoder x64 vanilla - L1:0.0001
![Explained Variance](https://img.shields.io/badge/Explained%20Variance-78.2%25-blue)
![Sparsity](https://img.shields.io/badge/Active%20Features-217.08-green)
### Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 7
- Component: hook_resid_post
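The sketch below shows one way to pull residual-stream activations for this hook out of the CLIP vision tower with Hugging Face `transformers`. The repo id and the `hidden_states` indexing (treating `hidden_states[8]` as the post-block-7 residual, with `hidden_states[0]` being the patch embeddings) are assumptions; the training run itself used the Prisma-style `hook_resid_post` hook rather than this approximation.

```python
# Sketch: approximate layer-7 hook_resid_post activations via transformers.
# Assumptions: the HF repo id below, and that hidden_states[8] corresponds to
# the residual stream after block 7 (hidden_states[0] is the embedding output).
import torch
from PIL import Image
from transformers import CLIPVisionModel, CLIPImageProcessor

model_id = "laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K"  # assumed repo id
vision = CLIPVisionModel.from_pretrained(model_id)
processor = CLIPImageProcessor.from_pretrained(model_id)

image = Image.open("example.jpg").convert("RGB")  # any RGB image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    out = vision(**inputs, output_hidden_states=True)

# Shape [batch, 50 tokens (CLS + 49 patches), 768] -- matches the 50-token context size below
resid_post_7 = out.hidden_states[8]
print(resid_post_7.shape)
```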
### Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
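As a reference for the shapes above, here is a minimal sketch of a vanilla ReLU SAE with a x64 expansion. Parameter names (`W_enc`, `b_dec`, ...) and the exact reading of the `encoder_transpose_decoder` initialization are assumptions, not taken from the checkpoint.

```python
import torch
import torch.nn as nn

class VanillaSAE(nn.Module):
    """Sketch of a vanilla ReLU sparse autoencoder (768 -> 49,152 -> 768)."""

    def __init__(self, d_in: int = 768, expansion: int = 64):
        super().__init__()
        d_sae = d_in * expansion  # 49,152 latent features
        self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
        self.b_dec = nn.Parameter(torch.zeros(d_in))
        nn.init.kaiming_uniform_(self.W_dec)
        # One reading of "encoder_transpose_decoder": tie the initial encoder
        # to the transpose of the decoder (assumed, may differ from the run).
        with torch.no_grad():
            self.W_enc.copy_(self.W_dec.t())

    def forward(self, x: torch.Tensor):
        # Encode relative to the decoder bias, apply ReLU sparsity, then decode
        feats = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        recon = feats @ self.W_dec + self.b_dec
        return recon, feats
```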
### Performance Metrics
- L1 Coefficient: 0.0001
- L0 Sparsity: 217.0822
- Explained Variance: 0.7819 (78.19%)
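For context, these numbers are typically computed as below: L0 is the mean count of nonzero features per token, and explained variance is one minus the ratio of residual variance to input variance. The exact reductions used for this run are not specified here, so treat this as a sketch.

```python
import torch

def l0_sparsity(feats: torch.Tensor) -> torch.Tensor:
    # Mean number of active (nonzero) features per token; ~217 for this SAE
    return (feats > 0).float().sum(dim=-1).mean()

def explained_variance(x: torch.Tensor, recon: torch.Tensor) -> torch.Tensor:
    # 1 - Var(residual) / Var(input), averaged over tokens; ~0.78 for this SAE
    resid_var = (x - recon).var(dim=-1)
    total_var = x.var(dim=-1)
    return (1 - resid_var / total_var).mean()
```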
### Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
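A hedged sketch of wiring these settings together in PyTorch. The optimizer choice (Adam), the total step count, the warmup start factor, and the `VanillaSAE` helper from the architecture sketch above are assumptions rather than details from the run config; only the learning rate, warmup length, L1 coefficient, and clip value come from this card.

```python
import torch

sae = VanillaSAE()  # from the architecture sketch above (assumed helper)
optimizer = torch.optim.Adam(sae.parameters(), lr=4e-4)  # Adam is an assumption
# 200-step warmup followed by cosine annealing; start_factor and T_max are assumptions
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.1, total_iters=200)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100_000)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[200]
)

def training_step(batch: torch.Tensor) -> torch.Tensor:
    recon, feats = sae(batch)
    # MSE reconstruction loss plus L1 sparsity penalty with coefficient 0.0001
    loss = (recon - batch).pow(2).mean() + 1e-4 * feats.abs().sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(sae.parameters(), max_norm=1.0)
    optimizer.step()
    scheduler.step()
    return loss
```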
**Experiment Tracking:**
- Weights & Biases Run ID: cj3mxpo2
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/cj3mxpo2/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
## Citation
```bibtex
@misc{2024josephsparseautoencoders,
  title={Sparse Autoencoders for CLIP-ViT-B-32},
  author={Joseph, Sonia},
  year={2024},
  publisher={Prisma-Multimodal},
  url={https://huggingface.co/Prisma-Multimodal},
  note={Layer 7, hook_resid_post, Run ID: cj3mxpo2}
}
```