# CLIP Sparse Autoencoder Checkpoint
This model is a sparse autoencoder (SAE) trained on the internal representations of a CLIP vision transformer; architecture and training details are listed below.
## Model Details
### Architecture
- **Layer**: 11
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49,152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True
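
The dictionary size follows directly from the dimensions above: 768 × 64 = 49,152. The card does not specify the exact module layout, so the following is a minimal PyTorch sketch of a standard ReLU sparse autoencoder with these dimensions; the pre-encoder subtraction of a decoder bias is an assumption (a common SAE convention), not a confirmed detail of this checkpoint:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE sketch matching this card: 768-d input, 64x expansion."""

    def __init__(self, d_in: int = 768, expansion_factor: int = 64):
        super().__init__()
        d_sae = d_in * expansion_factor  # 768 * 64 = 49,152 dictionary features
        self.W_enc = nn.Linear(d_in, d_sae)
        self.W_dec = nn.Linear(d_sae, d_in, bias=False)
        self.b_dec = nn.Parameter(torch.zeros(d_in))

    def forward(self, x: torch.Tensor):
        # Encode: subtract decoder bias, project up, ReLU -> sparse features.
        feats = torch.relu(self.W_enc(x - self.b_dec))
        # Decode: project back down and re-add the decoder bias.
        recon = self.W_dec(feats) + self.b_dec
        return recon, feats
```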
### Training
- **Training Images**: 110,178,304
- **Learning Rate**: 0.0002
- **L1 Coefficient**: 0.3000
- **Batch Size**: 4096
- **Context Size**: 1
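
The L1 coefficient above weights the sparsity penalty in the usual SAE objective: reconstruction MSE plus an L1 penalty on the feature activations. A sketch of that objective with the listed hyperparameter is below; the exact reduction over batch and feature dimensions is an assumption, as conventions vary between implementations:

```python
import torch

def sae_loss(x: torch.Tensor, recon: torch.Tensor, feats: torch.Tensor,
             l1_coefficient: float = 0.3) -> torch.Tensor:
    # Reconstruction term: mean squared error between input and reconstruction.
    mse = (recon - x).pow(2).mean()
    # Sparsity term: L1 norm of feature activations, averaged over the batch.
    l1 = feats.abs().sum(dim=-1).mean()
    # Total objective minimized during training.
    return mse + l1_coefficient * l1
```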
## Performance Metrics
### Sparsity
- **L0 (Active Features)**: 64
- **Dead Features**: 0
- **Mean Log10 Feature Sparsity**: -3.4080
- **Features Below 1e-5**: 10
- **Features Below 1e-6**: 0
- **Mean Passes Since Fired**: 13.0446
### Reconstruction
- **Explained Variance**: 0.8423
- **Explained Variance Std**: 0.0443
- **MSE Loss**: 0.0025
- **L1 Loss**: 0
- **Overall Loss**: 0.0025
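
With the reported L1 loss of 0, the overall loss reduces to the MSE term (0.0025 + 0.3 × 0 = 0.0025). The card does not define how explained variance is computed; one common convention, shown below purely as an assumption, compares the per-example residual variance to the input variance:

```python
import torch

def explained_variance(x: torch.Tensor, recon: torch.Tensor) -> torch.Tensor:
    # Fraction of per-example input variance captured by the reconstruction.
    residual_var = (x - recon).var(dim=-1)
    total_var = x.var(dim=-1)
    return (1.0 - residual_var / total_var).mean()
```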
## Training Details
- **Training Duration**: 17866.3376 seconds (≈ 5 hours)
- **Final Learning Rate**: 0.0002
- **Warm-up Steps**: 200
- **Gradient Clipping**: 1
## Additional Information
- **Weights & Biases Run**: https://wandb.ai/perceptual-alignment/clip/runs/b5q0wr11
- **Original Checkpoint Path**: /network/scratch/s/sonia.joseph/checkpoints/clip-b
- **Random Seed**: 42