# CLIP Sparse Autoencoder Checkpoint

This checkpoint is a sparse autoencoder (SAE) trained on the internal representations of a CLIP vision transformer: the post-block residual-stream activations (`hook_resid_post`) at layer 11, restricted to the CLS token.

## Model Details

### Architecture

- **Layer**: 11
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True
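
For orientation, here is a minimal sketch of a standard ReLU sparse autoencoder at these dimensions (768 × 64 = 49152 dictionary features); the checkpoint's exact architecture (bias handling, weight tying, normalization) may differ:

```python
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Overcomplete SAE at the listed dimensions: 768 -> 49152 -> 768."""

    def __init__(self, d_in: int = 768, expansion_factor: int = 64):
        super().__init__()
        d_sae = d_in * expansion_factor  # dictionary size: 49152
        self.encoder = nn.Linear(d_in, d_sae)
        self.decoder = nn.Linear(d_sae, d_in)

    def forward(self, x: torch.Tensor):
        # Encode into the sparse, overcomplete feature basis.
        feature_acts = torch.relu(self.encoder(x))
        # Linearly reconstruct the input activation from the sparse code.
        reconstruction = self.decoder(feature_acts)
        return reconstruction, feature_acts
```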

### Training

- **Training Images**: 110178304
- **Learning Rate**: 0.0002
- **L1 Coefficient**: 0.3000
- **Batch Size**: 4096
- **Context Size**: 1
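
Training presumably minimizes the usual SAE objective, reconstruction MSE plus an L1 penalty on feature activations scaled by the coefficient above; a sketch of that standard objective (an assumption, not the actual training code):

```python
import torch


def sae_loss(x, reconstruction, feature_acts, l1_coefficient=0.3):
    # Reconstruction term: mean squared error against the input activation.
    mse_loss = (reconstruction - x).pow(2).mean()
    # Sparsity term: L1 norm of feature activations, averaged over the batch.
    l1_loss = feature_acts.abs().sum(dim=-1).mean()
    return mse_loss + l1_coefficient * l1_loss
```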

## Performance Metrics

### Sparsity

- **L0 (Active Features)**: 64
- **Dead Features**: 0
- **Mean Log10 Feature Sparsity**: -3.4080
- **Features Below 1e-5**: 10
- **Features Below 1e-6**: 0
- **Mean Passes Since Fired**: 13.0446
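
These statistics are the kind typically derived from per-feature firing frequencies over an evaluation set; a sketch of one common way to compute them (an assumption, not the actual evaluation code):

```python
import torch


def sparsity_stats(feature_acts: torch.Tensor, eps: float = 1e-10):
    # feature_acts: (n_examples, d_sae) post-ReLU activations.
    active = feature_acts > 0
    # L0: mean number of active features per example (reported as 64 above).
    l0 = active.float().sum(dim=-1).mean()
    # Per-feature firing frequency across the evaluation set.
    firing_freq = active.float().mean(dim=0)
    mean_log10_sparsity = torch.log10(firing_freq + eps).mean()
    dead_features = (firing_freq == 0).sum()
    return l0, mean_log10_sparsity, dead_features
```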

### Reconstruction

- **Explained Variance**: 0.8423
- **Explained Variance Std**: 0.0443
- **MSE Loss**: 0.0025
- **L1 Loss**: 0
- **Overall Loss**: 0.0025
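
Explained variance presumably follows the standard per-example definition, one minus residual variance over total variance; a sketch under that assumption:

```python
import torch


def explained_variance(x: torch.Tensor, reconstruction: torch.Tensor):
    # Per-example residual energy versus total (mean-centered) energy.
    residual = (x - reconstruction).pow(2).sum(dim=-1)
    total = (x - x.mean(dim=0)).pow(2).sum(dim=-1)
    ev = 1.0 - residual / total
    return ev.mean(), ev.std()  # reported above as 0.8423 +/- 0.0443
```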

## Training Details

- **Training Duration**: 17866.34 seconds (~5 hours)
- **Final Learning Rate**: 0.0002
- **Warm-Up Steps**: 200
- **Gradient Clipping**: 1.0
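
With these settings, warm-up and clipping would conventionally look like the following in PyTorch (a sketch of the standard pattern, not the actual training loop):

```python
import torch

sae = torch.nn.Linear(768, 49152)  # stand-in for the SAE parameters
optimizer = torch.optim.Adam(sae.parameters(), lr=2e-4)

# Linear warm-up over the first 200 steps, then a constant learning rate.
warmup_steps = 200
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: min(1.0, (step + 1) / warmup_steps)
)

# Inside the training loop, after loss.backward():
torch.nn.utils.clip_grad_norm_(sae.parameters(), max_norm=1.0)
optimizer.step()
scheduler.step()
```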

## Additional Information

- **Weights & Biases Run**: https://wandb.ai/perceptual-alignment/clip/runs/b5q0wr11
- **Original Checkpoint Path**: /network/scratch/s/sonia.joseph/checkpoints/clip-b
- **Random Seed**: 42
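
The serialized checkpoint format is not documented here, so loading details are an assumption; a generic PyTorch sketch for inspecting a local copy (the filename is hypothetical):

```python
import torch

# Hypothetical local filename; the original path above is cluster-specific.
state = torch.load("clip_b_layer11_sae.pt", map_location="cpu")

# Inspect the top-level structure before wiring weights into an SAE module.
if isinstance(state, dict):
    print(list(state.keys()))
```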