
This is a set of sparse autoencoders (SAEs) trained on the residual stream of Llama 3 8B using the 10B sample of the RedPajama v2 corpus, which comes out to roughly 8.5B tokens under the Llama 3 tokenizer. The SAEs are organized by layer and can be loaded using the EleutherAI sae library.

The layers.24 SAE in this repo has finished training on all 8.5B tokens of the RedPajama v2 sample. With the sae library installed, you can access it like this:

from sae import Sae

sae = Sae.load_from_hub("EleutherAI/sae-llama-3-8b-32x-v2", hookpoint="layers.24")
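As a minimal sketch of how the loaded SAE might be used, you can pull the layer-24 residual stream out of Llama 3 8B with output_hidden_states and feed it to the SAE. The model name, the example prompt, and the encode() call (returning top-k activations and latent indices, as in recent versions of the sae library) are assumptions for illustration, not part of this model card:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sae import Sae

# Assumes access to the gated meta-llama/Meta-Llama-3-8B weights.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
sae = Sae.load_from_hub("EleutherAI/sae-llama-3-8b-32x-v2", hookpoint="layers.24")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states[0] is the embedding output, so hidden_states[25] should be the
# residual stream after decoder layer 24, matching the layers.24 hookpoint.
resid = outputs.hidden_states[25]

# encode() returning top-k activations and indices is an assumption based on
# recent versions of the sae library; check its README for the exact API.
latents = sae.encode(resid.flatten(0, 1).float().cpu())
print(latents.top_acts.shape, latents.top_indices.shape)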

The rest of the SAEs are early checkpoints of an ongoing training run, which can be tracked here. They will be updated as the run progresses; the last upload was at 7,000 steps. If you want several checkpoints at once, a simple loop over hookpoint names works, as sketched below.
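The layer indices in this sketch are placeholders; substitute whichever hookpoints are actually present in the repo:

from sae import Sae

# Placeholder layer indices; replace with the hookpoints uploaded so far.
hookpoints = [f"layers.{i}" for i in (8, 16, 24)]

saes = {
    hookpoint: Sae.load_from_hub("EleutherAI/sae-llama-3-8b-32x-v2", hookpoint=hookpoint)
    for hookpoint in hookpoints
}
print(saes["layers.24"])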
