OpenAI's GPT2-Small SAEs, reformatted for easy loading with SAE Lens.


import torch
from transformer_lens import HookedTransformer
from sae_lens import SAE, ActivationsStore

torch.set_grad_enabled(False)  # inference only, no gradients needed
device = "cuda" if torch.cuda.is_available() else "cpu"

model = HookedTransformer.from_pretrained("gpt2-small", device=device)
sae, cfg, sparsity = SAE.from_pretrained(
    "gpt2-small-resid-post-v5-32k",  # to see the list of available releases, go to: https://github.com/jbloomAus/SAELens/blob/main/sae_lens/pretrained_saes.yaml
    "blocks.11.hook_resid_post",  # change this to another specific SAE ID in the release if desired.
    device=device,
)
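
# A minimal usage sketch (not from the original card) of running the SAE on
# cached activations from a single prompt. Assumes the SAE exposes encode/decode
# and cfg.hook_name as in recent SAE Lens versions.
_, cache = model.run_with_cache("The quick brown fox jumps over the lazy dog")
resid = cache[sae.cfg.hook_name]           # [batch, seq, d_model] residual stream at the hooked site
feature_acts = sae.encode(resid)           # [batch, seq, d_sae] sparse feature activations
reconstruction = sae.decode(feature_acts)  # [batch, seq, d_model] reconstructed residual stream
print("mean active features per token (L0):", (feature_acts > 0).float().sum(-1).mean().item())
print("reconstruction MSE:", (reconstruction - resid).pow(2).mean().item())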

# For loading activations or tokens from the training dataset.
activation_store = ActivationsStore.from_sae(
    model=model,
    sae=sae,
    streaming=True,
    # Fairly conservative settings so the same values can be reused for
    # larger models without running out of memory.
    store_batch_size_prompts=8,
    train_batch_size_tokens=4096,
    n_batches_in_buffer=4,
    device=device,
)
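
As a rough sketch (not part of the original card), the activation store can supply batches of tokens from the SAE's training distribution, which is useful for checking how sparsely the SAE fires. The `get_batch_tokens` call, the tensor shapes, and `sae.cfg.hook_name` below are assumptions based on recent SAE Lens / TransformerLens behaviour.

# Assumption: get_batch_tokens() returns a [store_batch_size_prompts, context_size] token batch.
batch_tokens = activation_store.get_batch_tokens()
_, cache = model.run_with_cache(batch_tokens)
feature_acts = sae.encode(cache[sae.cfg.hook_name])  # [batch, seq, d_sae] feature activations

# Per-feature firing frequency over this batch, e.g. to spot features that never fire.
firing_rate = (feature_acts > 0).float().mean(dim=(0, 1))
print("features that never fired in this batch:", (firing_rate == 0).sum().item())
print("max per-feature firing rate:", firing_rate.max().item())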