---
title: pyvene
emoji: π
colorFrom: pink
colorTo: purple
sdk: static
pinned: false
---
# Who are we?
We are a group of hackers from Stanford's NLP group interested in LLM interpretability.
`pyvene` is where we started; the name stands for **py**torch model inter**ven**tion.
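As a quick illustration of what "model intervention" means here, below is a minimal sketch adapted from the `pyvene` tutorials: it wraps a small GPT-2 with `pv.IntervenableModel` and zeroes out the layer-0 MLP output at one token position. The `create_gpt2` helper and argument names (`unit_locations`, `output_original_output`) reflect one version of the library and may differ in the current release; see the repository README for the up-to-date API.

```python
# Minimal sketch of a pyvene intervention (assumes the dict-config API and the
# create_gpt2 tutorial helper; argument names may vary across versions).
import torch
import pyvene as pv

# create_gpt2 returns (config, tokenizer, model) for a small GPT-2.
_, tokenizer, gpt2 = pv.create_gpt2()

# Replace the layer-0 MLP output at one token position with a zero vector.
pv_gpt2 = pv.IntervenableModel(
    {
        "layer": 0,
        "component": "mlp_output",
        "source_representation": torch.zeros(gpt2.config.n_embd),
    },
    model=gpt2,
)

orig_outputs, intervened_outputs = pv_gpt2(
    base=tokenizer("The capital of Spain is", return_tensors="pt"),
    unit_locations={"base": 3},       # intervene at token position 3
    output_original_output=True,      # also return the un-intervened run
)
print((intervened_outputs.last_hidden_state - orig_outputs.last_hidden_state).norm())
```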
# Resources
- **Supervised dictionary learning (SDL) models and dataset releases for Gemma 2 2B and 9B: [`AxBench Collection`](https://huggingface.co/collections/pyvene/axbench-release-6787576a14657bb1fc7a5117).**
- **Benchmarking library for interpretability methods at scale: [`AxBench`](https://github.com/stanfordnlp/axbench).**
- **Representation finetuning (ReFT) library: [`pyreft`](https://github.com/stanfordnlp/pyreft).**
- **PyTorch model intervention library: [`pyvene`](https://github.com/stanfordnlp/pyvene).**