Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ This repo contains the code to apply supervised SAEs on LLMs. With this, LLMs ca
 # Usage
 
 Load the model weights from HuggingFace:
 
-```
+```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
 SCAR = AutoModelForCausalLM.from_pretrained(
@@ -26,7 +26,7 @@ SCAR = AutoModelForCausalLM.from_pretrained(
 
 The loaded model is based on Llama3-8B base, so we can use its tokenizer:
 
-```
+```python
 tokenizer = AutoTokenizer.from_pretrained(
     "meta-llama/Meta-Llama-3-8B", padding_side="left"
 )
@@ -36,7 +36,7 @@ inputs = tokenizer(text, return_tensors="pt", padding=True)
 ```
 
 To modify the latent feature $h_0$ (`SCAR.hook.mod_features = 0`) of the SAE, do the following:
-```
+```python
 SCAR.hook.mod_features = 0
 SCAR.hook.mod_scaling = -100.0
 output = SCAR.generate(
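For reference, the three edited snippets assemble into an end-to-end run roughly like the sketch below. The hunks truncate the `from_pretrained(` and `generate(` calls, so the repo ID (`<org>/SCAR` is a placeholder), the `trust_remote_code` flag, the prompt, and all generation arguments here are assumptions rather than the README's exact values; only the hook attributes (`mod_features`, `mod_scaling`) and the tokenizer setup come from the diff itself.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the SCAR weights. The diff cuts this call off mid-argument, so the
# repo ID and kwargs below are assumptions; use the full README's values.
SCAR = AutoModelForCausalLM.from_pretrained(
    "<org>/SCAR",            # hypothetical repo ID, not shown in the diff
    trust_remote_code=True,  # assumed, since the model exposes a custom hook
)

# SCAR is based on Llama3-8B base, so its tokenizer applies directly.
tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", padding_side="left"
)
tokenizer.pad_token = tokenizer.eos_token  # assumed: Llama3 defines no pad token
inputs = tokenizer("Example prompt", return_tensors="pt", padding=True)

# Steer the SAE: select latent feature h_0 and scale it strongly negative.
SCAR.hook.mod_features = 0
SCAR.hook.mod_scaling = -100.0

# The generate call is truncated in the diff; these arguments are assumptions.
with torch.no_grad():
    output = SCAR.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```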