llama3_SAE (Safetensors, custom_code)

felfri committed · verified · Commit e20cc97 · 1 Parent(s): 312d044

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -32,7 +32,7 @@ tokenizer = AutoTokenizer.from_pretrained(
 )
 tokenizer.pad_token = tokenizer.eos_token
 text = "This is text."
-toks = tokenizer(text, return_tensors="pt", padding=True)
+inputs = tokenizer(text, return_tensors="pt", padding=True)
 ```
 
 To modify the latent feature $h_0$ (`SCAR.hook.mod_features = 0`) of the SAE do the following:
@@ -40,7 +40,7 @@ To modify the latent feature $h_0$ (`SCAR.hook.mod_features = 0`) of the SAE do
 SCAR.hook.mod_features = 0
 SCAR.hook.mod_scaling = -100.0
 output = SCAR.generate(
-**toks,
+**inputs,
 do_sample=False,
 temperature=None,
 top_p=None,
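The substance of this diff is a variable rename: the dict returned by the tokenizer is unpacked into `SCAR.generate` via `**`, so the name bound at the tokenizer call must match the name unpacked at the generate call. A minimal sketch of that unpacking pattern, using hypothetical stand-ins (`fake_tokenize` and `generate` below are stubs for illustration, not the tokenizer or SCAR API from this repo):

```python
def fake_tokenize(text):
    # Stand-in for tokenizer(text, return_tensors="pt", padding=True),
    # which returns a dict-like mapping of model input names to tensors.
    return {"input_ids": [[1, 2, 3]], "attention_mask": [[1, 1, 1]]}

def generate(input_ids=None, attention_mask=None, do_sample=False):
    # Stand-in for SCAR.generate: the dict keys arrive here as
    # keyword arguments after **-unpacking.
    return {"n_tokens": len(input_ids[0]), "do_sample": do_sample}

inputs = fake_tokenize("This is text.")
# **inputs expands the dict into keyword arguments; if the variable were
# still named `toks`, `**inputs` would raise a NameError -- hence the rename.
output = generate(**inputs, do_sample=False)
print(output["n_tokens"])  # 3
```

The same pattern holds for the real `transformers` tokenizer, whose output object supports `**`-unpacking into `generate`.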