## How to Use

The GALPACA weights are made available for use with the `transformers` library.

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("GeorgiaTechResearchInstitute/galpaca-30b")
model = OPTForCausalLM.from_pretrained("GeorgiaTechResearchInstitute/galpaca-30b", device_map="auto", torch_dtype=torch.float16)

# see the original Alpaca repo for more information about the prompt templates
no_input_prompt_template = ("Below is an instruction that describes a task. "
                            "Write a response that appropriately completes the request.\n\n"
                            "### Instruction:\n{instruction}\n\n### Response:")
prompt = "Write out Maxwell's equations and explain the meaning of each one."
formatted_prompt = no_input_prompt_template.format_map({'instruction': prompt})

tokenized_prompt = tokenizer(formatted_prompt, return_tensors="pt").input_ids.to(model.device)
out_tokens = model.generate(tokenized_prompt)

print(tokenizer.batch_decode(out_tokens, skip_special_tokens=False, clean_up_tokenization_spaces=False))
```

</details>

## Training Resources