Pankaj Mathur committed
Commit bf52c88
1 Parent(s): dc41361
Update README.md
README.md CHANGED
@@ -18,7 +18,7 @@ We build explain tuned [WizardLM dataset ~70K](https://github.com/nlpxucan/Wizar
 
 We leverage all 15 of the system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction tuning approaches used by the original datasets.
 
-This helps student model aka
+This helps the student model, aka this model, to learn the ***thought*** process from the teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version).
 
 Please see the example usage below for how the **System** prompt is added before each **instruction**.
 
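As a concrete illustration of the line above, here is a minimal sketch of how a **System** prompt can be prepended to each **instruction**. The `### System:` / `### User:` / `### Response:` layout and the sample system message are assumptions for illustration, not quoted from this commit.

```python
# Minimal sketch: prepend the System prompt before each instruction.
# The "### System / ### User / ### Response" layout is an assumed
# orca_mini-style format, not taken verbatim from this commit.

def build_prompt(system: str, instruction: str) -> str:
    return f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"

# Hypothetical system message in the spirit of the Orca paper's system instructions.
system = "You are an AI assistant that helps people find information."
instruction = "Explain why the sky is blue."
print(build_prompt(system, instruction))
```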
@@ -53,7 +53,7 @@ import torch
 from transformers import LlamaForCausalLM, LlamaTokenizer
 
 # Hugging Face model_path
-model_path = 'psmathur/
+model_path = 'psmathur/orca_mini_13b'
 tokenizer = LlamaTokenizer.from_pretrained(model_path)
 model = LlamaForCausalLM.from_pretrained(
     model_path, torch_dtype=torch.float16, device_map='auto',
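The hunk above is cut off by the diff boundary mid-way through the `from_pretrained(` call. A complete, runnable version of the snippet might look like the sketch below; only the imports, `model_path`, and the tokenizer/model loading lines come from the diff, while the prompt text and the `generate` call with its parameters are assumptions.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Hugging Face model_path (from the + side of the diff)
model_path = 'psmathur/orca_mini_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

# Assumed continuation: build a prompt and generate a completion.
prompt = (
    "### System:\nYou are an AI assistant that helps people find information.\n\n"
    "### User:\nExplain why the sky is blue.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```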
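As background on the dataset-construction step described in the first hunk, here is a hedged sketch of the explain-tuning idea: each of the Orca paper's 15 system instructions is paired with queries from the source dataset and sent to the teacher model (ChatGPT). `call_teacher` and the listed instructions are illustrative stand-ins, not quotes from the paper or this repo.

```python
from itertools import product

# Illustrative stand-ins for the Orca paper's 15 system instructions (not verbatim).
SYSTEM_INSTRUCTIONS = [
    "You are an AI assistant. Provide a detailed answer so the user need not search elsewhere.",
    "You are an AI assistant that explains its reasoning step by step.",
    # ...the remaining system instructions from the paper would be listed here
]

def call_teacher(system: str, query: str) -> str:
    """Hypothetical wrapper around the teacher model (ChatGPT, gpt-3.5-turbo-0301)."""
    raise NotImplementedError  # e.g. an OpenAI chat-completion call

def build_dataset(queries: list[str]) -> list[dict]:
    # Pair every system instruction with every source query and record the
    # teacher's explain-style response as one training example.
    return [
        {"system": s, "instruction": q, "response": call_teacher(s, q)}
        for s, q in product(SYSTEM_INSTRUCTIONS, queries)
    ]
```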