Update README.md
README.md CHANGED

OpenChat Llama2 V1: see [OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B)

Please see our [paper](https://platypus-llm.github.io/Platypus.pdf) and [project webpage](https://platypus-llm.github.io) for additional information.

[`Open-Orca/OpenOrcaxOpenChat-Preview2-13B`](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) was trained using a refined subset of most of the GPT-4 data from the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca).

# Training Procedure

`Open-Orca/Platypus2-13B` was instruction fine-tuned using LoRA on a single A100 80GB GPU. For training details and inference instructions, please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo.
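
The exact recipe lives in that repo; purely as an illustration of the LoRA approach, the sketch below attaches a LoRA adapter to a causal LM with Hugging Face `peft`. The base checkpoint, rank, alpha, and target modules are assumed placeholders, not the settings used for this model.

```
# Illustrative LoRA fine-tuning setup with Hugging Face transformers + peft.
# Every name and hyperparameter here is a placeholder, not the Platypus recipe.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-13b-hf"  # assumed Llama-2 base, for illustration only
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

lora_config = LoraConfig(
    r=16,                                 # adapter rank (placeholder)
    lora_alpha=32,                        # adapter scaling (placeholder)
    target_modules=["q_proj", "v_proj"],  # attention projections commonly adapted
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # freezes the base, adds trainable adapters
model.print_trainable_parameters()          # only a small fraction of the 13B params train
```

From here the wrapped model drops into a standard `transformers` Trainer loop; only the adapter weights receive gradients, which is what lets a 13B fine-tune fit on one A100.
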
# Reproducing Evaluation Results

Each task was evaluated on a single A100 80GB GPU.

The commands below use the `main.py` entry point of EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and should be run from the root of the harness repo.

ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```

HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10
```

MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5
```

TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/truthfulqa_0shot.json --device cuda
```
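
For quick qualitative checks outside the harness, a minimal generation sketch with `transformers` follows. The prompt format shown is an assumed Alpaca-style template and the generation settings are placeholders; see the Platypus repo for the canonical inference instructions.

```
# Minimal inference sketch; the prompt template and settings are assumptions,
# not documented usage. See the Platypus repo for canonical instructions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open-Orca/OpenOrca-Platypus2-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Assumed Alpaca-style instruction prompt; verify against the model card.
prompt = "### Instruction:\n\nExplain LoRA in one paragraph.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```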