Update README.md
README.md CHANGED
@@ -10,7 +10,7 @@ https://meta-math.github.io/
## Model Details
-MetaMath-Mistral-7B is fully fine-tuned on the MetaMathQA datasets and based on the very strong Mistral-7B model. It is glad to see using MetaMathQA datasets and change the base model from llama-2-7B to Mistral-7b can boost the GSM8K performance from 66.5 to 77.7
+MetaMath-Mistral-7B is fully fine-tuned on the MetaMathQA datasets and built on the very strong Mistral-7B base model. We are glad to see that using the MetaMathQA dataset and switching the base model from LLaMA-2-7B to Mistral-7B boosts GSM8K performance from 66.5 to **77.7**.
For everyone who wants to fine-tune Mistral-7B, I would suggest using a smaller learning rate (usually 1/5 to 1/10 of the lr for LLaMA-2-7B) and keeping the other training args unchanged.
More training details and scripts can be found at https://github.com/meta-math/MetaMath.
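As a rough illustration of the learning-rate advice above, here is a minimal `TrainingArguments` sketch. The 2e-5 base rate for LLaMA-2-7B and the other hyperparameters shown are assumptions for illustration only; the README specifies nothing beyond the 1/5 to 1/10 scaling and keeping other args unchanged.

```python
from transformers import TrainingArguments

# Assumed LLaMA-2-7B full fine-tuning learning rate; substitute the rate
# you actually used. The README only specifies the 1/5-1/10 scaling.
llama2_lr = 2e-5
mistral_lr = llama2_lr / 10  # 1/5 to 1/10 of the LLaMA-2-7B rate

args = TrainingArguments(
    output_dir="metamath-mistral-7b",
    learning_rate=mistral_lr,        # the only value the advice changes
    num_train_epochs=3,              # illustrative; keep your existing value
    per_device_train_batch_size=4,   # illustrative; keep your existing value
    bf16=True,
)
```

The point of the sketch is that only `learning_rate` changes; everything else carries over from whatever LLaMA-2-7B recipe you were already using.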
@@ -42,8 +42,8 @@ prompting template:
where you need to replace the {instruction} placeholder with your query question; see the usage sketch after this diff
-There
-We would also try to train the combination of **MetaMathQA** and **MathInstruct** datasets, and also open all the results and training
+There is another interesting repo about Arithmo-Mistral-7B at https://huggingface.co/akjindal53244/Arithmo-Mistral-7B, where they combine our MetaMathQA dataset and the MathInstruct dataset to train a powerful model. Thanks again for their contributions.
+We will also try training on the combination of the **MetaMathQA** and **MathInstruct** datasets, and will open-source all the results and training details.
## Experiments
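To make the template substitution concrete, here is a minimal usage sketch, assuming the Alpaca-style prompting template from the MetaMath repo; the model ID, question, and generation settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-math/MetaMath-Mistral-7B"

# Alpaca-style template (assumed from the MetaMath repo); {instruction}
# is replaced with the user's query question.
TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response: Let's think step by step."
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = TEMPLATE.format(instruction="What is 15% of 240?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```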
|
49 |
|
|
|
10 |
|
11 |
## Model Details
|
12 |
|
13 |
+
MetaMath-Mistral-7B is fully fine-tuned on the MetaMathQA datasets and based on the very strong Mistral-7B model. It is glad to see using MetaMathQA datasets and change the base model from llama-2-7B to Mistral-7b can boost the GSM8K performance from 66.5 to **77.7**.
|
14 |
|
15 |
For everyone who wants to fine-tune Mistral-7B, I would suggest using a smaller learning rate(usually 1/5 to 1/10 of the lr for LlaMa-2-7B) and staying other training args unchanged.
|
16 |
More training details and scripts can be seen at https://github.com/meta-math/MetaMath
|
|
|
42 |
|
43 |
where you need to use your query question to replace the {instruction}
|
44 |
|
45 |
+
There is another interesting repo about Arithmo-Mistral-7B in https://huggingface.co/akjindal53244/Arithmo-Mistral-7B, where they combine our MetaMathQA dataset and MathInstruct datasets to train a powerful model. Thanks agian for their contributions.
|
46 |
+
We would also try to train the combination of **MetaMathQA** and **MathInstruct** datasets, and also open all the results and training details.
|
47 |
|
48 |
## Experiments
|
49 |
|