---
license: llama2
datasets:
- meta-math/MetaMathQA
---
## Model Details

MetaMath-Llemma-34B is fine-tuned with QLoRA on the MetaMathQA dataset, using the powerful Llemma-34B as its base model.
## Installation
```
pip install transformers==4.35.0
pip install torch==2.0.1
pip install sentencepiece==0.1.99
pip install tokenizers==0.13.3
pip install accelerate==0.21.0
pip install bitsandbytes==0.40.0
pip install vllm
pip install fraction
pip install protobuf
```
## Model Usage

Prompting template:
```
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```
Replace `{instruction}` with your question.
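As a convenience, below is a minimal inference sketch using Hugging Face `transformers`. The repository id `meta-math/MetaMath-Llemma-34B`, the sample question, and the 4-bit loading flag are illustrative assumptions, not part of this card; adjust them to your setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id; replace with this model's actual path on the Hub.
MODEL_ID = "meta-math/MetaMath-Llemma-34B"

# The prompting template from this card, verbatim.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    load_in_4bit=True,   # uses bitsandbytes; drop if you have memory for fp16
    device_map="auto",   # requires accelerate
)

question = "What is 15% of 240?"  # illustrative question, not from this card
prompt = PROMPT_TEMPLATE.format(instruction=question)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Since `vllm` is among the dependencies above, the same prompt can also be run through vLLM for faster batched inference; a short sketch under the same repo-id assumption:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="meta-math/MetaMath-Llemma-34B")  # assumed repo id
params = SamplingParams(temperature=0.0, max_tokens=512)  # greedy decoding
# `prompt` is built with the template shown in the previous sketch.
print(llm.generate([prompt], params)[0].outputs[0].text)
```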
## Experiments
| Model | GSM8k Pass@1 | MATH Pass@1 |
|---------------------|--------------|-------------|
| MetaMath-7B | 66.5 | 19.8 |
| MetaMath-13B | 72.3 | 22.4 |
| MetaMath-70B | **82.3** | 26.6 |
| MetaMath-Llemma-7B | 69.2 | 30.0 |
| MetaMath-Mistral-7B | 77.7 | 28.2 |
| 🔥 MetaMath-Llemma-34B | 76.1 | **31.4** |
*Model card adapted from [MetaMath-70B-V1.0](https://huggingface.co/meta-math/MetaMath-70B-V1.0/blob/main/README.md)*