---
model-index:
- name: MetaMath-LoRA-LLaMA-7B
  results:
  - task:
      type: text-generation
    dataset:
      name: meta-math/MetaMathQA
      type: meta-math/MetaMathQA
    metrics:
    - name: Accuracy (zero-shot)
      type: accuracy
      value: 0.62
      verified: true
    source:
      name: Arithmetic Reasoning on GSM8K
      url: https://paperswithcode.com/sota/arithmetic-reasoning-on-gsm8k
license: mit
---

# Fine-tune LLaMA 2 (7B) with LoRA on meta-math/MetaMathQA

The base LLaMA 2 (7B) model was fine-tuned with a LoRA adapter on meta-math/MetaMathQA for three epochs; a sketch of the setup follows.
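
A minimal sketch of a comparable training setup, assuming the Hugging Face `transformers`, `peft`, and `datasets` libraries; the LoRA rank, target modules, learning rate, batch size, and output path are illustrative assumptions, not the exact values used for this adapter:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Wrap the base model with a LoRA adapter on the attention projections
# (rank/alpha/dropout and target modules are assumed values).
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
))

# MetaMathQA rows carry "query"/"response" fields; format each pair as a
# single training sequence.
def tokenize(example):
    text = f"Question: {example['query']}\nAnswer: {example['response']}"
    return tokenizer(text + tokenizer.eos_token,
                     truncation=True, max_length=1024)

train = load_dataset("meta-math/MetaMathQA", split="train")
train = train.map(tokenize, remove_columns=train.column_names)

trainer = Trainer(
    model=model,
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    args=TrainingArguments(
        output_dir="metamath-lora",       # illustrative path
        num_train_epochs=3,               # three epochs, as stated above
        per_device_train_batch_size=4,    # assumed
        learning_rate=2e-4,               # assumed
        bf16=True,
    ),
)
trainer.train()
model.save_pretrained("metamath-lora")    # writes only the adapter weights
```
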
## Result

Reloading the saved adapter and evaluating zero-shot on the GSM8K test set (1,319 problems, 6 invalid outputs) gives **accuracy 0.62**.
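
A minimal sketch of reloading the adapter for evaluation, assuming `peft`'s `PeftModel` API; the adapter path and the example GSM8K prompt are illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Attach the saved LoRA weights to the frozen base model (path is assumed).
model = PeftModel.from_pretrained(model, "metamath-lora")
model.eval()

prompt = ("Question: Natalia sold clips to 48 of her friends in April, and "
          "then she sold half as many clips in May. How many clips did "
          "Natalia sell altogether in April and May?\nAnswer:")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
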
## Comparison

The official MetaMath report achieves **0.665 accuracy** by fine-tuning the full LLaMA 2 7B model for 3 epochs, versus 0.62 here with a LoRA adapter that trains only a small fraction of the parameters.

**Note**: This LoRA adapter is intended for future research use.
|