---
datasets:
- gsm8k
---
# mpt-7b-gsm8k

**Paper**: [https://arxiv.org/pdf/xxxxxxx.pdf](https://arxiv.org/pdf/xxxxxxx.pdf)
**Code**: https://github.com/neuralmagic/deepsparse/tree/main/research/mpt

This model was produced from an [MPT-7B base model](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pt) fine-tuned on the GSM8k dataset for 2 epochs, and contains the original PyTorch weights.

GSM8k zero-shot accuracy with [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness): 28.2%