---
datasets:
- gsm8k
---
# mpt-7b-gsm8k
**Paper**: [Sparse Finetuning for Inference Acceleration of Large Language Models](https://arxiv.org/abs/2310.06927)
**Code**: https://github.com/neuralmagic/deepsparse/tree/main/research/mpt
This model was produced from an [MPT-7B base model](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pt) finetuned on the GSM8k dataset for 2 epochs, and it contains the original PyTorch weights.
GSM8k zero-shot accuracy with [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness): 28.2%
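A minimal sketch of running the PyTorch checkpoint with Hugging Face Transformers (the question/answer prompt format below is an assumption, not necessarily the exact finetuning template):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# MPT checkpoints ship custom modeling code, so trust_remote_code=True is required.
model_id = "neuralmagic/mpt-7b-gsm8k-pt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)

# Example GSM8k-style prompt; adjust the format if your results look off.
prompt = "Question: Natalia has 5 boxes with 12 pencils each. How many pencils does she have in total?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```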
All MPT model weights are available on [SparseZoo](https://sparsezoo.neuralmagic.com/?datasets=gsm8k&ungrouped=true), and the CPU speedup for generative inference can be reproduced by following the instructions at [DeepSparse](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt); a minimal usage sketch follows the table below.
| Model Links | Compression |
| --------------------------------------------------------------------------------------------------------- | --------------------------------- |
| [neuralmagic/mpt-7b-gsm8k-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-quant) | Quantization (W8A8) |
| [neuralmagic/mpt-7b-gsm8k-pruned40-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned40-quant) | Quantization (W8A8) & 40% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned50-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned50-quant) | Quantization (W8A8) & 50% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned60-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned60-quant) | Quantization (W8A8) & 60% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned70-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned70-quant) | Quantization (W8A8) & 70% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned75-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned75-quant) | Quantization (W8A8) & 75% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned80-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned80-quant) | Quantization (W8A8) & 80% Pruning |
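For CPU inference with the compressed variants above, the sketch below assumes the DeepSparse `Pipeline.create` text-generation API and a hypothetical local deployment directory; task name and output fields can differ across DeepSparse versions, so treat the linked research/mpt instructions as authoritative.

```python
from deepsparse import Pipeline

# Hypothetical local path: point this at a deployment directory downloaded from SparseZoo.
MODEL_PATH = "./mpt-7b-gsm8k-quant-deployment"

# Create a text-generation pipeline on the compressed model (exact task alias may vary by version).
pipeline = Pipeline.create(task="text_generation", model_path=MODEL_PATH)

result = pipeline(
    sequences="Question: A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take?\nAnswer:"
)
print(result)
```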
For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).