---
base_model: NousResearch/Llama-2-7b-chat-hf
inference: false
model_type: llama
prompt_template: |
  <s>[INST]
  {prompt}
  [/INST]
quantized_by: mwitiderrick
tags:
- deepsparse
---
# Llama-2-7b-chat-hf - DeepSparse
This repo contains model files for [Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) optimized for [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models.
This model was quantized and pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
## Inference
Install [DeepSparse LLM](https://github.com/neuralmagic/deepsparse) for fast inference on CPUs:
```bash
pip install deepsparse-nightly[llm]
```
Run in a [Python pipeline](https://github.com/neuralmagic/deepsparse/blob/main/docs/llms/text-generation-pipeline.md):
```python
from deepsparse import TextGeneration

# Wrap the raw prompt in the Llama-2 chat [INST] template expected by the model
prompt = "How to make banana bread?"
formatted_prompt = f"<s>[INST]{prompt}[/INST]"

# Load the sparse-quantized model directly from the Hugging Face Hub
model = TextGeneration(model_path="hf:nm-testing/Llama2-7b-chat-pruned50-qunat-ds")
print(model(formatted_prompt, max_new_tokens=200).generations[0].text)
"""
Banana bread is a delicious and easy-to-make treat that can be enjoyed year-round.
Here is a basic recipe for banana bread that you can try at home:
Ingredients:
* 3 ripe bananas, peeled and sliced
* 1/2 cup (120 ml) vegetable oil
* 2 tbsp (30 ml) sugar
* 2 tbsp (30 ml) water
* 2 tbsp (30 ml) all-purpose flour
* 1 tsp (2.5 ml) baking powder
* 1 tsp (2.5 ml) salt
* 1 tbsp (30 ml) vanilla extract
Instructions:
1. Preheat the oven to 3500°F (
"""
```
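DeepSparse can also stream partial results as tokens are generated. The snippet below is a minimal sketch, not a verified example from this repo: it assumes the `streaming=True` call argument described in the DeepSparse text-generation pipeline docs, which may differ across nightly builds.
```python
from deepsparse import TextGeneration

model = TextGeneration(model_path="hf:nm-testing/Llama2-7b-chat-pruned50-qunat-ds")
formatted_prompt = "<s>[INST]How to make banana bread?[/INST]"

# Assumption: streaming=True makes the pipeline yield partial outputs as they
# are produced instead of returning a single final result
for partial in model(formatted_prompt, max_new_tokens=200, streaming=True):
    print(partial.generations[0].text, end="")
```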
## Prompt template
```
<s>[INST]
<prompt>
[/INST]
```
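To make the template concrete, here is a minimal sketch of a hypothetical helper (not part of this repo) that wraps a raw user prompt in the template before it is passed to the pipeline:
```python
def format_prompt(user_prompt: str) -> str:
    """Wrap a raw user prompt in the Llama-2 chat [INST] template used by this model."""
    return f"<s>[INST]\n{user_prompt}\n[/INST]"

# Matches the formatted_prompt used in the inference example above
print(format_prompt("How to make banana bread?"))
```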
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py NousResearch/Llama-2-7b-chat-hf open_platypus --precision float16 --recipe recipe.yaml --save True
python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment
cp deployment/model.onnx deployment/model-orig.onnx
```
Run the following kv-cache injection script to speed up inference by caching the attention key and value states:
```python
import os

import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector

input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"

# Load the exported ONNX graph without pulling in the external weight files
model = onnx.load(input_file, load_external_data=False)

# Inject key/value cache inputs and outputs so DeepSparse can reuse attention
# states across decoding steps instead of recomputing them
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)

onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")
```
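Once the injected `model.onnx` is in place, the local `deployment` directory can be loaded with the same `TextGeneration` pipeline shown earlier to sanity-check the export. This is a minimal sketch assuming the export step left the tokenizer and config files alongside `model.onnx`:
```python
from deepsparse import TextGeneration

# Assumption: "deployment" is the local directory produced by the export and
# kv-cache injection steps above (model.onnx plus tokenizer/config files)
model = TextGeneration(model_path="deployment")
print(model("<s>[INST]How to make banana bread?[/INST]", max_new_tokens=50).generations[0].text)
```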
Follow the instructions on the [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) page for a step-by-step guide to one-shot quantization of large language models.
## Slack
For further support and discussion of these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).