# OpenHermes 2.5 Mistral 7B - DeepSparse

Part of the DeepSparse Sparse LLMs collection: useful LLMs for DeepSparse where we've pruned at least 50% of the weights.
This repo contains model files for Teknium's OpenHermes 2.5 Mistral 7B optimized for DeepSparse, a CPU inference runtime for sparse models.
This model was quantized and pruned with SparseGPT, using SparseML.
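To build intuition for what 50% unstructured sparsity means, here is a toy sketch that zeroes out the lower-magnitude half of a small weight matrix. This is only a conceptual illustration: SparseGPT itself selects weights using approximate second-order information, not plain magnitude, and the function name here is ours.

```python
def prune_half_by_magnitude(weights):
    """Toy illustration of ~50% unstructured sparsity: zero the weights
    with the smallest absolute values. SparseGPT's actual criterion is
    more sophisticated (second-order, layer-wise reconstruction)."""
    flat = sorted(abs(w) for row in weights for w in row)
    threshold = flat[len(flat) // 2]  # median magnitude
    return [[w if abs(w) >= threshold else 0.0 for w in row] for row in weights]

weights = [[0.9, -0.1, 0.4], [-0.05, 0.7, 0.2], [0.3, -0.6, 0.01]]
pruned = prune_half_by_magnitude(weights)
zeros = sum(w == 0.0 for row in pruned for w in row)
print(zeros)  # → 4 of 9 weights zeroed
```

DeepSparse exploits exactly this kind of weight sparsity (plus quantization) to speed up inference on CPUs.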
Install DeepSparse LLM for fast inference on CPUs:

```bash
pip install deepsparse-nightly[llm]
```
Run in a Python pipeline:

```python
from deepsparse import TextGeneration

system_message = ""
prompt = "Who inspires you the most?"
formatted_prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"

model = TextGeneration(model="hf:mgoin/OpenHermes-2.5-Mistral-7B-pruned50-quant-ds")
print(model(formatted_prompt, max_new_tokens=100).generations[0].text)
"""
That's a difficult question as there are many people who inspire me. However, one person who inspires me the most is my mother. She has shown me the importance of hard work, resilience, and perseverance. She has shown me how to overcome obstacles and how to be a strong and independent woman.
"""
```
```python
system_message = "You are a skilled dungeon master. Please craft a story around the user's character and guide them through a continuous adventure."
prompt = "I am a human paladin who follows the light. I am entering Dweirgard, a dwarf mountain city where I am looking for a sword for my adventure."
formatted_prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

print(model(formatted_prompt, max_new_tokens=200).generations[0].text)
"""
As you enter Dweirgard, the dwarf mountain city, you notice that the architecture is intricately designed with a mix of both dwarf and human styles. The city is bustling with activity, and you can hear the sound of hammering and chiseling. You approach a local dwarf merchant who is known for his high-quality swords.

"Greeting traveler, what sword are you looking for?" the dwarf merchant asks.

"I am looking for a sword that is light and has a sharp edge," you reply.

"Ah, I have just the perfect sword for you," the dwarf merchant says with a smile. "This sword is lightweight and has a sharp edge. It is perfect for adventuring."

You take the sword from the dwarf merchant's hands and examine it. The sword is indeed lightweight and has a sharp edge. You feel confident that this sword will serve you well.
"""
```
Prompt template:

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
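Since the same ChatML-style template appears in every example above, it can help to wrap it in a small helper. The function name below is our own convenience, not part of the DeepSparse API; the template string itself comes straight from this card.

```python
def build_chatml_prompt(system_message: str, prompt: str) -> str:
    """Assemble the ChatML-style prompt this model expects.

    Hypothetical helper: the template is from the model card, but the
    function itself is not part of DeepSparse or SparseML.
    """
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml_prompt("You are a helpful assistant.", "Hello!"))
```

The result can be passed directly to the `TextGeneration` pipeline shown earlier in place of the manually formatted f-strings.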
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.

```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py teknium/OpenHermes-2.5-Mistral-7B open_platypus --recipe recipe.yaml --save True
python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment --sequence_length 4096
cp deployment/model.onnx deployment/model-orig.onnx
```
Run this kv-cache injection afterwards:

```python
import os
import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector

input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"
model = onnx.load(input_file, load_external_data=False)
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)
onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")
```
For further support, and discussions on these models and AI in general, join Neural Magic's Slack server.
Base model: mistralai/Mistral-7B-v0.1