|
---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-72B-Instruct
tags:
- chat
---
|
|
|
# Qwen2.5-95B-Instruct |
|
|
|
Qwen2.5-95B-Instruct is a [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main). |
|
|
|
It was inspired by large merges like: |
|
|
|
- [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b) |
|
- [cognitivecomputations/MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b) |
|
- [mlabonne/Meta-Llama-3-120B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct) |
|
|
|
Special thanks to [Eric Hartford](https://huggingface.co/ehartford) for both inspiring and evaluating the original model, to [Charles Goddard](https://huggingface.co/chargoddard) for creating MergeKit, and to [Maxime Labonne](https://huggingface.co/mlabonne) for creating the Meta-Llama-3-120B-Instruct model that served as the main inspiration for this merge.
|
|
|
## 🔎 Applications
|
|
|
This model is primarily intended for creative writing tasks. It uses the Qwen chat template and supports a context window of up to 128K tokens.
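
For reference, Qwen's chat template follows the ChatML convention. A single user turn, as produced by `tokenizer.apply_chat_template` in the Usage section below, renders roughly as follows (the default system prompt comes from the base model's tokenizer):

```
<|im_start|>system
You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>
<|im_start|>user
What is a large language model?<|im_end|>
<|im_start|>assistant
```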
|
|
|
Like the merges that inspired it, it may be more creative than its base model and could outperform the 72B model on some open-ended tasks.
|
|
|
## ⚡ Quantized models
|
|
|
Quantized versions are not yet available; links will be added here once they are:
|
|
|
* **GGUF**: [Link to GGUF model] |
|
* **EXL2**: [Link to EXL2 model] |
|
* **mlx**: [Link to mlx model] |
|
|
|
## 🏆 Evaluation
|
This model has yet to be thoroughly evaluated. It is expected to perform well in creative writing tasks but may have limitations elsewhere.

Use it with caution, and don't expect it to outperform state-of-the-art models outside of specific creative use cases.
|
|
|
Once the model has been tested, this section will be updated with:
|
|
|
* Links to evaluation threads on social media platforms |
|
* Examples of the model's performance in creative writing tasks |
|
* Comparisons with other large language models in various applications |
|
* Community feedback and use cases |
|
|
|
We encourage users to share their experiences and evaluations to help build a comprehensive understanding of the model's capabilities and limitations. |
|
|
|
## 🧩 Configuration
|
|
|
```yaml
slices:
- sources:
  - layer_range: [0, 10]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [5, 15]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [10, 20]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [15, 25]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [20, 30]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [25, 80]
    model: Qwen/Qwen2.5-72B-Instruct
dtype: bfloat16
merge_method: passthrough
```
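
This passthrough merge stacks overlapping slices of the 80-layer base model into a single 105-layer model (five 10-layer slices plus one 55-layer slice), scaling the parameter count by roughly 105/80: 72B × 105/80 ≈ 94.5B, hence the 95B name. As a rough sketch, the merge can be reproduced with the MergeKit CLI, assuming the config above is saved as `config.yaml` (exact flags may vary between MergeKit versions):

```python
# Install MergeKit, then run the passthrough merge described above.
!pip install -qU mergekit

# --copy-tokenizer carries the base model's tokenizer into the output directory.
!mergekit-yaml config.yaml ./Qwen2.5-95B-Instruct --copy-tokenizer
```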
|
|
|
## 💻 Usage
|
|
|
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "ssmits/Qwen2.5-95B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt string using Qwen's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# device_map="auto" shards the model across all available GPUs.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
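
Note that at this scale the weights alone occupy roughly 190 GB in float16 (≈95B parameters × 2 bytes), so expect to need multiple GPUs for `device_map="auto"` sharding, CPU offloading, or a quantized version once one is available.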
|
|