---
tags:
- merge
- mergekit
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- CultriX/NeuralTrix-7B-dpo
license: cc-by-nc-4.0
---

# KuTrix-7b

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model.
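
At a high level, DARE TIES sparsifies each fine-tuned model's delta from the base model (randomly dropping a `1 - density` fraction of its entries and rescaling the survivors), then resolves sign conflicts between the surviving deltas TIES-style before adding the combined delta back to the base. The sketch below illustrates the idea on toy tensors; it is a simplification for intuition, not mergekit's actual implementation, and `dare_ties_merge` is a name made up for this example.

```python
import torch

def dare_ties_merge(base, finetuned, weights, density):
    """Illustrative per-tensor sketch of DARE TIES (not mergekit's real code)."""
    deltas = []
    for ft, w in zip(finetuned, weights):
        delta = ft - base                        # task vector relative to the base
        keep = torch.rand_like(delta) < density  # DARE: randomly keep ~`density` of the entries
        delta = delta * keep / density           # rescale survivors to preserve the expected delta
        deltas.append(w * delta)                 # apply the per-model merge weight
    stacked = torch.stack(deltas)
    sign = torch.sign(stacked.sum(dim=0))        # TIES: elect a consensus sign per parameter
    agree = torch.sign(stacked) == sign          # keep only contributions that match it
    return base + (stacked * agree).sum(dim=0)

# Toy stand-ins for one weight tensor from each merged model
base = torch.zeros(4)
kunoichi = torch.tensor([0.2, -0.1, 0.3, 0.5])
neuraltrix = torch.tensor([0.1, 0.2, -0.3, 0.5])
print(dare_ties_merge(base, [kunoichi, neuraltrix], weights=[0.49, 0.4], density=0.6))
```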

### Models Merged

The following models were included in the merge:

* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)

## Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    parameters:
      weight: 0.49
      density: 0.6
  - model: CultriX/NeuralTrix-7B-dpo
    parameters:
      weight: 0.4
      density: 0.6
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
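
A configuration like this is typically run through mergekit's `mergekit-yaml` command-line entry point. A minimal sketch, assuming mergekit is installed (`pip install mergekit`) and the YAML above is saved as `config.yml`; the output directory is just an example, and `--cuda` can be dropped to merge on CPU:

```python
import subprocess

# Invoke mergekit's CLI on the saved config to write the merged model.
subprocess.run(
    ["mergekit-yaml", "config.yml", "./KuTrix-7b", "--cuda"],
    check=True,
)
```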

## Usage Example

```python
# Install dependencies first (run in a notebook, e.g. Colab):
!pip install -qU transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "seyf1elislam/KuTrix-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model into a text-generation pipeline, placing weights automatically.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
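
By default, `generated_text` includes the formatted prompt as a prefix; pass `return_full_text=False` to the pipeline call to get only the model's completion.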