---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- NousResearch/Nous-Hermes-2-Yi-34B
- jondurbin/bagel-dpo-34b-v0.2
---
# HermesBagel-34B-v0.1
HermesBagel-34B-v0.1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)
* [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: NousResearch/Nous-Hermes-2-Yi-34B
        layer_range: [0, 60]
      - model: jondurbin/bagel-dpo-34b-v0.2
        layer_range: [0, 60]
merge_method: slerp
base_model: NousResearch/Nous-Hermes-2-Yi-34B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
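The `t` values set the SLERP interpolation factor per group of tensors: self-attention and MLP weights blend between the two parents along opposite gradients across the layer stack, while all remaining tensors use a constant 0.5 blend. As a rough reproduction sketch (not part of the original card), the merge can be re-run with the mergekit CLI, assuming the configuration above is saved as `config.yaml`; the exact flags may vary across mergekit versions:

```python
# Reproduction sketch (assumption): run the merge with mergekit's CLI from a
# notebook cell, writing the merged weights to ./merged.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./merged --copy-tokenizer --lazy-unpickle
```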
## Basic Usage
<details>
<summary>Setup</summary>

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "dfurman/HermesBagel-34B-v0.1"

# Load the tokenizer and the merged model in bfloat16, sharded across
# the available devices.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
```
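In bfloat16 the 34B checkpoint needs roughly 70 GB of accelerator memory. If that does not fit, a 4-bit quantized load can stand in for the call above; a minimal sketch (not part of the original card), assuming `bitsandbytes` is installed:

```python
!pip install -qU bitsandbytes

from transformers import BitsAndBytesConfig

# Sketch only: load the weights in 4-bit NF4 to fit smaller GPUs.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```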
</details>

```python
messages = [
    {"role": "user", "content": "What is a large language model?"},
]

print("\n\n*** Prompt:")
# Format the conversation with the model's chat template and append the
# assistant turn header so generation starts the model's reply.
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
print(tokenizer.decode(input_ids[0]))

print("\n\n*** Generate:")
with torch.autocast("cuda", dtype=torch.bfloat16):
    output = model.generate(
        input_ids=input_ids.to("cuda"),
        max_new_tokens=256,
        return_dict_in_generate=True,
        do_sample=True,
        temperature=0.7,
        top_k=50,
        top_p=0.95,
    )

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    output["sequences"][0][len(input_ids[0]):],
    skip_special_tokens=True,
)
print(response)
```
**Outputs**
```python
"""
coming
"""
``` |
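For interactive use, the same generation call can also stream tokens as they are produced. A minimal sketch (not part of the original card) using `transformers.TextStreamer`, reusing the `model`, `tokenizer`, and `input_ids` from above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, omitting the prompt itself.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.autocast("cuda", dtype=torch.bfloat16):
    model.generate(
        input_ids=input_ids.to("cuda"),
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_k=50,
        top_p=0.95,
        streamer=streamer,
    )
```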