---
base_model:
- ZeusLabs/Chronos-Platinum-72B
- EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1
- m8than/banana-2-b-72b
- abacusai/Dracarys2-72B-Instruct
- rombodawg/Rombos-LLM-V2.5-Qwen-72b
- Qwen/Qwen2.5-72B
library_name: transformers
tags:
- mergekit
- merge

---
# EurobeatVARemix-Qwen2.5-72b

[![image/png](https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/UqQ-TJ8ZgHk02zvO7Oy11.png)](https://www.youtube.com/watch?v=1gW1uHRPChc)

Updated the EVA component to v0.1. That's all, folks!

...It didn't feel right calling it LLENN anymore, so I'm changing the name. ["Pray I don't alter it any further."](<https://www.youtube.com/watch?v=WpE_xMRiCLE>)

**Please do not ask for quants; contact others instead.**

*All models will be available for testing on [featherless.ai](https://featherless.ai) as soon as it goes live.*

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Qwen/Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) as a base.

### Prompt Format

ChatML works for the most part.
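
For reference, a standard ChatML turn looks like this (the system line is optional):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```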

### Sampler Settings

Personally, I use the following:

```
Temp: 1.2
Min P: 0.07
Rep Pen: 1.1
```

Others have suggested the following:

```
Temp: 1.1
Top P: 0.98
Min P: 0.05
```
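
If you sample through Hugging Face `transformers`, a minimal sketch of the first preset could look like this (the repo id is a placeholder, and `min_p` requires a recent `transformers` release):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: point this at the actual repo id or a local path.
model_id = "path/to/EurobeatVARemix-Qwen2.5-72b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard the 72B weights across available GPUs
)

# ChatML formatting is handled by the bundled chat template.
messages = [{"role": "user", "content": "Write a short scene on a rooftop at night."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.2,
    min_p=0.07,
    repetition_penalty=1.1,
)
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))
```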

### Models Merged

The following models were included in the merge:
* [ZeusLabs/Chronos-Platinum-72B](https://huggingface.co/ZeusLabs/Chronos-Platinum-72B)
* [EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1)
* [m8than/banana-2-b-72b](https://huggingface.co/m8than/banana-2-b-72b)
* [abacusai/Dracarys2-72B-Instruct](https://huggingface.co/abacusai/Dracarys2-72B-Instruct)
* [rombodawg/Rombos-LLM-V2.5-Qwen-72b](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-72b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1
  - model: ZeusLabs/Chronos-Platinum-72B
  - model: abacusai/Dracarys2-72B-Instruct
  - model: rombodawg/Rombos-LLM-V2.5-Qwen-72b
  - model: m8than/banana-2-b-72b

merge_method: model_stock
base_model: Qwen/Qwen2.5-72B
parameters:
  normalize: true
dtype: bfloat16
```
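
To reproduce the merge, saving the YAML above to a file and running it through [mergekit](https://github.com/arcee-ai/mergekit) should work (the output directory is a placeholder):

```
mergekit-yaml config.yaml ./EurobeatVARemix-Qwen2.5-72b --cuda
```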