---
license: apache-2.0
library_name: transformers
language:
- en
tags:
- chat
- conversational
base_model:
- Qwen/Qwen2.5-32B
- AiCloser/Qwen2.5-32B-AGI
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
- fblgit/TheBeagle-v2beta-32B-MGS
- huihui-ai/Qwen2.5-32B-Instruct-abliterated
- huihui-ai/QwQ-32B-Preview-abliterated
- Qwen/QwQ-32B-Preview
- rombodawg/Rombos-LLM-V2.5-Qwen-32b
- nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/sF7RDZA7lFYOmGy4bGy1s.png)

[imat quants](https://huggingface.co/mradermacher/Qwentile2.5-32B-Instruct-i1-GGUF)

# Qwentile 2.5 32B Instruct

Qwentile 2.5 32B Instruct is a *normalized denoised Fourier interpolation* of the following models:

```yaml
output_base_model: "Qwen/Qwen2.5-32B"
finetune_merge:
  - { "model": "AiCloser/Qwen2.5-32B-AGI", "base": "Qwen/Qwen2.5-32B", "alpha": 0.3 }
  - { "model": "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2", "base": "Qwen/Qwen2.5-32B", "alpha": 0.7 }
  - { "model": "fblgit/TheBeagle-v2beta-32B-MGS", "base": "Qwen/Qwen2.5-32B", "alpha": 0.6 }
  - { "model": "huihui-ai/Qwen2.5-32B-Instruct-abliterated", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 1.0 }
  - { "model": "huihui-ai/QwQ-32B-Preview-abliterated", "base": "Qwen/Qwen2.5-32B", "alpha": 1.0 }
  - { "model": "Qwen/QwQ-32B-Preview", "base": "Qwen/Qwen2.5-32B", "alpha": 0.8, "is_input": true }
  - { "model": "rombodawg/Rombos-LLM-V2.5-Qwen-32b", "base": "Qwen/Qwen2.5-32B", "alpha": 1.0, "is_output": true }
  - { "model": "nbeerbower/Qwen2.5-Gutenberg-Doppel-32B", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 0.4 }
```

In other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the base model.
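
For the curious, here's a rough sketch of the general idea (a minimal sketch only - the thresholding, normalization, and 2D FFT choices below are illustrative assumptions, not the actual merge code):

```python
import torch

# Hypothetical sketch of a fourier-space merge; the real "normalized denoised
# fourier interpolation" differs in its denoising and normalization details.
def fourier_merge(base: torch.Tensor,
                  finetunes: list[tuple[torch.Tensor, float]],
                  keep: float = 0.98) -> torch.Tensor:
    """Blend fine-tune deltas onto a base weight tensor in signal space."""
    acc = torch.zeros_like(base, dtype=torch.complex64)
    for weights, alpha in finetunes:
        delta = (weights - base).to(torch.float32)
        spec = torch.fft.fft2(delta)                # warp into signal space
        # "Denoise": drop the smallest-magnitude coefficients (assumed).
        cutoff = torch.quantile(spec.abs().flatten(), 1.0 - keep)
        spec = torch.where(spec.abs() >= cutoff, spec, torch.zeros_like(spec))
        acc += alpha * spec                         # interpolate by alpha
    merged = torch.fft.ifft2(acc).real              # back to weight space
    # "Normalize": keep the merged delta's magnitude comparable (assumed).
    target = max(torch.linalg.norm((w - base).float()) for w, _ in finetunes)
    norm = torch.linalg.norm(merged)
    if norm > 0:
        merged = merged * (target / norm)
    return base + merged.to(base.dtype)             # jam it back on the base
```

Each `alpha` in the config above plays the role of the interpolation weight, and each delta is taken against the `base` listed for that model.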

### What is this?

I started this experiment because QwQ is a really nifty model, but it was giving me problems with XML output - which is what I use for my thought tokens. So I thought... let's just merge it in!

The first model worked pretty well, but I got the sense that the balances could be tweaked. Why not throw in some other models as well for fun, and see if I can't run out of disk space in the process?

### Initial Results

It's a little crispier than Awqward, but it does generate stable output. Since it is based on the Qwen2.5 base model instead of the instruct model, it did not fail the math test, and it scores alongside models twice its size:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/Yjln2MIh15loleJR7EpbL.png)
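
### How to use

A minimal `transformers` loading snippet (the prompt and generation settings are just illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/Qwentile2.5-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Why is the sky blue?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```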

## Citation

If you find our work helpful, feel free to give us a cite.

```bibtex
@misc{qwentile2.5-32b-instruct,
    title = {Qwentile 2.5 32B Instruct},
    url = {https://huggingface.co/maldv/Qwentile2.5-32B-Instruct},
    author = {Praxis Maldevide},
    month = {December},
    year = {2024}
}
```