---
license: apache-2.0
library_name: transformers
language:
  - en
tags:
  - chat
  - conversational
base_model:
  - Qwen/Qwen2.5-32B
  - AiCloser/Qwen2.5-32B-AGI
  - EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
  - fblgit/TheBeagle-v2beta-32B-MGS
  - huihui-ai/Qwen2.5-32B-Instruct-abliterated
  - huihui-ai/QwQ-32B-Preview-abliterated
  - Qwen/QwQ-32B-Preview
  - rombodawg/Rombos-LLM-V2.5-Qwen-32b
  - nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
---


# Qwentile 2.5 32B Instruct

Qwentile 2.5 32B Instruct is a normalized, denoised Fourier interpolation of the following models:

```yaml
output_base_model: "Qwen/Qwen2.5-32B"
finetune_merge:
  - { "model": "AiCloser/Qwen2.5-32B-AGI", "base": "Qwen/Qwen2.5-32B", "alpha": 0.3 }
  - { "model": "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2", "base": "Qwen/Qwen2.5-32B", "alpha": 0.7 }
  - { "model": "fblgit/TheBeagle-v2beta-32B-MGS", "base": "Qwen/Qwen2.5-32B", "alpha": 0.6 }
  - { "model": "huihui-ai/Qwen2.5-32B-Instruct-abliterated", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 1.0 }
  - { "model": "huihui-ai/QwQ-32B-Preview-abliterated", "base": "Qwen/Qwen2.5-32B", "alpha": 1.0 }
  - { "model": "Qwen/QwQ-32B-Preview", "base": "Qwen/Qwen2.5-32B", "alpha": 0.8, "is_input": true }
  - { "model": "rombodawg/Rombos-LLM-V2.5-Qwen-32b", "base": "Qwen/Qwen2.5-32B", "alpha": 1.0, "is_output": true }
  - { "model": "nbeerbower/Qwen2.5-Gutenberg-Doppel-32B", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 0.4 }
```

In other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the instruct model.
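
To give a sense of the idea: for each tensor, the delta of each fine-tune against its own base (its "task vector") is transformed into frequency space, denoised, scaled by its `alpha`, summed, and added back onto the output base. Here is a minimal sketch of that scheme, assuming a simple magnitude-threshold denoiser; the function and the thresholding rule are illustrative, not the actual merge code, and the config's `is_input`/`is_output` handling is omitted:

```python
import torch

def fourier_task_merge(base_sd, finetunes, threshold=0.1):
    """Sketch: merge fine-tune deltas ("task vectors") in frequency space.

    base_sd   -- state dict of the output base model
    finetunes -- list of (finetune_sd, finetune_base_sd, alpha) triples
    threshold -- fraction of peak magnitude below which frequency bins are zeroed
    """
    merged = {k: v.clone() for k, v in base_sd.items()}
    for name in merged:
        acc = None
        for sd, ft_base, alpha in finetunes:
            # task vector: difference from the fine-tune's *own* base
            delta = (sd[name] - ft_base[name]).to(torch.float32)
            spec = torch.fft.fftn(delta)  # warp into signal space
            mag = spec.abs()
            # crude denoising: drop low-magnitude frequency bins
            spec = torch.where(mag >= threshold * mag.max(), spec,
                               torch.zeros_like(spec))
            acc = alpha * spec if acc is None else acc + alpha * spec
        # inverse transform and jam the combined delta back onto the base
        merged[name] += torch.fft.ifftn(acc).real.to(merged[name].dtype)
    return merged
```

(The `is_input`/`is_output` flags in the config presumably pin the embedding and head layers to particular models; a real merge would also need per-tensor normalization, which this sketch skips.)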

## What is this?

I started this experiment because QwQ is a really nifty model, but it was giving me problems with XML output - which is what I use for my thought tokens. So I thought... let's just merge it in!

The first model worked pretty well, but I got a sense that the balances could be tweaked. Why not throw in some other models as well for fun and see if I can't run out of disk space in the process?

## Initial Results

It's a little crispier than Awqward, but it generates stable output. Since it is based on the Qwen2.5 base model instead of the instruct model, maybe it can score above zero on the math leaderboard?
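
For reference, it should load like any other Qwen2.5-based chat model via transformers (standard Hugging Face API; the prompt is just an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/Qwentile2.5-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Explain what a Fourier transform does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```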

## Citation

If you find our work helpful, feel free to give us a cite.

```bibtex
@misc{qwentile2.5-32b-instruct,
    title = {Qwentile 2.5 32B Instruct},
    url = {https://huggingface.co/maldv/Qwentile2.5-32B-Instruct},
    author = {Praxis Maldevide},
    month = {December},
    year = {2024}
}
```