
⚠️ Experimental Model - Pre-Alpha Warning

Please note that ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion is currently in pre-alpha and under active revision. While the model remains experimental, some features and functionalities may not perform as expected. We are continuously refining the model, and future updates will improve performance and stability.

Known Issues:

  • The quantized versions of this model may produce random tokens and exhibit unstable behavior.
  • Further revisions are in progress to ensure better grammatical coherence and sentence generation.

ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion

ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion is a cutting-edge merged model that combines the finest features from instruction-following, coding, mathematical reasoning, and factual question-answering. This powerhouse is designed for high performance in diverse technical, creative, and interactive tasks.
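
Below is a minimal loading sketch, assuming the standard 🤗 Transformers AutoTokenizer / AutoModelForCausalLM API; adjust dtype and device placement to your hardware.

# Minimal loading sketch (assumes transformers, torch, and accelerate are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)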

🌟 Family Tree

This model is the fusion of the following:

  • cyixiao/qwen-1.5B-openbookqa
  • unsloth/Qwen2.5-Coder-1.5B-Instruct
  • Qwen/Qwen2.5-Math-1.5B-Instruct
  • bunnycore/Qwen2.5-1.5B-Matrix
  • Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini
  • Goekdeniz-Guelmez/Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3

These models have been seamlessly blended to create a versatile AI that excels across multiple domains.


🧬 Detailed Model Lineage

A: cyixiao/qwen-1.5B-openbookqa

  • Focuses on factual knowledge and reasoning from the OpenBookQA dataset, providing strong question-answering capabilities.

B: unsloth/Qwen2.5-Coder-1.5B-Instruct

  • Tailored for coding and instruction-following, this model enhances the ability to generate code and follow precise instructions with ease.

C: Qwen/Qwen2.5-Math-1.5B-Instruct

  • This model specializes in mathematical reasoning and logical problem-solving, making it perfect for structured tasks that require high-level thinking.

D: bunnycore/Qwen2.5-1.5B-Matrix

  • A multi-purpose model that blends instruction, math, and coding, providing a well-rounded performance in both structured and creative tasks.

E: Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini

  • Fine-tuned on conversational and identity-specific tasks, this model strengthens the merged model's ability to handle conversation-heavy tasks with clarity.

F: Goekdeniz-Guelmez/Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3

  • This model brings uncensored capabilities, ensuring that the AI is flexible and adaptable in open-ended and unrestricted instruction-following scenarios.

πŸ› οΈ Merge Details

The model was merged using the DELLA merge method with bfloat16 precision for high performance across multiple task types. Here's the configuration used for the merge:

merge_method: della
dtype: bfloat16
parameters:
  epsilon: 0.1
  lambda: 1.0
  normalize: true

base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct

models:
  - model: cyixiao/qwen-1.5B-openbookqa
    parameters:
      weight: 1
      density: 0.5
  - model: unsloth/Qwen2.5-Coder-1.5B-Instruct
    parameters:
      weight: 1
      density: 0.6
  - model: Qwen/Qwen2.5-Math-1.5B-Instruct
    parameters:
      weight: 1
      density: 0.55
  - model: bunnycore/Qwen2.5-1.5B-Matrix
    parameters:
      weight: 1
      density: 0.55
  - model: Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini
    parameters:
      weight: 1
      density: 0.45
  - model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3
    parameters:
      weight: 1
      density: 0.5
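
For reference, a merge like this is typically reproduced with mergekit. The sketch below is an assumption-laden example using mergekit's Python entry points (MergeConfiguration, run_merge); saving the YAML above as config.yaml and running the mergekit-yaml CLI on it is the more common route.

# Rough reproduction sketch (assumes mergekit is installed and the YAML
# configuration above has been saved as config.yaml).
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion",  # output directory
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)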

🎯 Key Features & Capabilities

1. Coding and Instruction Following:

This model excels in technical coding tasks thanks to the contributions from Qwen2.5-Coder and Matrix; see the example prompts after this section.

2. Mathematical Reasoning:

With Qwen2.5-Math-1.5B-Instruct, the model is perfect for solving complex mathematical problems and structured logical tasks.

3. Conversational Abilities:

Thanks to the Syed-Hasan-8503 fine-tune on conversation and identity tasks, the model handles complex dialogue and conversational exchanges with clarity.

4. Uncensored Versatility:

Thanks to Josiefied-Qwen2.5, this model can operate without restrictions, making it ideal for open-ended instruction-following.
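
As a quick illustration of the coding and math capabilities above, here is a hedged generation sketch. It assumes the merged model kept Qwen2.5's chat template; the prompts are arbitrary examples, not benchmark items.

# Generation sketch exercising a coding prompt and a math prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompts = [
    "Write a Python function that returns the n-th Fibonacci number.",  # coding
    "Solve for x: 3x + 7 = 22. Show each step.",                        # math
]

for user_prompt in prompts:
    messages = [{"role": "user", "content": user_prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(input_ids, max_new_tokens=256)
    # Strip the prompt tokens and print only the model's reply.
    print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))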


📜 License

This model is open-sourced under the Apache-2.0 License, allowing others to use and modify it freely, as long as they give proper attribution.


💡 Tags

  • merge
  • Qwen
  • Coder
  • Math
  • Bunnycore
  • instruction-following
  • long-form-generation
