Update README.md
README.md CHANGED
@@ -7,7 +7,54 @@ language:
pipeline_tag: text-generation
tags:
- chat
+- qwen
---
+<div>
+  <p style="margin-bottom: 0; margin-top: 0;">
+    <strong>This is Qwen-QwQ-32B with our bug fixes. <br> See <a href="https://huggingface.co/collections/unsloth/qwen-qwq-32b-collection-676b3b29c20c09a8c71a6235">our collection</a> for versions of QwQ-32B with our bug fixes, including GGUF and 4-bit formats.</strong>
+  </p>
+  <p style="margin-bottom: 0;">
+    <em>Unsloth's QwQ-32B <a href="https://unsloth.ai/blog/dynamic-4bit">Dynamic Quants</a> are selectively quantized, greatly improving accuracy over standard 4-bit.</em>
+  </p>
+  <div style="display: flex; gap: 5px; align-items: center;">
+    <a href="https://github.com/unslothai/unsloth/">
+      <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
+    </a>
+    <a href="https://discord.gg/unsloth">
+      <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
+    </a>
+    <a href="https://docs.unsloth.ai/">
+      <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
+    </a>
+  </div>
+  <h1 style="margin-top: 0rem;">Finetune your own reasoning model like R1 with Unsloth!</h1>
+</div>
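As a rough sketch of how the GGUF builds mentioned above can be run with llama-cpp-python; the repo id and filename pattern below are assumptions, so check the linked collection for the exact names:

```python
# Rough sketch: run a GGUF build of QwQ-32B with llama-cpp-python.
# Repo id and filename pattern are assumptions; see the collection for exact names.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/QwQ-32B-GGUF",   # hypothetical repo id
    filename="*Q4_K_M.gguf",          # pick a quant level that fits your RAM/VRAM
    n_ctx=4096,
)
out = llm("Explain chain-of-thought reasoning in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```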
+
+We have a free Google Colab notebook for turning Qwen2.5 (3B) into a reasoning model: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(3B)-GRPO.ipynb
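For a sense of what that notebook does, here is a minimal GRPO sketch using TRL's `GRPOTrainer`; the toy length-based reward and the `trl-lib/tldr` dataset are illustrative stand-ins, not the notebook's actual setup:

```python
# Minimal GRPO sketch (illustrative reward and dataset, not the notebook's setup).
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # provides a "prompt" column

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 100 characters long.
    return [-abs(100 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen-grpo-demo"),
    train_dataset=dataset,
)
trainer.train()
```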
+
+
+## ✨ Finetune for Free
+
+All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a finetuned model (trained 2x faster) that can be exported to GGUF or vLLM, or uploaded to Hugging Face; a minimal sketch of this flow follows the notes below.
+
+| Unsloth supports | Free Notebooks | Performance | Memory use |
+|------------------|----------------|-------------|------------|
+| **GRPO with Phi-4** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb) | 2x faster | 80% less |
+| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
+| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
+| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
+| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
+| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
+| **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less |
+| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
+| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
+
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
+
+- This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
+- This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
+- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
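The sketch promised above: a heavily compressed view of the notebooks' finetune-and-export flow with Unsloth's `FastLanguageModel` API. The base model, training step, and output paths here are placeholders:

```python
# Compressed sketch of the notebooks' flow (placeholder model, data, and paths).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B",  # placeholder base model
    max_seq_length=2048,
    load_in_4bit=True,                # QLoRA-style 4-bit base weights
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# ... run training here, e.g. with TRL's SFTTrainer on your dataset ...

# Export a GGUF for llama.cpp; pushing to the Hub or serving with vLLM
# works from the same finetuned model.
model.save_pretrained_gguf("finetuned_gguf", tokenizer, quantization_method="q4_k_m")
```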
+

# QwQ-32B

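Finally, a minimal sketch of chatting with the full-precision model via Transformers; the repo id `unsloth/QwQ-32B` and the generation settings are assumptions here, so check the collection for the exact upload:

```python
# Minimal sketch: chat with QwQ-32B via transformers (repo id assumed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/QwQ-32B"  # assumed; see the collection for exact repos
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many r's are in the word 'strawberry'?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```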