Pinkstack committed
Commit e17a955
1 Parent(s): 9c0c7cf

Update README.md

![BY PINKSTACK DISCORD.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/2xMulpuSlZ3C1vpGgsAYi.png)

Files changed (1): README.md (+36 −5)
README.md CHANGED
@@ -1,22 +1,53 @@
  ---
- base_model: unsloth/qwen2.5-0.5b-instruct-bnb-4bit
  tags:
  - text-generation-inference
  - transformers
  - unsloth
- - qwen2
  - gguf
  license: apache-2.0
  language:
  - en
  ---

- # Uploaded model

  - **Developed by:** Pinkstack
  - **License:** apache-2.0
  - **Finetuned from model :** unsloth/qwen2.5-0.5b-instruct-bnb-4bit

- This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
 
  ---
  tags:
  - text-generation-inference
  - transformers
  - unsloth
  - gguf
+ - reasoning
+ - Qwen2
+ - Qwen
  license: apache-2.0
  language:
  - en
+ pipeline_tag: text-generation
  ---

+ ![BY_PINKSTACK.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/2xMulpuSlZ3C1vpGgsAYi.png)

+ # 🧀 Which quant is right for you?
+
+ - ***Q4:*** Best for very low-end devices such as older phones or laptops, thanks to its very compact size; quality is okay but fully usable.
+ - ***Q6:*** Suitable for most modern devices; good quality and very quick responses.
+ - ***Q8:*** Suitable for most modern devices; responses are very high quality, but it is a little slower than Q6.
+ - ***BF16:*** Lossless; use it only if maximum quality is needed. It is slow, but its text output is the highest quality.
+
+ ## Things you should be aware of when using PARM models (Pinkstack Accuracy Reasoning Models) 🧀
+
+ This PARM is based on Qwen 2.5 0.5B, which received extra training so its outputs would be similar to o1 Mini's. We trained with [this](https://huggingface.co/datasets/gghfez/QwQ-LongCoT-130K-cleaned) dataset.
+
+ To use this model, you must use a service that supports the GGUF file format.
+ Additionally, this is the prompt template; it uses the Phi-3 template:
+ ```
+ {{ if .System }}<|system|>
+ {{ .System }}<|end|>
+ {{ end }}{{ if .Prompt }}<|user|>
+ {{ .Prompt }}<|end|>
+ {{ end }}<|assistant|>
+ {{ .Response }}<|end|>
+ ```
+
+ Or, if you are using an anti-prompt: `<|end|><|assistant|>`
+
+ Using a system prompt is highly recommended.
+
+ # Extra information
  - **Developed by:** Pinkstack
  - **License:** apache-2.0
  - **Finetuned from model:** unsloth/qwen2.5-0.5b-instruct-bnb-4bit

+ This model was trained using [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+
+ Used this model? Don't forget to leave a like :)

+ [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
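The Phi-3 prompt template added in this commit can be assembled in code before being handed to any GGUF runtime. A minimal sketch follows; the `build_prompt` helper and the GGUF filename are illustrative (not part of this repo), and the llama-cpp-python call is shown only as one common way to run a GGUF model:

```python
# Render the Phi-3-style chat template described in the README.
# build_prompt is a hypothetical helper; the model filename below is a placeholder.

def build_prompt(user, system=None):
    """Build a Phi-3-format prompt: optional <|system|> turn, then <|user|>,
    ending with the <|assistant|> tag so the model continues from there."""
    parts = []
    if system:
        parts.append(f"<|system|>\n{system}<|end|>\n")
    parts.append(f"<|user|>\n{user}<|end|>\n")
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = build_prompt("Why is the sky blue?", system="You are a helpful assistant.")
print(prompt)

# With llama-cpp-python (one GGUF-capable runtime), generation would look like:
#   from llama_cpp import Llama
#   llm = Llama(model_path="parm-qwen2.5-0.5b.Q6_K.gguf")  # placeholder filename
#   out = llm(prompt, max_tokens=256, stop=["<|end|>"])    # <|end|> as the anti-prompt
#   print(out["choices"][0]["text"])
```

Passing `<|end|>` as a stop string plays the same role as the anti-prompt mentioned above: generation halts when the model emits its end-of-turn tag.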