cgus committed · verified
Commit a22dd0e · 1 parent: 7c2900e

Update README.md

Files changed (1): README.md (+15 -2)
README.md CHANGED
@@ -5,13 +5,26 @@ license_link: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-
  language:
  - en
  pipeline_tag: text-generation
- base_model: Qwen/Qwen2.5-14B-Instruct
+ base_model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
  tags:
  - chat
  - abliterated
  - uncensored
  ---
-
+ # Qwen2.5-14B-Instruct-abliterated-v2-exl2
+ Model: [Qwen2.5-14B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2)
+ Made by: [huihui-ai](https://huggingface.co/huihui-ai)
+ ## Quants
+ [4bpw h6 (main)](https://huggingface.co/cgus/Qwen2.5-14B-Instruct-abliterated-v2-exl2/tree/main)
+ [4.5bpw h6](https://huggingface.co/cgus/Qwen2.5-14B-Instruct-abliterated-v2-exl2/tree/4.5bpw-h6)
+ [5bpw h6](https://huggingface.co/cgus/Qwen2.5-14B-Instruct-abliterated-v2-exl2/tree/5bpw-h6)
+ [6bpw h6](https://huggingface.co/cgus/Qwen2.5-14B-Instruct-abliterated-v2-exl2/tree/6bpw-h6)
+ [8bpw h8](https://huggingface.co/cgus/Qwen2.5-14B-Instruct-abliterated-v2-exl2/tree/8bpw-h6)
+ ## Quantization notes
+ Made with exllamav2 0.2.3 using its default calibration dataset. Exl2 quants can be used with Nvidia RTX 2xxx and newer Nvidia GPUs on Windows/Linux, or with AMD GPUs on Linux.
+ They can be used with Text-Generation-WebUI, TabbyAPI and some other apps that have an exllamav2 loader.
+
+ # Original model card
  # huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
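
Each entry in the Quants list added above is a separate branch of this repository, so a specific bitrate can be fetched by passing the branch name as the download revision. A minimal sketch with `huggingface_hub` (the branch and target folder below are just examples, not part of the commit):

```python
from huggingface_hub import snapshot_download

# Fetch one quant branch (revision) of this repo into a local folder.
# Pick any branch from the Quants list: main, 4.5bpw-h6, 5bpw-h6, 6bpw-h6, ...
snapshot_download(
    repo_id="cgus/Qwen2.5-14B-Instruct-abliterated-v2-exl2",
    revision="4.5bpw-h6",
    local_dir="Qwen2.5-14B-Instruct-abliterated-v2-exl2-4.5bpw-h6",
)
```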
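The quantization notes above point to Text-Generation-WebUI and TabbyAPI via their exllamav2 loaders; for a quick standalone check, the downloaded quant can also be loaded directly with exllamav2's own Python API. A minimal sketch following exllamav2's published inference example (the model path, context length and prompt are placeholders, and exact signatures may differ slightly between exllamav2 versions):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder path: the folder produced by the download step above.
model_dir = "Qwen2.5-14B-Instruct-abliterated-v2-exl2-4.5bpw-h6"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)

# Lazy cache + autosplit spreads the weights across the available GPUs.
cache = ExLlamaV2Cache(model, max_seq_len=8192, lazy=True)
model.load_autosplit(cache, progress=True)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

print(generator.generate(prompt="Hello, my name is", max_new_tokens=64, add_bos=True))
```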