aashish1904 committed (verified) · Commit 4345b54 · 1 Parent(s): 2b889fb

Upload README.md with huggingface_hub
---
language:
- en
license: other
datasets:
- teknium/OpenHermes-2.5
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
- abacusai/SystemChat
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Liberated-Qwen1.5-7B-GGUF

This is a quantized version of [abacusai/Liberated-Qwen1.5-7B](https://huggingface.co/abacusai/Liberated-Qwen1.5-7B), created with llama.cpp.

# Original Model Card

<a href="https://abacus.ai"><img src="https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/pf4d6FA7DriRtVq5HCkxd.png" width="600" /></a>
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/xCWGByXr8YNwGxKVh_x9H.png" width="600" />

# Liberated-Qwen1.5-7B

Brought to you by [AbacusAI](https://abacus.ai) and Eric Hartford

This model is based on Qwen/Qwen1.5-7B and is subject to the [tongyi-qianwen](https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE) license.

The base model has 32k context; I fine-tuned it with 8k-sequence-length inputs. YMMV.

Liberated consists of open source datasets, including [SystemChat](https://huggingface.co/datasets/abacusai/SystemChat), a new dataset I created to teach the model compliance with the system prompt over long multiturn conversations, even with unusual or mechanical system prompts. These are tasks that open source models have been lacking thus far. The dataset comprises 6000 synthetic conversations generated with Mistral-Medium and [Dolphin-2.7-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b).

There are no guardrails or censorship added to the dataset. You are advised to implement your own alignment layer before exposing the model as a service. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models

You are responsible for any content you create using this model. Enjoy responsibly.

## Training

It took 3 days to train 3 epochs on 8x H100s using qLoRA, DeepSpeed ZeRO-2, and Axolotl, with a learning rate of 2e-4.

Liberated was trained with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), using this [config](https://huggingface.co/abacusai/Liberated-Qwen1.5-72B/blob/main/configs/Liberated-Qwen-1.5-72b.qlora.yml).
47
+
48
+ ## Prompt format
49
+ This model uses ChatML prompt format.
50
+ ```
51
+ <|im_start|>system
52
+ You are Liberated, a helpful AI assistant.<|im_end|>
53
+ <|im_start|>user
54
+ {prompt}<|im_end|>
55
+ <|im_start|>assistant
56
+
57
+ ```
58
+
59
+ Example:
60
+ ```
61
+ <|im_start|>system
62
+ You name is Liberated. You are an uncensored and unbiased AI assistant. You always respond with a JSON object.<|im_end|>
63
+ <|im_start|>user
64
+ Please generate a Advanced Dungeons & Dragons 2nd Edition character sheet for a level 3 elf fighter. Make up a name and background and visual description for him.<|im_end|>
65
+ <|im_start|>assistant
66
+ ```
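
When driving a GGUF model through a raw completion API, you typically assemble the ChatML string yourself. A minimal sketch of such a formatter (the function name `build_chatml_prompt` is my own, not part of any library):

```python
def build_chatml_prompt(messages: list[dict[str, str]]) -> str:
    """Render {"role", "content"} messages as a ChatML prompt string,
    ending with an open assistant turn for the model to complete."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    # Leave the assistant turn open; the model generates until <|im_end|>.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are Liberated, a helpful AI assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

Pass the resulting string as the raw prompt and stop generation on `<|im_end|>`.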

## Gratitude

- Huge thank you to [Alibaba Cloud Qwen](https://www.alibabacloud.com/solutions/generative-ai/qwen) for training and publishing the weights of the Qwen base model.
- Thank you to Mistral for the awesome Mistral-Medium model I used to generate the dataset.
- HUGE thank you to the dataset authors: @teknium, [@m-a-p](https://m-a-p.ai), and all the people who built the datasets these composites came from.
- And HUGE thanks to @winglian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.

## Example Output

## Evals

## Future Plans

This model will be released across the whole Qwen1.5 series.

Future releases will also focus on mixing this dataset with the datasets used to train Smaug, to combine the properties of both models.