maddes8cht committed on
Commit 5d97f7c
1 Parent(s): 5bfeaee

"Update README.md"

---
license: apache-2.0
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

## I am still building the structure of these descriptions.
 
These will contain increasingly more content to help you find the best model for your purpose.

# WizardLM-Uncensored-Falcon-40b - GGUF
- Model creator: [ehartford](https://huggingface.co/ehartford)
- Original model: [WizardLM-Uncensored-Falcon-40b](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-40b)

# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software uses it and can therefore run this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
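Every GGUF file starts with a small fixed header that tools use to recognize the format. As a minimal sketch (based on the published GGUF layout: a 4-byte `GGUF` magic followed by a little-endian `uint32` version; the helper name and the synthetic `demo.gguf` file are mine, not part of any tool), you can check a file like this:

```python
import struct

def read_gguf_header(path):
    """Return (magic, version) from the first 8 bytes of a GGUF file."""
    with open(path, "rb") as f:
        magic = f.read(4)                          # should be b"GGUF"
        (version,) = struct.unpack("<I", f.read(4))  # little-endian uint32
    return magic, version

# Demo with a tiny synthetic header, so no real model download is needed:
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))

magic, version = read_gguf_header("demo.gguf")
print(magic, version)  # b'GGUF' 3
```

This is only a format sniff test; real GGUF files continue with tensor and metadata tables after these 8 bytes.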

# Quantization variants

A number of quantized files are available. How to choose the best one for you:

# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
Falcon 7B models cannot be quantized to K-quants.

# K-quants

K-quants are based on the idea that quantizing different parts of the model affects quality in different ways. If you quantize certain parts more and others less, you get a more powerful model at the same file size, or a smaller file size and lower memory load with comparable performance.
So, if possible, use K-quants.
With a Q6_K it is usually very hard to detect a quality difference from the original model. Ask your model the same question twice, and you may see bigger differences than between a Q6_K and the original.
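The practical difference between the variants is bits per weight, which directly drives file size and memory load. The sketch below uses approximate bits-per-weight figures I am assuming from llama.cpp's published quantization tables (exact sizes vary per architecture, and the helper name is mine) to estimate file sizes for a 40B-parameter model:

```python
# Approximate bits per weight; assumed from llama.cpp's tables, not exact.
APPROX_BITS_PER_WEIGHT = {
    "Q4_0": 4.5, "Q4_1": 5.0, "Q5_0": 5.5, "Q5_1": 6.0, "Q8_0": 8.5,  # legacy
    "Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q6_K": 6.6,                        # K-quants
}

def estimated_size_gb(n_params: float, quant: str) -> float:
    """Rough file size in GB for n_params weights at the given quant type."""
    bits = APPROX_BITS_PER_WEIGHT[quant]
    return n_params * bits / 8 / 1e9  # bits -> bytes -> GB

# Rough estimates for a 40B model at a few common variants:
for q in ("Q4_0", "Q6_K", "Q8_0"):
    print(q, round(estimated_size_gb(40e9, q), 1), "GB")
```

This makes the trade-off concrete: a Q4 file is roughly half the size of a Q8 one, which is why picking the smallest quant your quality needs allow matters on a 40B model.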


# Original Model Card:
This is WizardLM trained on top of tiiuae/falcon-40b, with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

Shout out to the open source AI/ML community, and everyone who helped me out.

Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.

Prompt format is WizardLM.

```
What is a falcon? Can I keep one as a pet?
### Response:
```
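The format above can be assembled programmatically. A minimal sketch, assuming only the instruction-then-`### Response:` layout shown above (the helper name is mine, not part of the model or any library):

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the WizardLM prompt layout shown above."""
    return f"{instruction}\n### Response:\n"

print(build_prompt("What is a falcon? Can I keep one as a pet?"))
```

Send the returned string to your GGUF runtime as the prompt; the model's answer follows the `### Response:` marker.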

Thank you [chirper.ai](https://chirper.ai) for sponsoring some of my compute!

<center>
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
[![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911)
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht)
[![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht)
[![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966)
</center>