SicariusSicariiStuff
committed
Update README.md
README.md
CHANGED
@@ -154,13 +154,24 @@ You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
 
 # Model Details
 
-
+<details>
+<summary>This model was based on several different models, as well as an abliterated model, which after days of fine-tuning at different LoRA R values are probably no longer even recognizable. The resulting intermediate checkpoint is published as <b>SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha</b>, while this model is now fully fine-tuned instead of just a very deep LoRA.</summary>
+The full fine-tuning is performed on the full LLAMA-3 8K context, which is used not only for stacking several different prompts into a total length of 8K, but also for single prompts that span the full context length. The training data contains a lot of highly cleaned, highest-quality story writing, and some RP.
+
+Of course, a massive and deep uncensoring protocol is used, along with giving the model some sass and personality! A lot of effort was poured into this work to ensure the model is not compromised by the deep uncensoring protocol. The goal is to create a model that is highly creative, serves as a writing assistant and co-editor, and has some role-play abilities, while still being fairly intelligent, as much as an 8B model can be.
+
+The most important aspect of this work is to make it fresh, trained on datasets that have never been used in any other model, giving it a truly unique vibe.
+</details>
 
 ## LLAMA-3_Unaligned is available at the following quantizations:
 
 - FP16: soon...
 - EXL2: soon...
 - GGUF: soon...
+
+## LLAMA-3_8B_Unaligned_Alpha is available at the following quantizations:
+[FP16](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)
+[GGUFs](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_GGUF)
 
 ### Support
 <img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
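
For readers following the quantization links in the diff above, here is a minimal usage sketch, not part of the commit itself. It assumes the standard `transformers` API; the repo ID is taken from the FP16 link in the README, while the prompt and generation settings are illustrative assumptions rather than recommendations from the model card.

```python
# Minimal sketch: load the published Alpha FP16 checkpoint with transformers.
# Assumptions: enough GPU/CPU memory for an 8B model in FP16, and `accelerate`
# installed so that device_map="auto" can place the weights automatically.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha"  # FP16 repo linked above

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the FP16 weights
    device_map="auto",
)

# Illustrative plain-text prompt; the model card's own prompt format may differ.
prompt = "Write the opening paragraph of a noir short story set in a rainy port city."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The GGUF files linked above are intended for llama.cpp-compatible runtimes rather than `transformers`.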