Upload README.md
README.md
CHANGED
@@ -6,7 +6,7 @@ datasets:
 inference: false
 language:
 - en
-license:
+license: cc-by-nc-4.0
 model_creator: Ryan Witzman
 model_name: Go Bruins v2
 model_type: mistral
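The hunk above fills in the empty `license:` field of the card's YAML front matter with `cc-by-nc-4.0`. Once the change is live, the license is surfaced in the repo's Hub metadata, so it can be checked programmatically; a minimal sketch, assuming `huggingface_hub` is installed and using the repo id from the file links below:

```python
from huggingface_hub import model_info

# The license declared in the README front matter is surfaced as a
# "license:<id>" tag in the repo metadata returned by the Hub API.
info = model_info("TheBloke/go-bruins-v2-GGUF")
print([t for t in info.tags if t.startswith("license:")])
# expected to include "license:cc-by-nc-4.0" once this change is live
```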
@@ -116,7 +116,7 @@ Refer to the Provided Files table below to see what files use which methods, and how.
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
 | [go-bruins-v2.Q2_K.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
-| [go-bruins-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 3.
+| [go-bruins-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
 | [go-bruins-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
 | [go-bruins-v2.Q3_K_L.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
 | [go-bruins-v2.Q4_0.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
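The corrected row restores the Q3_K_S figures: a 3.16 GB file needing about 5.66 GB of RAM. Note that in every row the "Max RAM required" figure is the file size plus roughly 2.5 GB, i.e. the estimate assumes the whole model is loaded with no GPU offloading. To fetch a single quant from the table without cloning the whole repo, a minimal sketch, again assuming `huggingface_hub` is installed:

```python
from huggingface_hub import hf_hub_download

# Download one GGUF quant from the table above (Q4_0 chosen as an
# example); the file is stored in the local HF cache and its path is
# returned, ready for any GGUF-capable runtime such as llama.cpp.
path = hf_hub_download(
    repo_id="TheBloke/go-bruins-v2-GGUF",
    filename="go-bruins-v2.Q4_0.gguf",
)
print(path)
```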
@@ -395,6 +395,17 @@ Note: The original MMLU evaluation has been corrected to include 5-shot data rather than 1-shot data.
 For any inquiries or feedback, reach out to Ryan Witzman on Discord: `rwitz_`.
 
 ---
+## Citations
+```
+@misc{unacybertron7b,
+    title={Cybertron: Uniform Neural Alignment},
+    author={Xavier Murias},
+    year={2023},
+    publisher = {HuggingFace},
+    journal = {HuggingFace repository},
+    howpublished = {\url{https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16}},
+}
+```
 
 *This model card was created with care by Ryan Witzman.*
 rewrite this model card for new version called go-bruins-v2 that is finetuned on dpo on the original go-bruins model on athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW