---
base_model: acrastt/Marx-3B-V2
datasets:
- totally-not-an-llm/EverythingLM-data-V2-sharegpt
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model_creator: acrastt
model_name: Marx-3B-V2
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# afrideva/Marx-3B-V2-GGUF

Quantized GGUF model files for [Marx-3B-V2](https://huggingface.co/acrastt/Marx-3B-V2) from [acrastt](https://huggingface.co/acrastt).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [marx-3b-v2.fp16.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.fp16.gguf) | fp16 | 6.85 GB |
| [marx-3b-v2.q2_k.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.q2_k.gguf) | q2_k | 2.15 GB |
| [marx-3b-v2.q3_k_m.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.q3_k_m.gguf) | q3_k_m | 2.27 GB |
| [marx-3b-v2.q4_k_m.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.q4_k_m.gguf) | q4_k_m | 2.58 GB |
| [marx-3b-v2.q5_k_m.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.q5_k_m.gguf) | q5_k_m | 2.76 GB |
| [marx-3b-v2.q6_k.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.q6_k.gguf) | q6_k | 3.64 GB |
| [marx-3b-v2.q8_0.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.q8_0.gguf) | q8_0 | 3.64 GB |
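The direct download links above all follow one URL pattern. As a minimal sketch, a helper like the following (the `gguf_url` function is hypothetical, not part of any library) builds the link for any quant method in the table:

```python
# Hypothetical helper: build the resolve/main download URL for one of the
# quantized files listed in the table above.
REPO_ID = "afrideva/Marx-3B-V2-GGUF"

def gguf_url(quant: str) -> str:
    """Return the direct download URL for a quant method (e.g. 'q4_k_m')."""
    return f"https://huggingface.co/{REPO_ID}/resolve/main/marx-3b-v2.{quant}.gguf"

print(gguf_url("q4_k_m"))
# https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.q4_k_m.gguf
```

The resulting URL can be fetched with `curl` or `wget`; alternatively, `huggingface_hub.hf_hub_download(repo_id="afrideva/Marx-3B-V2-GGUF", filename="marx-3b-v2.q4_k_m.gguf")` downloads the same file with caching, and the local path can then be loaded by llama.cpp.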
## Original Model Card:

<a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>

This is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) fine-tuned on [EverythingLM Data V2 (ShareGPT format)](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V2-sharegpt) for 2 epochs.

Prompt template:
```
### HUMAN:
{prompt}

### RESPONSE:
<leave a newline for the model to answer>
```

q4_1 GGML quant available [here](https://huggingface.co/NikolayKozloff/Marx-3B-V2/).<br/>
q4_1 GGUF quant available [here](https://huggingface.co/NikolayKozloff/Marx-3B-V2-GGUF/).