
afrideva/Marx-3B-V2-GGUF

Quantized GGUF model files for Marx-3B-V2 from acrastt

| Name | Quant method | Size |
| --- | --- | --- |
| marx-3b-v2.fp16.gguf | fp16 | 6.85 GB |
| marx-3b-v2.q2_k.gguf | q2_k | 2.15 GB |
| marx-3b-v2.q3_k_m.gguf | q3_k_m | 2.27 GB |
| marx-3b-v2.q4_k_m.gguf | q4_k_m | 2.58 GB |
| marx-3b-v2.q5_k_m.gguf | q5_k_m | 2.76 GB |
| marx-3b-v2.q6_k.gguf | q6_k | 3.64 GB |
| marx-3b-v2.q8_0.gguf | q8_0 | 3.64 GB |
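
As a rough sketch of how these files might be used, the example below downloads one quant with huggingface_hub and loads it with llama-cpp-python. The repo id afrideva/Marx-3B-V2-GGUF, the choice of the q4_k_m file, and the context/thread settings are assumptions to adapt to your setup.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed:
#   pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed above (q4_k_m is a common
# size/quality trade-off); repo id and filename are assumptions.
model_path = hf_hub_download(
    repo_id="afrideva/Marx-3B-V2-GGUF",
    filename="marx-3b-v2.q4_k_m.gguf",
)

# Load the GGUF model; n_ctx and n_threads are illustrative values.
llm = Llama(model_path=model_path, n_ctx=2048, n_threads=4)
```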

Original Model Card:


This is OpenLLaMA 3B V2 fine-tuned on EverythingLM Data V2 (ShareGPT format) for 2 epochs.

Prompt template:

### HUMAN:
{prompt}

### RESPONSE:
<leave a newline for the model to answer>
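
As a hedged example of applying this template at generation time, the snippet below wraps a user prompt in the HUMAN/RESPONSE format and calls a llama-cpp-python model object (the `llm` from the loading sketch above); the helper name, sampling parameters, and stop string are illustrative assumptions.

```python
# Hypothetical helper that wraps a user message in the template above.
def build_prompt(user_prompt: str) -> str:
    return f"### HUMAN:\n{user_prompt}\n\n### RESPONSE:\n"

# Illustrative generation call; `llm` is the Llama object loaded earlier.
output = llm(
    build_prompt("Summarize what GGUF quantization does in one sentence."),
    max_tokens=128,
    temperature=0.7,
    stop=["### HUMAN:"],  # stop before the model begins a new turn
)
print(output["choices"][0]["text"].strip())
```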

q4_1 GGML quant available here.
q4_1 GGUF quant available here.

Model size: 3.43B params
Architecture: llama

Quantized from: acrastt/Marx-3B-V2

Dataset used to train afrideva/Marx-3B-V2-GGUF: EverythingLM Data V2 (ShareGPT format)