---
base_model: acrastt/Marx-3B-V2
datasets:
  - totally-not-an-llm/EverythingLM-data-V2-sharegpt
inference: false
language:
  - en
library_name: transformers
license: apache-2.0
model_creator: acrastt
model_name: Marx-3B-V2
pipeline_tag: text-generation
quantized_by: afrideva
tags:
  - gguf
  - ggml
  - quantized
  - q2_k
  - q3_k_m
  - q4_k_m
  - q5_k_m
  - q6_k
  - q8_0
---

# acrastt/Marx-3B-V2-GGUF

Quantized GGUF model files for [Marx-3B-V2](https://huggingface.co/acrastt/Marx-3B-V2) from [acrastt](https://huggingface.co/acrastt).

| Name                   | Quant method | Size    |
| ---------------------- | ------------ | ------- |
| marx-3b-v2.fp16.gguf   | fp16         | 6.85 GB |
| marx-3b-v2.q2_k.gguf   | q2_k         | 2.15 GB |
| marx-3b-v2.q3_k_m.gguf | q3_k_m       | 2.27 GB |
| marx-3b-v2.q4_k_m.gguf | q4_k_m       | 2.58 GB |
| marx-3b-v2.q5_k_m.gguf | q5_k_m       | 2.76 GB |
| marx-3b-v2.q6_k.gguf   | q6_k         | 3.64 GB |
| marx-3b-v2.q8_0.gguf   | q8_0         | 3.64 GB |
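The usual trade-off when picking a file is quality versus memory: larger quants (q8_0, q6_k) are closer to the fp16 weights, while q2_k/q3_k_m sacrifice quality to fit in less RAM. As a rough illustration (not part of the original card), a small helper can pick the highest-quality file that fits a RAM budget, using the sizes from the table above; the `pick_quant` name and the 1 GB overhead allowance for KV cache and runtime are assumptions for this sketch.

```python
# Illustrative helper: choose the highest-quality quant that fits a RAM budget.
# File sizes (GB) are copied from the table above; the list is ordered from
# highest quality (fp16) down to smallest/lowest quality (q2_k).
QUANTS = [
    ("marx-3b-v2.fp16.gguf", 6.85),
    ("marx-3b-v2.q8_0.gguf", 3.64),
    ("marx-3b-v2.q6_k.gguf", 3.64),
    ("marx-3b-v2.q5_k_m.gguf", 2.76),
    ("marx-3b-v2.q4_k_m.gguf", 2.58),
    ("marx-3b-v2.q3_k_m.gguf", 2.27),
    ("marx-3b-v2.q2_k.gguf", 2.15),
]

def pick_quant(ram_gb: float, overhead_gb: float = 1.0):
    """Return the first (highest-quality) file whose size plus a rough
    overhead allowance (assumed 1 GB here) fits within ram_gb, or None."""
    for name, size_gb in QUANTS:
        if size_gb + overhead_gb <= ram_gb:
            return name
    return None

print(pick_quant(8.0))  # -> marx-3b-v2.fp16.gguf
print(pick_quant(4.0))  # -> marx-3b-v2.q5_k_m.gguf
```

Actual memory use also depends on context length and the runtime, so treat the threshold as a starting point rather than a guarantee.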

## Original Model Card:


This is OpenLLaMA 3B V2 fine-tuned on EverythingLM Data V2 (ShareGPT format) for 2 epochs.

Prompt template:

```
### HUMAN:
{prompt}

### RESPONSE:
<leave a newline for the model to answer>
```
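The template above can be applied programmatically before sending text to the model. A minimal sketch, assuming the exact spacing shown in the card (one blank line between the user turn and the `### RESPONSE:` marker, and a trailing newline where the model answers); the `build_prompt` helper name is hypothetical:

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the card's HUMAN/RESPONSE template.
    The trailing newline after '### RESPONSE:' is where the model
    is expected to begin its answer."""
    return f"### HUMAN:\n{user_message}\n\n### RESPONSE:\n"

print(build_prompt("What is GGUF?"))
```

With a GGUF runtime such as llama-cpp-python, the resulting string would typically be passed as the prompt when calling the loaded model.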

q4_1 GGML quant available here.
q4_1 GGUF quant available here.