This is my first quantization: a q4_0 GGML (ggjt v3) and GGUF v2 quantization of the model https://huggingface.co/acrastt/OmegLLaMA-3B. I hope it's working fine. 🤗

Prompt format:

```
Interests: {interests}
Conversation:
You: {prompt}
Stranger: 
```
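
As a usage sketch, the prompt can be assembled and run with llama-cpp-python. The file name (`omegllama-3b.q4_0.gguf`), the interests, and the sampling settings below are placeholders, not something specified on this card:

```python
# Minimal sketch: load the q4_0 GGUF file and query it with the prompt format above.
from llama_cpp import Llama

# Hypothetical local file name for the q4_0 GGUF quantization.
llm = Llama(model_path="omegllama-3b.q4_0.gguf")

interests = "music, programming"          # example interests
user_message = "Hi! What do you do for fun?"

# Build the prompt exactly as shown in the template above.
prompt = (
    f"Interests: {interests}\n"
    "Conversation:\n"
    f"You: {user_message}\n"
    "Stranger:"
)

# Stop generation when the model starts writing the user's next turn.
output = llm(prompt, max_tokens=128, stop=["You:"])
print(output["choices"][0]["text"])
```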
Model details:

- Model size: 3.43B params
- Architecture: llama
- Quantization: 4-bit (q4_0)