
QuantFactory/GRMR-2B-Instruct-old-GGUF

This is a quantized version of qingy2024/GRMR-2B-Instruct-old, created using llama.cpp.
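
To run one of these GGUF files locally, a minimal sketch using huggingface_hub and llama-cpp-python might look like the following. The quantization filename below is an assumption for illustration; pick an actual .gguf file from the repository's file list.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantization from the repo (the filename is an assumed
# example; substitute a real .gguf file from the repository).
gguf_path = hf_hub_download(
    repo_id="QuantFactory/GRMR-2B-Instruct-old-GGUF",
    filename="GRMR-2B-Instruct-old.Q4_K_M.gguf",
)

# Load the quantized model with llama-cpp-python.
llm = Llama(model_path=gguf_path, n_ctx=2048)
```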

Original Model Card

Uploaded model

  • Developed by: qingy2024
  • License: apache-2.0
  • Finetuned from model: unsloth/gemma-2-2b-bnb-4bit

This fine-tune of Gemma 2 2B is trained to take any input text and repeat it back with corrected grammar. It has now been fine-tuned for 300 steps.

Example:

User: Find a clip from a professional production of any musical within the past 50 years. The Tony awards have a lot of great options of performances of Tony nominated performances in the archives on their websites.

GRMR-2B-Instruct: Find a clip from a professional production of any musical within the past 50 years. The Tony Awards have a lot of great options of performances of Tony-nominated performances in their archives on their websites.
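
A sketch of reproducing this correction with llama-cpp-python's chat API, which applies the chat template stored in the GGUF metadata. The filename glob passed to Llama.from_pretrained is an assumption; any of the repository's .gguf files should work.

```python
from llama_cpp import Llama

# Fetch and load a quantization directly from the Hub (the filename glob
# is an assumed example).
llm = Llama.from_pretrained(
    repo_id="QuantFactory/GRMR-2B-Instruct-old-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)

text = (
    "Find a clip from a professional production of any musical within the "
    "past 50 years. The Tony awards have a lot of great options of "
    "performances of Tony nominated performances in the archives on their "
    "websites."
)

# The model is trained to echo the input with corrected grammar, so the
# text to fix is passed as a plain user message.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": text}],
    max_tokens=256,
    temperature=0.0,
)
print(out["choices"][0]["message"]["content"])
```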

Model size: 2.61B params
Architecture: gemma2
Format: GGUF

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit


Model tree for QuantFactory/GRMR-2B-Instruct-old-GGUF

Base model: google/gemma-2-2b
This model is a quantized derivative of that base.