
QuantFactory/Gemma-2-2B-Stheno-Filtered-GGUF

This is a quantized version of SaisExperiments/Gemma-2-2B-Stheno-Filtered, created using llama.cpp.
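To run one of the GGUF files locally, here is a minimal sketch using llama-cpp-python; the quant filename is an assumption, so substitute whichever bit width you downloaded:

```python
# Minimal sketch, assuming llama-cpp-python is installed and a Q4_K_M quant
# was downloaded; the filename below is a hypothetical placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="Gemma-2-2B-Stheno-Filtered.Q4_K_M.gguf",  # assumed filename
    n_ctx=1024,  # matches the cutoff_len used during training (see config below)
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a two-line poem about cats."}]
)
print(out["choices"][0]["message"]["content"])
```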

Original Model Card


I don't have anything else, so you get a cursed cat image.

Basic info

This is a LoRA fine-tune of unsloth/gemma-2-2b-it on the anthracite-org/stheno-filtered-v1.1 dataset.

It saw 76.6M tokens during training.

This time it took 14 hours, and I'm pretty sure I've been training with the wrong prompt template. X-X
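For reference, the prompt format that the `template: gemma` setting corresponds to can be inspected straight from the base model's tokenizer; a short sketch with transformers:

```python
# Sketch: print the Gemma 2 chat format from the base model's tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("unsloth/gemma-2-2b-it")
messages = [{"role": "user", "content": "Hello!"}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Expected shape (Gemma 2 format):
# <bos><start_of_turn>user
# Hello!<end_of_turn>
# <start_of_turn>model
```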

Training config:

```yaml
cutoff_len: 1024
dataset: stheno-3.4
dataset_dir: data
ddp_timeout: 180000000
do_train: true
finetuning_type: lora
flash_attn: auto
fp16: true
gradient_accumulation_steps: 8
include_num_input_tokens_seen: true
learning_rate: 5.0e-05
logging_steps: 5
lora_alpha: 64
lora_dropout: 0
lora_rank: 64
lora_target: all
lr_scheduler_type: cosine
max_grad_norm: 1.0
max_samples: 100000
model_name_or_path: unsloth/gemma-2-2b-it
num_train_epochs: 3.0
optim: adamw_8bit
output_dir: saves/Gemma-2-2B-Chat/lora/stheno
packing: false
per_device_train_batch_size: 2
plot_loss: true
preprocessing_num_workers: 16
quantization_bit: 4
quantization_method: bitsandbytes
report_to: none
save_steps: 100
stage: sft
template: gemma
use_unsloth: true
warmup_steps: 0
```
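With `per_device_train_batch_size: 2` and `gradient_accumulation_steps: 8`, the effective batch size works out to 16 per device. The run itself used a LLaMA-Factory-style config with Unsloth (`use_unsloth: true`); as a rough equivalent, the LoRA and 4-bit quantization lines above map onto plain transformers + peft roughly like the sketch below, which is an approximation and not the exact training stack:

```python
# Rough transformers/peft equivalent of the LoRA + 4-bit settings above;
# the actual run used LLaMA-Factory with Unsloth, so this is only a sketch.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(load_in_4bit=True)  # quantization_bit: 4, bitsandbytes
model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-2-2b-it",
    quantization_config=bnb,
    torch_dtype=torch.float16,  # fp16: true
)
lora = LoraConfig(
    r=64,                         # lora_rank: 64
    lora_alpha=64,                # lora_alpha: 64
    lora_dropout=0.0,             # lora_dropout: 0
    target_modules="all-linear",  # lora_target: all
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```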
Model details

- Model size: 2.61B params
- Architecture: gemma2
- Base model: google/gemma-2-2b
- Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit GGUF
- Training dataset: anthracite-org/stheno-filtered-v1.1