
🧪 Just Another Model Experiment

This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!

SmolNemo-12B-FFT-experimental

Mahou-1.5-mistral-nemo-12B-lorablated finetuned on HuggingFaceTB/smoltalk.

Note: this model has erratic behavior and poor performance.

Method

Supervised fine-tuning (SFT) on 8x A100s for 0.1 epochs.
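
For context, a run like this would look roughly like the sketch below using trl's SFTTrainer. The trainer choice, the dataset config, and every hyperparameter except the epoch count are assumptions for illustration, not details from this card:

```python
# Hypothetical sketch of the SFT run described above. Only the dataset
# and the 0.1-epoch count come from this card; everything else is assumed.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# "all" config assumed; smoltalk ships several subsets.
dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")

config = SFTConfig(
    output_dir="SmolNemo-12B-FFT-experimental",
    num_train_epochs=0.1,           # matches the 0.1 epochs above
    per_device_train_batch_size=1,  # assumed, not documented here
    gradient_accumulation_steps=8,  # assumed
    bf16=True,                      # matches the BF16 tensor type below
)

trainer = SFTTrainer(
    model="Mahou-1.5-mistral-nemo-12B-lorablated",  # base model as named above; resolve to its full hub id
    train_dataset=dataset,
    args=config,
)
trainer.train()
```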

This was a full finetune. I think the model's issues can be chalked up to the conflict between the base model's Mistral Instruct prompt format and the ChatML format of the fine-tuning data.
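
To make that conflict concrete, here's a rough side-by-side of the two formats. Exact whitespace and special tokens vary by tokenizer version, so treat these strings as approximations rather than canonical templates:

```python
# Mistral Instruct style, which the Mistral-Nemo base family expects:
mistral_instruct = "<s>[INST] Hi, who are you? [/INST] I'm a model.</s>"

# ChatML style, which conversational SFT sets like smoltalk are
# commonly serialized with:
chatml = (
    "<|im_start|>user\nHi, who are you?<|im_end|>\n"
    "<|im_start|>assistant\nI'm a model.<|im_end|>\n"
)

# Fine-tuning a Mistral Instruct base on ChatML-formatted turns leaves
# the model with two competing conventions for turn boundaries and stop
# tokens, which lines up with the erratic behavior noted above.
print(mistral_instruct)
print(chatml)
```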

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 8.32 |
| IFEval (0-Shot) | 33.48 |
| BBH (3-Shot) | 6.54 |
| MATH Lvl 5 (4-Shot) | 0.23 |
| GPQA (0-shot) | 1.34 |
| MuSR (0-shot) | 5.92 |
| MMLU-PRO (5-shot) | 2.41 |
Model size: 12.2B params · Tensor type: BF16 · Format: Safetensors
Inference Examples
This model does not have enough activity to be deployed to Inference API (serverless) yet. Increase its social visibility and check back later, or deploy to Inference Endpoints (dedicated) instead.
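
You can still run it locally with transformers. A minimal sketch follows; the prompt and generation settings are my own placeholders, not recommendations from this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/SmolNemo-12B-FFT-experimental"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 tensor type above
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a haiku about failed experiments."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Given the template conflict described in the Method section, expect the output to be erratic.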

Model tree for nbeerbower/SmolNemo-12B-FFT-experimental

- Finetuned (9) → this model
- Merges: 1 model
- Quantizations: 1 model

Dataset used to train nbeerbower/SmolNemo-12B-FFT-experimental: HuggingFaceTB/smoltalk
