---
license: apache-2.0
library_name: transformers
tags:
- trl
- sft
base_model:
- nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
datasets:
- HuggingFaceTB/smoltalk
model-index:
- name: SmolNemo-12B-FFT-experimental
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 33.48
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/SmolNemo-12B-FFT-experimental
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 6.54
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/SmolNemo-12B-FFT-experimental
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.23
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/SmolNemo-12B-FFT-experimental
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.34
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/SmolNemo-12B-FFT-experimental
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.92
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/SmolNemo-12B-FFT-experimental
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 2.41
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/SmolNemo-12B-FFT-experimental
      name: Open LLM Leaderboard
---
![image/png](https://huggingface.co/nbeerbower/SmolNemo-12B/resolve/main/smolnemo_cover.png?download=true)
> 🧪 **Just Another Model Experiment**
>
> This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!
# SmolNemo-12B-FFT-experimental
[Mahou-1.5-mistral-nemo-12B-lorablated](https://huggingface.co/nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated) finetuned on [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk).
**This model exhibits erratic behavior and poor performance.**
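If you still want to poke at it, here is a minimal sketch of loading the model with `transformers`. The model ID is real, but the chat-template call and generation settings are illustrative assumptions rather than a documented recipe.

```python
# Minimal sketch: load the model and run one chat turn with transformers.
# Generation settings are illustrative; given the erratic behavior noted above,
# expect inconsistent results regardless of sampling parameters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/SmolNemo-12B-FFT-experimental"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 12B parameters; bf16 keeps memory manageable
    device_map="auto",
)

messages = [{"role": "user", "content": "Give me a one-sentence summary of SFT."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```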
### Method
SFT on 8x A100 for 0.1 epochs.
This was a full finetune (all parameters updated, no adapters). I think the issues with the model can be chalked up to a conflict between the Mistral Instruct and ChatML prompt formats. A rough sketch of the training setup follows.
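The exact training script and hyperparameters aren't published. As a hedged sketch, an SFT full finetune on smoltalk with TRL's `SFTTrainer` could look like the following; every hyperparameter below is an assumption except the ~0.1 epochs stated above.

```python
# Hedged sketch of a TRL SFT run similar to the one described above (recent TRL assumed).
# Dataset and base model come from this card; all hyperparameters are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")  # config name assumed

config = SFTConfig(
    output_dir="SmolNemo-12B-FFT-experimental",
    num_train_epochs=0.1,           # stated in the card: ~0.1 epochs
    per_device_train_batch_size=1,  # assumed; tuned to per-GPU memory on 8x A100
    gradient_accumulation_steps=8,  # assumed
    learning_rate=2e-5,             # assumed; typical for a full finetune
    bf16=True,
)

trainer = SFTTrainer(
    model="nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated",
    train_dataset=dataset,
    args=config,
)
trainer.train()
```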
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__SmolNemo-12B-FFT-experimental).
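If you want the raw per-task numbers rather than the summary table below, a hedged sketch of pulling that details dataset with `datasets` (config names vary by leaderboard version, so they are listed rather than hard-coded):

```python
# Hedged sketch: inspect the per-task details dataset linked above.
# Config/split names vary by leaderboard version, so list them instead of guessing.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_nbeerbower__SmolNemo-12B-FFT-experimental"
configs = get_dataset_config_names(repo)
print(configs)  # one config per benchmark/subtask

details = load_dataset(repo, configs[0])  # returns a DatasetDict of the available splits
print(details)
```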
| Metric |Value|
|-------------------|----:|
|Avg. | 8.32|
|IFEval (0-Shot) |33.48|
|BBH (3-Shot) | 6.54|
|MATH Lvl 5 (4-Shot)| 0.23|
|GPQA (0-shot) | 1.34|
|MuSR (0-shot) | 5.92|
|MMLU-PRO (5-shot) | 2.41|
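As a sanity check, the reported average is consistent with the unweighted mean of the six benchmark scores:

```python
# Unweighted mean of the six benchmark scores reported above.
scores = [33.48, 6.54, 0.23, 1.34, 5.92, 2.41]
print(round(sum(scores) / len(scores), 2))  # 8.32
```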