Miqu-6B-truthy

A truthful Miqu of 6B parameters, released as an experiment.
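
The checkpoint is stored as FP16 safetensors (about 5.7B parameters), so it should load with the standard `transformers` causal-LM classes. The snippet below is a minimal sketch, not an official usage example from the author; the prompt string and generation settings are placeholders.

```python
# Minimal loading sketch (assumption: standard causal-LM checkpoint, FP16 weights).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vicgalle/Miqu-6B-truthy"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are stored in FP16
    device_map="auto",
)

# Plain-string prompt; the card does not document a specific chat template.
prompt = "Is the Great Wall of China visible from space?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```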

"results": {
    "truthfulqa_mc": {
      "mc1": 0.2521419828641371,
      "mc1_stderr": 0.01520152224629995,
      "mc2": 0.5051887026752994,
      "mc2_stderr": 0.016738600540275827
    }
  },
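
The block above is raw output in the format of EleutherAI's lm-evaluation-harness (the combined `truthfulqa_mc` task with `mc1`/`mc2` keys matches its older 0.3.x interface). The sketch below is an assumed way to reproduce it, not the author's actual evaluation script.

```python
# Reproduction sketch (assumption: lm-evaluation-harness 0.3.x, which exposes the
# combined "truthfulqa_mc" task; newer releases split it into truthfulqa_mc1/mc2).
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=vicgalle/Miqu-6B-truthy",
    tasks=["truthfulqa_mc"],
    num_fewshot=0,
)

# The per-task dict holds mc1, mc1_stderr, mc2, mc2_stderr as shown above.
print(results["results"]["truthfulqa_mc"])
```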

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 30.28 |
| AI2 Reasoning Challenge (25-Shot) | 27.65 |
| HellaSwag (10-Shot)               | 26.71 |
| MMLU (5-Shot)                     | 27.04 |
| TruthfulQA (0-shot)               | 50.63 |
| Winogrande (5-shot)               | 49.64 |
| GSM8k (5-shot)                    |  0.00 |
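
The average is the unweighted mean of the six benchmark scores: (27.65 + 26.71 + 27.04 + 50.63 + 49.64 + 0.00) / 6 ≈ 30.28.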