
Locutusque/LocutusqueXFelladrin-TinyMistral248M-Instruct-GGUF

Quantized GGUF model files for LocutusqueXFelladrin-TinyMistral248M-Instruct from Locutusque
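These GGUF files can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using llama-cpp-python together with huggingface_hub; the quantized filename shown is an assumption and should be replaced with one of the actual files listed in this repository (e.g. a q4_k_m, q5_k_m, or q8_0 variant).

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized GGUF files from this repo.
# NOTE: the filename below is assumed -- check the repository's file list.
model_path = hf_hub_download(
    repo_id="afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct-GGUF",
    filename="locutusquexfelladrin-tinymistral248m-instruct.q4_k_m.gguf",
)

# Load the model and run a simple completion.
llm = Llama(model_path=model_path, n_ctx=2048)
output = llm("Write a Python function that reverses a string.", max_tokens=128)
print(output["choices"][0]["text"])
```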

Original Model Card:

LocutusqueXFelladrin-TinyMistral248M-Instruct

This model was created by merging Locutusque/TinyMistral-248M-Instruct and Felladrin/TinyMistral-248M-SFT-v4 using mergekit. After the two models were merged, the resulting model was further trained on ~20,000 examples from the Locutusque/inst_mix_v2_top_100k dataset at a low learning rate to further normalize the weights. The following is the YAML config used for the merge:

models:
  - model: Felladrin/TinyMistral-248M-SFT-v4
    parameters:
      weight: 0.5
  - model: Locutusque/TinyMistral-248M-Instruct
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16
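
To reproduce the merge, the config above can be saved to a file and passed to mergekit's command-line entry point. The sketch below is illustrative only and is not the authors' exact invocation; available flags may vary between mergekit versions.

```python
import subprocess
from pathlib import Path

# Save the merge config shown above to disk.
config = """\
models:
  - model: Felladrin/TinyMistral-248M-SFT-v4
    parameters:
      weight: 0.5
  - model: Locutusque/TinyMistral-248M-Instruct
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16
"""
Path("merge-config.yml").write_text(config)

# mergekit installs a `mergekit-yaml` CLI that takes the config file and an
# output directory; additional options differ across versions.
subprocess.run(["mergekit-yaml", "merge-config.yml", "./merged-model"], check=True)
```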

The resulting model combines the best of both worlds: Locutusque/TinyMistral-248M-Instruct's coding capability and reasoning skills, and Felladrin/TinyMistral-248M-SFT-v4's low hallucination rate and instruction-following ability. The merged model performs remarkably well considering its size.
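
For quick experimentation with the unquantized weights, the merged model can also be loaded directly with transformers. This is a minimal sketch: the plain-text prompt is illustrative, and any instruction or chat template the model expects should be taken from the upstream model cards.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the full-precision merged model from the original repository.
repo = "Locutusque/LocutusqueXFelladrin-TinyMistral248M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Plain prompt for illustration; apply the model's instruction format if it has one.
inputs = tokenizer("Explain what a hash map is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```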

Evaluation

Coming soon...
