---
license: apache-2.0
datasets:
- Locutusque/hercules-v1.0
- Open-Orca/SlimOrca-Dedup
language:
- en
base_model: Locutusque/TinyMistral-248M-v2.5
tags:
- chemistry
- biology
- not-for-all-audiences
- merge
- code
inference:
  parameters:
    do_sample: true
    renormalize_logits: false
    temperature: 0.8
    top_p: 0.14
    top_k: 12
    min_new_tokens: 2
    max_new_tokens: 96
    repetition_penalty: 1.15
    no_repeat_ngram_size: 5
    epsilon_cutoff: 0.002
widget:
- text: "<|im_start|>user\nWrite me a Python program that calculates the factorial of n. <|im_end|>\n<|im_start|>assistant\n"
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/TinyMistral-248M-v2.5-Instruct-GGUF
This is a quantized version of [Locutusque/TinyMistral-248M-v2.5-Instruct](https://huggingface.co/Locutusque/TinyMistral-248M-v2.5-Instruct) created using llama.cpp.
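A minimal sketch of running one of the GGUF files with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is shown below. The quantization filename is a placeholder (pick the variant you actually downloaded from this repo), and the sampling settings simply mirror the inference parameters from the model card metadata.

```python
# Sketch: run a GGUF quant of this model with llama-cpp-python.
# The filename below is a placeholder -- use the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="TinyMistral-248M-v2.5-Instruct.Q8_0.gguf",  # hypothetical filename
    n_ctx=2048,
)

# ChatML prompt, matching the widget example in the metadata.
prompt = (
    "<|im_start|>user\n"
    "Write me a Python program that calculates the factorial of n. <|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(
    prompt,
    max_tokens=96,        # max_new_tokens
    temperature=0.8,
    top_p=0.14,
    top_k=12,
    repeat_penalty=1.15,  # repetition_penalty
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```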
# Original Model Card

# Model description
Fine-tuned from Locutusque/TinyMistral-248M-v2.5 on SlimOrca-Dedup and Hercules-v1.0, averaging a loss of 1.5 during training. This model's performance is excellent considering its size.

This model may output X-rated content. You, and you alone, are responsible for downloading and using the model and its outputs. You have been warned.

Use the ChatML prompt format with this model, as in the sketch below.
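For the unquantized checkpoint, the same ChatML prompt can be fed through 🤗 Transformers. This is a sketch under that assumption, with the generation settings copied from the inference parameters in the metadata above.

```python
# Sketch: ChatML prompting of the unquantized checkpoint via transformers,
# using the generation settings from the card's inference parameters.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/TinyMistral-248M-v2.5-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "<|im_start|>user\n"
    "Write me a Python program that calculates the factorial of n. <|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    top_p=0.14,
    top_k=12,
    min_new_tokens=2,
    max_new_tokens=96,
    repetition_penalty=1.15,
    no_repeat_ngram_size=5,
    epsilon_cutoff=0.002,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```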
# Evaluation

This model will be submitted to the Open LLM Leaderboard.