---
license: apache-2.0
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
inference:
  parameters:
    do_sample: true
    temperature: 0.1
    top_p: 0.14
    top_k: 12
    max_new_tokens: 250
    repetition_penalty: 1.1
base_model: Locutusque/TinyMistral-248M-v2
---

# Description

This model is Locutusque/TinyMistral-248M-v2 fine-tuned on the HuggingFaceH4/ultrachat_200k dataset.

# Recommended inference parameters

```
do_sample: true
temperature: 0.1
top_p: 0.14
top_k: 12
max_new_tokens: 250
repetition_penalty: 1.1
```

# Recommended prompt template

```
<|im_start|>user\n{user message}<|im_end|>\n<|im_start|>assistant\n{assistant message}<|endoftext|>
```

# Evaluation

This model will be submitted to the Open LLM Leaderboard.
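As a minimal sketch of how the recommended parameters and prompt template above fit together, the snippet below builds a generation prompt in the expected ChatML-style format and collects the sampling settings into a dict that could be passed as keyword arguments to a transformers `text-generation` pipeline. The helper name `format_prompt` is illustrative, not part of the model's API.

```python
# Sampling settings from the "Recommended inference parameters" section,
# shaped as kwargs for a transformers text-generation call.
GEN_KWARGS = {
    "do_sample": True,
    "temperature": 0.1,
    "top_p": 0.14,
    "top_k": 12,
    "max_new_tokens": 250,
    "repetition_penalty": 1.1,
}


def format_prompt(user_message: str) -> str:
    """Wrap a user message in the recommended template, leaving the
    assistant turn open so the model completes it (hypothetical helper)."""
    return f"<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant\n"


print(format_prompt("What is the capital of France?"))
```

The formatted string would then be passed to the model (e.g. `pipeline("text-generation", model=...)(prompt, **GEN_KWARGS)`), with generation stopping at the `<|endoftext|>` token.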