The "microsoft/Orca-2-13b" model fully fine-tuned on HuggingFaceH4/no_robots, totally-not-an-llm/EverythingLM-data-V3, mlabonne/guanaco-llama2-1k, OpenAssistant/oasst_top1_2023-08-25, and garage-bAInd/Open-Platypus. This model achieved a test loss of 0.38 on garage-bAInd/Open-Platypus.

This model is subject to the Microsoft Research License; please read it before using this model.

This model was trained using the ChatML prompt template.
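For reference, a minimal sketch of the ChatML format (the system message and user text here are illustrative placeholders):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{your prompt here}<|im_end|>
<|im_start|>assistant
```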

The responses shown in the Inference API were generated using the following sampling parameters:

- temperature = 0.1
- top_p = 0.14
- top_k = 41
- repetition_penalty = 1.176
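As an illustration, here is a minimal generation sketch using Hugging Face transformers with these sampling parameters. It assumes a standard PyTorch setup with enough memory for a 13B model in BF16; the prompt content is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/Orca-2-13b-SFT_v5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are distributed in BF16
    device_map="auto",
)

# Build a ChatML prompt, the template this model was trained on.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.1,
    top_p=0.14,
    top_k=41,
    repetition_penalty=1.176,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```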

The weights are provided in safetensors format (13B parameters, BF16).
