---
license: other
language:
- en
- fr
- es
- hi
- zh
- code
base_model: microsoft/Orca-2-13b
datasets:
- HuggingFaceH4/no_robots
- mlabonne/guanaco-llama2-1k
- OpenAssistant/oasst_top1_2023-08-25
- totally-not-an-llm/EverythingLM-data-V3
- LDJnr/Pure-Dove
- LDJnr/Capybara
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Verified-Camel
---
|
The "microsoft/Orca-2-13b" model fully fine-tuned on HuggingFaceH4/no_robots, totally-not-an-llm/EverythingLM-data-V3, LDJnr/Capybara, LDJnr/Pure-Dove, LDJnr/LessWrong-Amplify-Instruct, LDJnr/Verified-Camel, mlabonne/guanaco-llama2-1k, and OpenAssistant/oasst_top1_2023-08-25. This model achieved a test loss of 0.39 on LDJnr/Verified-Camel. |
|
|
|
Make sure to comply with the Microsoft Research License. Please read it before using this model.
|
This model was trained with the ChatML prompt template; a reference prompt and a minimal inference sketch follow.
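
For reference, a ChatML-formatted prompt looks like the following (the system message is illustrative and can be replaced):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
```

Below is a minimal inference sketch using `transformers`, assuming a hypothetical repo id `your-username/orca-2-13b-sft` (substitute the actual repo id). The prompt is built by hand so the example does not depend on the tokenizer shipping a chat template:

```python
# Minimal inference sketch. Assumptions: the repo id below is a placeholder,
# and you have enough GPU memory for a 13B model in float16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/orca-2-13b-sft"  # placeholder: replace with the real repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

# Build the ChatML prompt by hand, ending with the assistant header so the
# model generates the assistant turn.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain fine-tuning in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Generation should be stopped at `<|im_end|>`; depending on how the tokenizer was configured during training, this may require registering it as an EOS/stop token.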