# Rubra Phi-3 Mini 128k Instruct GGUF
Original model: [rubra-ai/Phi-3-mini-128k-instruct](https://huggingface.co/rubra-ai/Phi-3-mini-128k-instruct)
## Model description
This model is the result of further post-training of microsoft/Phi-3-mini-128k-instruct. It is designed for strong performance across a range of instruction-following tasks and complex interactions, including multi-turn function calling and detailed conversations.
| Model | Function Calling | MMLU | GPQA | GSM-8K | MATH | MT-bench | Win | Loss | Tie | Win Rate | Loss Rate | Adjusted Win Rate |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Phi-3 Mini 128k Instruct (June) | - | 69.36 | 27.01 | 83.7 | 32.92 | 8.02 | 21 | 72 | 67 | 0.13125 | 0.45000 | 0.340625 |
| Rubra Enhanced Phi-3 Mini 128k Instruct (June) | 70.00% | 67.87 | 29.69 | 79.45 | 30.80 | 8.21 | 72 | 21 | 67 | 0.45000 | 0.13125 | 0.659375 |
| Phi-3 Mini 128k Instruct (April) | - | 68.17 | 25.90 | 80.44 | 28.12 | 7.92 | 51 | 45 | 64 | 0.31875 | 0.28125 | 0.51875 |
| Rubra Enhanced Phi-3 Mini 128k Instruct (April) | 65.71% | 66.66 | 29.24 | 74.09 | 26.84 | 7.45 | 45 | 51 | 64 | 0.28125 | 0.31875 | 0.48125 |
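The Win/Loss/Tie columns are head-to-head comparisons between each Rubra model and its base model (160 matchups per pair), and the Adjusted Win Rate is consistent with scoring each tie as half a win. A minimal sketch of that arithmetic (the formula is inferred from the reported numbers, not stated by the authors):

```python
def adjusted_win_rate(wins: int, losses: int, ties: int) -> float:
    """Win rate with each tie scored as half a win."""
    total = wins + losses + ties
    return (wins + ties / 2) / total

# Rubra Enhanced Phi-3 Mini 128k Instruct (June) vs. the June base model:
print(adjusted_win_rate(72, 21, 67))  # 0.659375, matching the table
print(adjusted_win_rate(21, 72, 67))  # 0.340625, the base model's row
```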
- Commit `e2ecb24bd9dae689bb30dafcf13cbbc9dbddead5` is the last commit to include the April-based Phi-3 model; the latest on `main` is built off the June model.
## Training Data
The model underwent additional training on a proprietary dataset encompassing diverse instruction-following, chat, and function calling data. This post-training process enhances the model's ability to integrate tools and manage complex interaction scenarios effectively.
## How to use
Refer to https://docs.rubra.ai/inference/llamacpp for usage instructions. Feel free to ask questions or open issues in our GitHub repo: https://github.com/rubra-ai/rubra
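The docs above cover Rubra's recommended llama.cpp setup, including function calling. As a minimal self-contained sketch using llama-cpp-python (the GGUF filename, context size, and generation settings below are illustrative assumptions, not values from this repo):

```python
# Minimal sketch with llama-cpp-python; the filename below is hypothetical --
# substitute the actual quantization you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="rubra-phi-3-mini-128k-instruct.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,       # raise toward 128k if you have the memory for it
    n_gpu_layers=-1,  # offload all layers to GPU; set to 0 for CPU-only
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize what a GGUF file is in two sentences."},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```

For Rubra's tool-calling format specifically, follow the linked documentation rather than this sketch.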
## Limitations and Bias
While the model performs well on a wide range of tasks, it may still produce biased or incorrect outputs. Users should exercise caution and critical judgment when using the model in sensitive or high-stakes applications. The model's outputs are influenced by the data it was trained on, which may contain inherent biases.
## Ethical Considerations
Users should ensure that the deployment of this model adheres to ethical guidelines and consider the potential societal impact of the generated text. Misuse of the model for generating harmful or misleading content is strongly discouraged.
## Acknowledgements
We would like to thank Microsoft for the base model, microsoft/Phi-3-mini-128k-instruct.
## Contact Information
For questions or comments about the model, please reach out to the Rubra team.
## Citation
If you use this work, please cite it as:
    @misc{rubra_ai_2024,
      author    = {Sanjay Nadhavajhala and Yingbei Tong},
      title     = {Phi-3-mini-128k-instruct},
      year      = 2024,
      url       = {https://huggingface.co/rubra-ai/Phi-3-mini-128k-instruct},
      doi       = {10.57967/hf/2682},
      publisher = {Hugging Face}
    }
## Evaluation results
Self-reported results for the June Rubra model (matching the table above):
- MMLU (5-shot): 67.87
- GPQA (0-shot): 29.69
- GSM-8K (8-shot, CoT): 79.45
- MATH (4-shot, CoT): 30.80
- MT-bench (GPT-4 as judge): 8.21