This is rhysjones/phi-2-orange-v2, quantized with the help of an importance matrix so that it loses less quality from quantization, with quantization levels available for lower-memory devices to run.
Kalomaze's "groups_merged.txt" was used as the calibration data for the importance matrix, with the context length set to 2,048.
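For reference, here is a minimal sketch of that workflow, assuming a local llama.cpp build. The binary names and GGUF filenames are illustrative, not the exact commands used (newer llama.cpp builds ship the tools as `llama-imatrix` and `llama-quantize`):

```python
# Sketch of the imatrix quantization flow, assuming a local llama.cpp build.
# Binary names and GGUF filenames are illustrative.
import subprocess

# 1. Compute the importance matrix from the calibration text at 2,048 context.
subprocess.run([
    "./imatrix",
    "-m", "phi-2-orange-v2-f16.gguf",  # hypothetical full-precision GGUF
    "-f", "groups_merged.txt",         # Kalomaze's calibration data
    "-c", "2048",                      # context length used here
    "-o", "imatrix.dat",
], check=True)

# 2. Quantize, letting the importance matrix steer precision toward the
#    weights that matter most for activations.
subprocess.run([
    "./quantize",
    "--imatrix", "imatrix.dat",
    "phi-2-orange-v2-f16.gguf",
    "phi-2-orange-v2.IQ2_M.gguf",
    "IQ2_M",                           # repeat for each level in the table below
], check=True)
```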
Here's a table of approximate HellaSwag scores (measured over 1,000 tasks). Because the tasks are randomized, the scores may be slightly imprecise:
| Quantization | HellaSwag |
|---|---|
| IQ1_S | 32.5% |
| IQ2_XXS | 56.3% |
| IQ2_XS | 64.7% |
| IQ2_S | 67.0% |
| IQ2_M | 69.1% |
| Q2_K_S | 65.3% |
| Q2_K | 69.2% |
| IQ3_XXS | Untested |
| IQ3_XS | Untested |
| IQ3_S | Untested |
| IQ3_M | Untested |
| Q3_K_M | 73.8% |
| IQ4_XS | 74.0% |
| IQ4_NL | 73.6% |
| Q4_0 | 74.1% |
| Q4_K_M | 74.4% |
| Q5_K_M | Untested |
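To run one of these files, here is a minimal sketch using llama-cpp-python; the filename is illustrative, so pick whichever level from the table fits your memory:

```python
# Minimal sketch of loading a quant with llama-cpp-python; filename is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="phi-2-orange-v2.Q4_K_M.gguf",  # any level from the table above
    n_ctx=2048,                                # Phi-2's native context length
)
print(llm("Once upon a time", max_tokens=32)["choices"][0]["text"])
```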
Original model card below.
# Phi-2 Orange Version 2
A two-step finetune of Phi-2, with a bit more zest.
This is an improved version of the original Phi-2-Orange that uses an updated training process on the same datasets.
It also uses the latest updated model from Microsoft's Phi-2, making it directly usable with Hugging Face's Transformers library (without needing `trust_remote_code`).
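As a sketch of what that means in practice (the model ID is from the original card; the dtype choice is illustrative):

```python
# Loads directly with stock Transformers; no trust_remote_code flag needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rhysjones/phi-2-orange-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
```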
## Prompt Format
Phi-2 Orange v2 uses ChatML as the prompt format.
(Update 12th March 2024: fixed eos_token issue)
It's recommended to always prompt with a system instruction (use whatever system prompt you like):
```
<|im_start|>system
You are a helpful assistant for Python which outputs in Markdown format.<|im_end|>
<|im_start|>user
Write a function to calculate the Fibonacci sequence<|im_end|>
<|im_start|>assistant
```
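If you are using Transformers, a sketch of producing this prompt through the tokenizer's chat template rather than by hand (this assumes the repo ships a ChatML template, which the eos_token fix above suggests):

```python
# Sketch: build the ChatML prompt via the chat template instead of by hand.
# Assumes the tokenizer from rhysjones/phi-2-orange-v2 carries a ChatML template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rhysjones/phi-2-orange-v2")
messages = [
    {"role": "system",
     "content": "You are a helpful assistant for Python which outputs in Markdown format."},
    {"role": "user",
     "content": "Write a function to calculate the Fibonacci sequence"},
]
# add_generation_prompt appends the trailing <|im_start|>assistant turn.
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True)
print(prompt)
```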
If, for example, you find the model's output overly verbose, instruct it to be short and concise:
```
<|im_start|>system
You are a helpful assistant. Be short and direct in your answers.<|im_end|>
<|im_start|>user
Was Tom Hanks in the movie Forrest Gump? If so, who did he play and give details of the plot.<|im_end|>
<|im_start|>assistant
```
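For the GGUF quants, the same conversation can be sketched with llama-cpp-python's built-in "chatml" handler (the model filename is illustrative):

```python
# Sketch: chat completion over a GGUF quant using llama-cpp-python's
# "chatml" handler; the model filename is illustrative.
from llama_cpp import Llama

llm = Llama(model_path="phi-2-orange-v2.Q4_K_M.gguf",
            chat_format="chatml", n_ctx=2048)
resp = llm.create_chat_completion(messages=[
    {"role": "system",
     "content": "You are a helpful assistant. Be short and direct in your answers."},
    {"role": "user",
     "content": "Was Tom Hanks in the movie Forrest Gump? "
                "If so, who did he play and give details of the plot."},
])
print(resp["choices"][0]["message"]["content"])
```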
## Evaluations
### Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Average | 63.67 |
| AI2 Reasoning Challenge (25-Shot) | 61.86 |
| HellaSwag (10-Shot) | 76.32 |
| MMLU (5-Shot) | 55.72 |
| TruthfulQA (0-shot) | 54.84 |
| Winogrande (5-shot) | 75.69 |
| GSM8k (5-shot) | 57.62 |
### YALL - Yet Another LLM Leaderboard
Evaluation from mlabonne's alternative LLM leaderboard:
| Metric | Value |
|---|---|
| Average | 49.64 |
| AGIEval | 34.55 |
| GPT4All | 70.96 |
| TruthfulQA | 54.87 |
| Bigbench | 38.17 |
## Limitations
This model shares the same limitations as the underlying Phi-2 model, details of which are found here.