
Phi-2 Super (SFT + cDPO)


Description

This repo contains GGUF-format model files for abacaj's Phi-2 Super (2.78B parameters, phi2 architecture).

Quantization types

Since the model is relatively small, I recommend the larger quantizations; a minimal usage sketch follows the table below.

| Quantization method | Bits | Description | Recommended |
| --- | --- | --- | --- |
| Q2_K | 2 | smallest, significant quality loss | ❌ |
| Q3_K_S | 3 | very small, high quality loss | ❌ |
| Q3_K_M | 3 | very small, high quality loss | ❌ |
| Q3_K_L | 3 | small, substantial quality loss | ❌ |
| Q4_0 | 4 | legacy; small, very high quality loss | ❌ |
| Q4_K_M | 4 | medium, balanced quality | ❌ |
| Q5_0 | 5 | legacy; medium, balanced quality | ❌ |
| Q5_K_S | 5 | large, low quality loss | ✅ |
| Q5_K_M | 5 | large, very low quality loss | ✅ |
| Q6_K | 6 | very large, extremely low quality loss | ❌ |
| Q8_0 | 8 | very large, extremely low quality loss | ❌ |
| FP16 | 16 | enormous, negligible quality loss | ❌ |
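
As a rough sketch of how one of these files can be used, the snippet below downloads a quantized file with `huggingface_hub` and loads it with `llama-cpp-python`. The repo id and filename shown are placeholders, not names confirmed by this card; check the repo's file listing for the exact ones.

```python
# Sketch: download a quantized GGUF file and load it with llama-cpp-python.
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# NOTE: repo_id and filename below are hypothetical placeholders --
# replace them with the actual values from this repo's file listing.
model_path = hf_hub_download(
    repo_id="your-username/phi-2-super-GGUF",
    filename="phi-2-super.Q5_K_M.gguf",
)

# Load the model; n_ctx is an example context size, not a required value.
llm = Llama(model_path=model_path, n_ctx=2048)
```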

Original model card: Phi-2-super (SFT + cDPO)

Base Model: microsoft/phi-2

Chat template

The model uses the same chat template as found in Mistral instruct models:

```python
text = (
    "<|endoftext|>[INST] What is your favourite condiment? [/INST]"
    "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!<|endoftext|> "
    "[INST] Do you have mayonnaise recipes? [/INST]"
)
```

MT-bench / HumanEval

(MT-bench and HumanEval benchmark charts)
