GGUF quantized version of the Phi-3 Mini 128K Instruct model

Original project source (base model)

Q_2 (poor quality; not recommended)

Q_3 (acceptable)

Q_4 family is recommended (also good for running on CPU)

Q_5 (good in general)

Q_6 is also good; if you want a better result, take this one instead of Q_5

Q_8 is very good, but it needs a reasonable amount of RAM; otherwise you can expect a long wait

16-bit and 32-bit versions are also provided here for research purposes; since the 16-bit file size is similar to the original safetensors, if you have a GPU, go ahead with the safetensors instead, the results are pretty much the same

How to run it

Use any connector for interacting with GGUF files, e.g., gguf-connector; a sketch of one common approach is shown below.
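For example, here is a minimal sketch using llama-cpp-python, one common way to load GGUF files from Python. The file name is an assumption; substitute the quant you actually downloaded from this repo.

```python
# Minimal sketch, assuming the llama-cpp-python package is installed
# (pip install llama-cpp-python) and a Q4 quant has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="phi3-q4_k_m.gguf",  # assumed file name; point this at your downloaded quant
    n_ctx=4096,                     # context window; raise it for longer prompts
)

# Phi-3 instruct models expect chat-style prompting
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}]
)
print(response["choices"][0]["message"]["content"])
```

Any other GGUF runner (llama.cpp CLI, gguf-connector, etc.) works the same way: point it at the quantized file and prompt it.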

Welcome to the AI era.

Model size: 3.82B params (architecture: phi3)
