# NickyNicky/Phi-3-mini-128k-instruct_function-GGUF

Quantized GGUF model files for Phi-3-mini-128k-instruct_function from NickyNicky.
| Name | Quant method | Size |
| --- | --- | --- |
| phi-3-mini-128k-instruct_function.fp16.gguf | fp16 | 7.64 GB |
| phi-3-mini-128k-instruct_function.q2_k.gguf | q2_k | 1.42 GB |
| phi-3-mini-128k-instruct_function.q3_k_m.gguf | q3_k_m | 1.96 GB |
| phi-3-mini-128k-instruct_function.q4_k_m.gguf | q4_k_m | 2.39 GB |
| phi-3-mini-128k-instruct_function.q5_k_m.gguf | q5_k_m | 2.82 GB |
| phi-3-mini-128k-instruct_function.q6_k.gguf | q6_k | 3.14 GB |
| phi-3-mini-128k-instruct_function.q8_0.gguf | q8_0 | 4.06 GB |
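A minimal sketch of downloading and running one of these quants locally, assuming the repo id `afrideva/Phi-3-mini-128k-instruct_function-GGUF` from this card and a locally built copy of llama.cpp (its `llama-cli` binary); the prompt and generation settings are placeholders:

```shell
# Install the Hugging Face hub CLI (ships with the huggingface_hub package).
pip install -U huggingface_hub

# Fetch a single quant file; q4_k_m is a common size/quality trade-off
# per the table above.
huggingface-cli download afrideva/Phi-3-mini-128k-instruct_function-GGUF \
  phi-3-mini-128k-instruct_function.q4_k_m.gguf --local-dir .

# Run it with llama.cpp's CLI. -c sets the context window, -n the number
# of tokens to generate, -p the prompt.
./llama-cli -m phi-3-mini-128k-instruct_function.q4_k_m.gguf \
  -c 4096 -n 256 -p "Write a function-calling example."
```

Smaller quants (q2_k, q3_k_m) trade output quality for lower memory use; q8_0 and fp16 are closest to the original weights.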
**Base model:** NickyNicky/Phi-3-mini-128k-instruct_function