# anilbhatt1/phi2-oasst-guanaco-bf16-custom-GGUF

Quantized GGUF model files for phi2-oasst-guanaco-bf16-custom from anilbhatt1.
| Name | Quant method | Size |
| --- | --- | --- |
| phi2-oasst-guanaco-bf16-custom.fp16.gguf | fp16 | 5.56 GB |
| phi2-oasst-guanaco-bf16-custom.q2_k.gguf | q2_k | 1.17 GB |
| phi2-oasst-guanaco-bf16-custom.q3_k_m.gguf | q3_k_m | 1.48 GB |
| phi2-oasst-guanaco-bf16-custom.q4_k_m.gguf | q4_k_m | 1.79 GB |
| phi2-oasst-guanaco-bf16-custom.q5_k_m.gguf | q5_k_m | 2.07 GB |
| phi2-oasst-guanaco-bf16-custom.q6_k.gguf | q6_k | 2.29 GB |
| phi2-oasst-guanaco-bf16-custom.q8_0.gguf | q8_0 | 2.96 GB |
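The sketch below shows one way to run a quant from this repo with llama-cpp-python: download the file from the Hub, then load it. This is a minimal example, not part of the original card; the repo id and filename are taken from this card, while the context size, the Guanaco-style prompt template, and the generation settings are assumptions you should adjust to your setup.

```python
# Minimal sketch: fetch one quant from this repo and run it with llama-cpp-python.
# Repo id and filename come from this card; n_ctx, the prompt format, and the
# generation settings below are assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="afrideva/phi2-oasst-guanaco-bf16-custom-GGUF",
    filename="phi2-oasst-guanaco-bf16-custom.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)

prompt = "### Human: Summarize what QLoRA fine-tuning does.\n### Assistant:"
out = llm(prompt, max_tokens=128, stop=["### Human:"])
print(out["choices"][0]["text"])
```

As a rule of thumb, q4_k_m is a common default trade-off between file size and output quality; the fp16 file is the unquantized reference, and the q2_k/q3_k_m files trade quality for a smaller footprint.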
## Original Model Card:

### Finetuned microsoft-phi2 model
- Microsoft phi-2 model fine-tuned on the "timdettmers/openassistant-guanaco" dataset using the QLoRA technique (see the sketch after this list)
- Runs on a Colab T4 GPU
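For readers who want to reproduce a similar fine-tune, the following is a minimal QLoRA sketch using the Hugging Face transformers, peft, bitsandbytes, and datasets libraries. The original training script is not published here, so the adapter rank, target modules, and other hyperparameters are illustrative assumptions rather than the author's actual settings.

```python
# Minimal QLoRA sketch in the spirit of the description above.
# All hyperparameters (rank, target modules, dropout, etc.) are assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                       # 4-bit base weights (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tok = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb,
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections; module names are assumptions.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")
# ... tokenize dataset["text"] and train with transformers.Trainer or trl's SFTTrainer
```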
Base model: anilbhatt1/phi2-oasst-guanaco-bf16-custom (quantized here as afrideva/phi2-oasst-guanaco-bf16-custom-GGUF)