Quantization made by Richard Erkhov.
- Github
- Discord
- Request more models
PHI3-Medium-NLI-16bit - GGUF
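The GGUF files in this repository can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using the llama-cpp-python bindings; the quantized filename (assumed here to be `PHI3-Medium-NLI-16bit.Q4_K_M.gguf`) and the NLI-style prompt are illustrative placeholders, so substitute the file you actually download and the prompt format the finetune expects.

```python
# Minimal sketch: running a GGUF quantization with llama-cpp-python.
# The filename below is an assumption -- use whichever quantized file
# you downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="PHI3-Medium-NLI-16bit.Q4_K_M.gguf",  # assumed local path
    n_ctx=4096,        # matches the 4k context of the base model
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# An NLI-style prompt is assumed, since the model name suggests an NLI finetune.
response = llm.create_chat_completion(
    messages=[
        {"role": "user",
         "content": "Premise: The cat is sleeping on the couch.\n"
                    "Hypothesis: The cat is awake.\n"
                    "Label: entailment, neutral, or contradiction?"}
    ],
    max_tokens=16,
)
print(response["choices"][0]["message"]["content"])
```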
Original model description:
base_model: unsloth/phi-3-medium-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
Uploaded model
- Developed by: 1024m
- License: apache-2.0
- Finetuned from model: unsloth/phi-3-medium-4k-instruct-bnb-4bit
This model was trained 2x faster with Unsloth and Hugging Face's TRL library, starting from the phi-3-medium base model listed above.
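For reference, an Unsloth + TRL SFT setup of the kind described above typically looks like the sketch below. The dataset file, LoRA rank, and training arguments are illustrative assumptions, not the values used for this particular finetune.

```python
# Illustrative sketch of an Unsloth + TRL SFT run on the stated base model.
# Dataset path, LoRA hyperparameters, and training arguments are assumptions.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/phi-3-medium-4k-instruct-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Hypothetical local training file with a "text" column of formatted prompts.
dataset = load_dataset("json", data_files="nli_train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```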