Reminder: use the dev version of Transformers:

```bash
pip install git+https://github.com/huggingface/transformers.git
```
Finetune Phi-3, Llama 3, Gemma 2, and Mistral 2-5x faster with 70% less memory via Unsloth!
Directly quantized 4-bit model with bitsandbytes. We Mistralfied the model to ensure it can be used on many platforms.
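Because the checkpoint is pre-quantized with bitsandbytes, loading it with plain transformers needs no extra quantization config. A minimal loading sketch, assuming the repo id `unsloth/Phi-3-mini-4k-instruct-bnb-4bit` (an assumption; substitute this card's actual id):

```python
# Minimal sketch: load the pre-quantized 4-bit weights with transformers.
# No BitsAndBytesConfig is needed; the quantization config ships with the model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place layers on the available GPU(s)
)
```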
We have a Google Colab Tesla T4 notebook for Phi-3 (mini) here: https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing

And another notebook for Phi-3 (medium) here: https://colab.research.google.com/drive/1hhdhBa1j_hsymiW9m-WzxQtgqTH_NHqi?usp=sharing
✨ Finetune for Free
All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model that can be exported to GGUF or vLLM, or uploaded to Hugging Face; see the export sketch after the notes below.
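For reference, a minimal sketch of the flow those notebooks automate, assuming the `unsloth` package; the repo id and LoRA hyperparameters below are illustrative placeholders, not the notebooks' exact settings:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Phi-3-mini-4k-instruct-bnb-4bit",  # assumed repo id
    max_seq_length=2048,
    load_in_4bit=True,  # keep the base weights in 4-bit to save memory
)
# Attach LoRA adapters so only a small fraction of parameters is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # illustrative LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
# From here, train with a standard TRL SFTTrainer on your dataset.
```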
| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Llama 3 (8B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Gemma 2 (9B) | ▶️ Start on Colab | 2x faster | 63% less |
| Mistral (7B) | ▶️ Start on Colab | 2.2x faster | 62% less |
| Phi-3 (mini) | ▶️ Start on Colab | 2x faster | 50% less |
| Phi-3 (medium) | ▶️ Start on Colab | 2x faster | 50% less |
| TinyLlama | ▶️ Start on Colab | 3.9x faster | 74% less |
| CodeLlama (34B) on A100 | ▶️ Start on Colab | 1.9x faster | 27% less |
| Mistral (7B) on 1x T4 | ▶️ Start on Kaggle | 5x faster* | 62% less |
| DPO - Zephyr | ▶️ Start on Colab | 1.9x faster | 19% less |
- This conversational notebook is useful for ShareGPT ChatML / Vicuna templates.
- This text completion notebook is for raw text. This DPO notebook replicates Zephyr.
\* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
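Continuing from the finetuning sketch above, a hedged sketch of the export step mentioned earlier (GGUF / vLLM / Hugging Face upload), using Unsloth's save helpers; the directory, repo name, and quantization method are placeholder choices:

```python
# GGUF file for llama.cpp-compatible runtimes ("q4_k_m" is one common choice):
model.save_pretrained_gguf("phi3-gguf", tokenizer, quantization_method="q4_k_m")

# Merged 16-bit weights, loadable by vLLM and pushed to the Hub
# ("your-username/phi3-finetune" is a placeholder repo name):
model.push_to_hub_merged("your-username/phi3-finetune", tokenizer,
                         save_method="merged_16bit")
```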