Post 2867

I uploaded DeepSeek R1 GGUFs!

unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF
unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF

2-bit for MoE:
unsloth/DeepSeek-R1-GGUF
unsloth/DeepSeek-R1-Zero-GGUF

More at unsloth/deepseek-r1-all-versions-678e1c48f5d2fce87892ace5
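For anyone wanting to try these locally, here is a minimal sketch of loading one of the distill GGUFs with llama-cpp-python. The quant pattern below is an assumption; check the repo's file list for the exact filenames.

```python
# Minimal sketch: run a DeepSeek R1 distill GGUF via llama-cpp-python.
# Assumes llama-cpp-python is installed and a Q4_K_M quant exists in
# the repo -- verify the filename in the repo's "Files" tab.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; picks the matching quant
    n_ctx=4096,               # context window
    n_gpu_layers=-1,          # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization briefly."}]
)
print(out["choices"][0]["message"]["content"])
```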
Post 4668

We fixed many bugs in Phi-4 and uploaded fixed GGUF + 4-bit versions! ✨ Our fixed versions score even higher on the Open LLM Leaderboard than Microsoft's originals!

GGUFs: unsloth/phi-4-GGUF
Dynamic 4-bit: unsloth/phi-4-unsloth-bnb-4bit

You can also now finetune Phi-4 for free on Colab: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb

Read our blog post for more details on the bug fixes: https://unsloth.ai/blog/phi4
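If you'd rather finetune outside Colab, here's a rough sketch of the usual Unsloth setup with the dynamic 4-bit checkpoint; the hyperparameters below are illustrative, not from the post.

```python
# Sketch of finetuning Phi-4 with Unsloth's FastLanguageModel API;
# hyperparameter values here are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/phi-4-unsloth-bnb-4bit",  # dynamic 4-bit checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
# From here, train with e.g. trl's SFTTrainer as in the Colab notebook.
```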
Post 3170

DeepSeek V3, including GGUF and bf16 versions, is now uploaded! Includes 2, 3, 4, 5, 6 and 8-bit quantized versions.

GGUFs: unsloth/DeepSeek-V3-GGUF
bf16: unsloth/DeepSeek-V3-bf16

Minimum hardware requirements to run: 48GB RAM + 250GB of disk space for the 2-bit version.

See how to run them, with examples, in the full collection: unsloth/deepseek-v3-all-versions-677cf5cfd7df8b7815fc723c
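Since the full repo is very large, one way to pull down just a single quant is to filter the download with huggingface_hub. The "*Q2_K*" pattern is an assumption about the file naming; verify it against the repo's file list first.

```python
# Sketch: download only the 2-bit GGUF shards of DeepSeek V3 rather
# than the whole repo. The "*Q2_K*" pattern is an assumption about
# the file naming -- check the repo before running.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="unsloth/DeepSeek-V3-GGUF",
    allow_patterns=["*Q2_K*"],   # grab only the 2-bit quant files
    local_dir="DeepSeek-V3-GGUF",
)
```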
Post 1547

I uploaded GGUF, 4-bit bitsandbytes and full 16-bit precision weights for Llama 3.3 70B Instruct. The full collection is here: unsloth/llama-33-all-versions-67535d7d994794b9d7cf5e9f

You can also finetune Llama 3.3 70B in under 48GB of VRAM with Unsloth!

GGUFs: unsloth/Llama-3.3-70B-Instruct-GGUF
BnB 4-bit: unsloth/Llama-3.3-70B-Instruct-bnb-4bit
16-bit: unsloth/Llama-3.3-70B-Instruct
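As a rough illustration, the BnB 4-bit checkpoint loads directly with transformers since the quantization config ships inside the repo; device_map="auto" below is an assumption about having enough GPU memory available.

```python
# Sketch: load the pre-quantized 4-bit Llama 3.3 70B with transformers.
# Assumes bitsandbytes and accelerate are installed; device_map="auto"
# spreads layers across whatever devices you have.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Llama-3.3-70B-Instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```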
Post 1436

Vision finetuning is in 🦥 Unsloth! You can now finetune Llama 3.2, Qwen2 VL, Pixtral and all Llava variants up to 2x faster and with up to 70% less VRAM usage!

Colab to finetune Llama 3.2: https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing
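For reference, vision finetuning follows the same pattern as text finetuning; here's a minimal sketch assuming the FastVisionModel API. The repo name and flags below are assumptions; the Colab has the exact setup.

```python
# Sketch of vision finetuning setup with Unsloth's FastVisionModel.
# The repo name below is an assumption -- see the Colab for the exact one.
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Llama-3.2-11B-Vision-Instruct",
    load_in_4bit=True,              # 4-bit quantization to cut VRAM usage
)

model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers=True,    # train the vision encoder too
    finetune_language_layers=True,  # and the language layers
    r=16,
    lora_alpha=16,
)
# Then train with trl's SFTTrainer plus a vision-aware data collator,
# as shown in the Colab notebook.
```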