Karthika Rajagopal R S

KarthikaRajagopal
AI & ML interests

NLP, reinforcement learning, Generative AI

Recent Activity

Organizations

Stanford AI · AI FILMS · MusicAI · Open-Source AI Meetup · lora concepts library · Keras Dreambooth Event · Stable Diffusion Dreambooth Concepts Library · LocalLLaMA · MLX Community · Paris AI Running Club · Stable Diffusion Community (Unofficial, Non-profit)

KarthikaRajagopal's activity

reacted to sayakpaul's post with ❤️ 6 days ago
It's been a while since we shipped native quantization support in diffusers 🧨

We currently support bitsandbytes as the official backend, but using others like torchao is already very simple.

This post is just a reminder of what's possible:

1. Loading a model with a quantization config
2. Saving a model with quantization config
3. Loading a pre-quantized model
4. enable_model_cpu_offload()
5. Training and loading LoRAs into quantized checkpoints

Docs:
https://huggingface.co/docs/diffusers/main/en/quantization/bitsandbytes