---
title: README
emoji: 🌍
colorFrom: gray
colorTo: purple
sdk: static
pinned: false
---
PrunaAI
----

# 🌍 Join the Pruna AI community!

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.com/invite/rskEr4BZJx)
[![Reddit](https://img.shields.io/reddit/subreddit-subscribers/PrunaAI?style=social)](https://www.reddit.com/r/PrunaAI/)

(The open-source launch of [Pruna AI](https://github.com/PrunaAI) is on March 20th, 2025 🙊 [Munich event](https://lu.ma/xlmd455g) & [Paris event](https://lu.ma/xsm2j7h9) 🇩🇪🇫🇷🇪🇺🌍)

----

# 💜 Simply make AI models faster, cheaper, smaller, greener!

[Pruna AI](https://www.pruna.ai/) makes AI models faster, cheaper, smaller, and greener with the `pruna` package.

- It supports **various models, including CV, NLP, audio, and graph models for predictive and generative AI**.
- It supports **various hardware, including GPU, CPU, and Edge**.
- It supports **various compression algorithms, including quantization, pruning, distillation, caching, recovery, and compilation**, which can be **combined together**.
- You can either **experiment on your own** with smash/compression configurations or **let the smashing/compressing agent** find the optimal configuration **[Pro]**.
- You can **evaluate reliable quality and efficiency metrics** of your base vs. smashed/compressed models.

You can set it up in minutes and compress your first models in a few lines of code!

----

# ⏩ How to get started?
You can smash your own models by installing pruna with:

```
pip install pruna[gpu]==0.1.3 --extra-index-url https://prunaai.pythonanywhere.com/
```

You can start with simple notebooks to experience efficiency gains with:

| Use Case | Free Notebooks |
|----------|----------------|
| **3x Faster Stable Diffusion Models** | ⏩ [Smash for free](https://colab.research.google.com/drive/1BZm6NtCsF2mBV4UYlRlqpTIpTmQgR0iQ?usp=sharing) |
| **Turbocharge Stable Diffusion Video Generation** | ⏩ [Smash for free](https://colab.research.google.com/drive/1m1wvGdXi-qND-2ys0zqAaMFZ9DbMd5jW?usp=sharing) |
| **Making your LLMs 4x smaller** | ⏩ [Smash for free](https://colab.research.google.com/drive/1jQgwhmoPz80qRf5NdRJcY_pAr7Oj5Ftv?usp=sharing) |
| **Blazingly fast Computer Vision Models** | ⏩ [Smash for free](https://colab.research.google.com/drive/1GkzxTQW-2yCKXc8omE6Sa4SxiETMi8yC?usp=sharing) |
| **Smash your model with a CPU only** | ⏩ [Smash for free](https://colab.research.google.com/drive/19iLNVSgbx_IoCgduXPhqKq7rCoxegnZO?usp=sharing) |
| **Transcribe 2 hours of audio in less than 2 minutes with Whisper** | ⏩ [Smash for free](https://colab.research.google.com/drive/1dc6fb8_GD8eshznthBSpGpRu4WPW7xuZ?usp=sharing) |
| **100% faster Whisper Transcription** | ⏩ [Smash for free](https://colab.research.google.com/drive/1kCJ4-xmo7y8VS6smzaV0207A5rONHPXu?usp=sharing) |
| **Flux generation in a heartbeat, literally** | ⏩ [Smash for free](https://colab.research.google.com/drive/18_iG0UXhD7OQR_CxSSsKFC8TLDsRw_9m?usp=sharing) |
| **Run your Flux model without an A100** | ⏩ [Smash for free](https://colab.research.google.com/drive/1i1iSITNgiOpschV-Nu5mfX-effwYV9sn?usp=sharing) |

For more details about installation and tutorials, you can check the [Pruna AI documentation](https://docs.pruna.ai/en/latest/setup/pip.html).

----
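Once installed, smashing a model is a short script. The sketch below assumes the `smash` function and `SmashConfig` class from the `pruna` package; the Stable Diffusion model and the `"deepcache"` cacher setting are illustrative choices, not the only options, and running it requires the model weights and (for this install) a GPU:

```python
# Illustrative sketch of smashing a model with pruna; assumes the
# `smash`/`SmashConfig` API and requires `pruna` plus a diffusers model.
from diffusers import StableDiffusionPipeline

from pruna import SmashConfig, smash

# Load a base model to compress (any supported model works here).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Choose compression algorithms; here a caching algorithm is enabled.
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"

# Smash the model, then use it like the original pipeline.
smashed_pipe = smash(model=pipe, smash_config=smash_config)
image = smashed_pipe("a photo of an astronaut riding a horse").images[0]
```

Because algorithms can be combined, the same `SmashConfig` can carry several entries (e.g. a quantizer alongside a cacher), and you can compare the base and smashed pipelines with the evaluation metrics mentioned above.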