Metadata-Version: 2.1
Name: diffusers
Version: 0.30.0.dev0
Summary: State-of-the-art diffusion in PyTorch and JAX.
Home-page: https://github.com/huggingface/diffusers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/diffusers/graphs/contributors)
Author-email: diffusers@huggingface.co
License: Apache 2.0 License
Keywords: deep learning diffusion jax pytorch stable diffusion audioldm
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Requires-Python: >=3.8.0
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: importlib_metadata
Requires-Dist: filelock
Requires-Dist: huggingface-hub>=0.23.2
Requires-Dist: numpy
Requires-Dist: regex!=2019.12.17
Requires-Dist: requests
Requires-Dist: safetensors>=0.3.1
Requires-Dist: Pillow
Provides-Extra: quality
Requires-Dist: urllib3<=2.0.0; extra == "quality"
Requires-Dist: isort>=5.5.4; extra == "quality"
Requires-Dist: ruff==0.1.5; extra == "quality"
Requires-Dist: hf-doc-builder>=0.3.0; extra == "quality"
Provides-Extra: docs
Requires-Dist: hf-doc-builder>=0.3.0; extra == "docs"
Provides-Extra: training
Requires-Dist: accelerate>=0.31.0; extra == "training"
Requires-Dist: datasets; extra == "training"
Requires-Dist: protobuf<4,>=3.20.3; extra == "training"
Requires-Dist: tensorboard; extra == "training"
Requires-Dist: Jinja2; extra == "training"
Requires-Dist: peft>=0.6.0; extra == "training"
Provides-Extra: test
Requires-Dist: compel==0.1.8; extra == "test"
Requires-Dist: GitPython<3.1.19; extra == "test"
Requires-Dist: datasets; extra == "test"
Requires-Dist: Jinja2; extra == "test"
Requires-Dist: invisible-watermark>=0.2.0; extra == "test"
Requires-Dist: k-diffusion>=0.0.12; extra == "test"
Requires-Dist: librosa; extra == "test"
Requires-Dist: parameterized; extra == "test"
Requires-Dist: pytest; extra == "test"
Requires-Dist: pytest-timeout; extra == "test"
Requires-Dist: pytest-xdist; extra == "test"
Requires-Dist: requests-mock==1.10.0; extra == "test"
Requires-Dist: safetensors>=0.3.1; extra == "test"
Requires-Dist: sentencepiece!=0.1.92,>=0.1.91; extra == "test"
Requires-Dist: scipy; extra == "test"
Requires-Dist: torchvision; extra == "test"
Requires-Dist: transformers>=4.41.2; extra == "test"
Provides-Extra: torch
Requires-Dist: torch>=1.4; extra == "torch"
Requires-Dist: accelerate>=0.31.0; extra == "torch"
Provides-Extra: flax
Requires-Dist: jax>=0.4.1; extra == "flax"
Requires-Dist: jaxlib>=0.4.1; extra == "flax"
Requires-Dist: flax>=0.4.1; extra == "flax"
Provides-Extra: dev
Requires-Dist: urllib3<=2.0.0; extra == "dev"
Requires-Dist: isort>=5.5.4; extra == "dev"
Requires-Dist: ruff==0.1.5; extra == "dev"
Requires-Dist: hf-doc-builder>=0.3.0; extra == "dev"
Requires-Dist: compel==0.1.8; extra == "dev"
Requires-Dist: GitPython<3.1.19; extra == "dev"
Requires-Dist: datasets; extra == "dev"
Requires-Dist: Jinja2; extra == "dev"
Requires-Dist: invisible-watermark>=0.2.0; extra == "dev"
Requires-Dist: k-diffusion>=0.0.12; extra == "dev"
Requires-Dist: librosa; extra == "dev"
Requires-Dist: parameterized; extra == "dev"
Requires-Dist: pytest; extra == "dev"
Requires-Dist: pytest-timeout; extra == "dev"
Requires-Dist: pytest-xdist; extra == "dev"
Requires-Dist: requests-mock==1.10.0; extra == "dev"
Requires-Dist: safetensors>=0.3.1; extra == "dev"
Requires-Dist: sentencepiece!=0.1.92,>=0.1.91; extra == "dev"
Requires-Dist: scipy; extra == "dev"
Requires-Dist: torchvision; extra == "dev"
Requires-Dist: transformers>=4.41.2; extra == "dev"
Requires-Dist: accelerate>=0.31.0; extra == "dev"
Requires-Dist: datasets; extra == "dev"
Requires-Dist: protobuf<4,>=3.20.3; extra == "dev"
Requires-Dist: tensorboard; extra == "dev"
Requires-Dist: Jinja2; extra == "dev"
Requires-Dist: peft>=0.6.0; extra == "dev"
Requires-Dist: hf-doc-builder>=0.3.0; extra == "dev"
Requires-Dist: torch>=1.4; extra == "dev"
Requires-Dist: accelerate>=0.31.0; extra == "dev"
Requires-Dist: jax>=0.4.1; extra == "dev"
Requires-Dist: jaxlib>=0.4.1; extra == "dev"
Requires-Dist: flax>=0.4.1; extra == "dev"
🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or training your own diffusion models, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on [usability over performance](https://huggingface.co/docs/diffusers/conceptual/philosophy#usability-over-performance), [simple over easy](https://huggingface.co/docs/diffusers/conceptual/philosophy#simple-over-easy), and [customizability over abstractions](https://huggingface.co/docs/diffusers/conceptual/philosophy#tweakable-contributorfriendly-over-abstraction).

🤗 Diffusers offers three core components:

- State-of-the-art [diffusion pipelines](https://huggingface.co/docs/diffusers/api/pipelines/overview) that can be run in inference with just a few lines of code.
- Interchangeable noise [schedulers](https://huggingface.co/docs/diffusers/api/schedulers/overview) for different diffusion speeds and output quality.
- Pretrained [models](https://huggingface.co/docs/diffusers/api/models/overview) that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems.

## Installation

We recommend installing 🤗 Diffusers in a virtual environment from PyPI or Conda. For more details about installing [PyTorch](https://pytorch.org/get-started/locally/) and [Flax](https://flax.readthedocs.io/en/latest/#installation), please refer to their official documentation.

### PyTorch

With `pip` (official package):

```bash
pip install --upgrade diffusers[torch]
```

With `conda` (maintained by the community):

```sh
conda install -c conda-forge diffusers
```

### Flax

With `pip` (official package):

```bash
pip install --upgrade diffusers[flax]
```

### Apple Silicon (M1/M2) support

Please refer to the [How to use Stable Diffusion in Apple Silicon](https://huggingface.co/docs/diffusers/optimization/mps) guide.

## Quickstart

Generating outputs is super easy with 🤗 Diffusers. To generate an image from text, use the `from_pretrained` method to load any pretrained diffusion model (browse the [Hub](https://huggingface.co/models?library=diffusers&sort=downloads) for 30,000+ checkpoints):

```python
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipeline.to("cuda")
pipeline("An image of a squirrel in Picasso style").images[0]
```

You can also dig into the models and schedulers toolbox to build your own diffusion system:

```python
from diffusers import DDPMScheduler, UNet2DModel
from PIL import Image
import torch

scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda")
scheduler.set_timesteps(50)

sample_size = model.config.sample_size
noise = torch.randn((1, 3, sample_size, sample_size), device="cuda")
input = noise

for t in scheduler.timesteps:
    with torch.no_grad():
        noisy_residual = model(input, t).sample
    prev_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample
    input = prev_noisy_sample

image = (input / 2 + 0.5).clamp(0, 1)
image = image.cpu().permute(0, 2, 3, 1).numpy()[0]
image = Image.fromarray((image * 255).round().astype("uint8"))
image
```

Check out the [Quickstart](https://huggingface.co/docs/diffusers/quicktour) to launch your diffusion journey today!
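Because schedulers are interchangeable, you can also swap the default scheduler of a loaded pipeline at runtime. The snippet below is a minimal sketch (not part of the official quickstart) that replaces the default scheduler of the text-to-image pipeline above with `DPMSolverMultistepScheduler` via `from_config`; the 25-step count is an assumption that typically works well with this solver, not a required value.

```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch

# Load the same text-to-image checkpoint used in the quickstart above.
pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Swap the default scheduler for DPM-Solver++, reusing the existing scheduler config.
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline.to("cuda")

# 25 inference steps is an illustrative choice; tune it for your own speed/quality trade-off.
image = pipeline("An image of a squirrel in Picasso style", num_inference_steps=25).images[0]
```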
## How to navigate the documentation

| **Documentation** | **What can I learn?** |
|---|---|
| [Tutorial](https://huggingface.co/docs/diffusers/tutorials/tutorial_overview) | A basic crash course for learning how to use the library's most important features like using models and schedulers to build your own diffusion system, and training your own diffusion model. |
| [Loading](https://huggingface.co/docs/diffusers/using-diffusers/loading_overview) | Guides for how to load and configure all the components (pipelines, models, and schedulers) of the library, as well as how to use different schedulers. |
| [Pipelines for inference](https://huggingface.co/docs/diffusers/using-diffusers/pipeline_overview) | Guides for how to use pipelines for different inference tasks, batched generation, controlling generated outputs and randomness, and how to contribute a pipeline to the library. |
| [Optimization](https://huggingface.co/docs/diffusers/optimization/opt_overview) | Guides for how to optimize your diffusion model to run faster and consume less memory. |
| [Training](https://huggingface.co/docs/diffusers/training/overview) | Guides for how to train a diffusion model for different tasks with different training techniques. |

## Contribution

We ❤️ contributions from the open-source community! If you want to contribute to this library, please check out our [Contribution guide](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md). You can look out for [issues](https://github.com/huggingface/diffusers/issues) you'd like to tackle to contribute to the library.

- See [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) for general opportunities to contribute
- See [New model/pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) to contribute exciting new diffusion models / diffusion pipelines
- See [New scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22)

Also, say 👋 in our public Discord channel. We discuss the hottest trends about diffusion models, help each other with contributions and personal projects, or just hang out ☕.

## Popular Tasks & Pipelines
Task | Pipeline | 🤗 Hub |
---|---|---|
Unconditional Image Generation | DDPM | google/ddpm-ema-church-256 |
Text-to-Image | Stable Diffusion Text-to-Image | runwayml/stable-diffusion-v1-5 |
Text-to-Image | unCLIP | kakaobrain/karlo-v1-alpha |
Text-to-Image | DeepFloyd IF | DeepFloyd/IF-I-XL-v1.0 |
Text-to-Image | Kandinsky | kandinsky-community/kandinsky-2-2-decoder |
Text-guided Image-to-Image | ControlNet | lllyasviel/sd-controlnet-canny |
Text-guided Image-to-Image | InstructPix2Pix | timbrooks/instruct-pix2pix |
Text-guided Image-to-Image | Stable Diffusion Image-to-Image | runwayml/stable-diffusion-v1-5 |
Text-guided Image Inpainting | Stable Diffusion Inpainting | runwayml/stable-diffusion-inpainting |
Image Variation | Stable Diffusion Image Variation | lambdalabs/sd-image-variations-diffusers |
Super Resolution | Stable Diffusion Upscale | stabilityai/stable-diffusion-x4-upscaler |
Super Resolution | Stable Diffusion Latent Upscale | stabilityai/sd-x2-latent-upscaler |
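
Loading any of these task pipelines follows the same `from_pretrained` pattern as the quickstart. As a rough sketch (not taken from this README), the example below runs the Stable Diffusion Image-to-Image pipeline from the table; the input image URL, prompt, and generation parameters are illustrative assumptions, so substitute any RGB image and settings you like.

```python
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image
import torch

# Load the image-to-image checkpoint listed in the table above.
pipeline = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipeline.to("cuda")

# Any RGB image works as the starting point; this URL is only an illustration.
init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
).resize((768, 512))

# strength controls how much of the input image is preserved (lower = closer to the input).
image = pipeline(
    prompt="A fantasy landscape, trending on artstation",
    image=init_image,
    strength=0.75,
    guidance_scale=7.5,
).images[0]
image.save("fantasy_landscape.png")
```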