---
title: README
emoji: 🌍
colorFrom: indigo
colorTo: indigo
sdk: static
pinned: false
---

# ZeroGPU Spaces

*ZeroGPU* is a new kind of hardware for Spaces. It has two goals:

- Provide **free GPU access** for Spaces
- Allow Spaces to run on **multiple GPUs**

This is achieved by making Spaces efficiently hold and release GPUs as needed (as opposed to a classical GPU Space, which holds exactly one GPU at any point in time).

ZeroGPU uses _Nvidia A10G_ GPU devices under the hood.

# Compatibility

*ZeroGPU* Spaces should mostly be compatible with any PyTorch-based GPU Space.
Compatibility with high-level HF libraries like `transformers` or `diffusers` is more strongly guaranteed.
That said, ZeroGPU Spaces are not as broadly compatible as classical GPU Spaces, and you might still encounter unexpected bugs.

Also, for now, ZeroGPU Spaces only work with the **Gradio SDK**.

Supported versions:

- Gradio: 4+
- PyTorch: [`2.0.0`, `2.0.1`]
- Python: `3.10.11`

# Usage

In order to make your Space work with ZeroGPU, you need to **decorate** the Python functions that actually require a GPU with `@spaces.GPU`.
While a decorated function is being invoked, the Space is allocated a GPU, and the GPU is released when the function completes.
Here is a practical example:

```diff
+import spaces
 import gradio as gr
 from diffusers import DiffusionPipeline

 pipe = DiffusionPipeline.from_pretrained(...)
 pipe.to('cuda')

+@spaces.GPU
 def generate(prompt):
     return pipe(prompt).images

 gr.Interface(
     fn=generate,
     inputs=gr.Text(),
     outputs=gr.Gallery(),
 ).launch()
```

1. We first `import spaces` (importing it first might prevent some issues, but is not mandatory)
2. Then we decorate the `generate` function by adding a `@spaces.GPU` line before its definition

Note that `@spaces.GPU` is effect-free and can safely be used in non-ZeroGPU environments (see the sketch at the end of this page).

# Early access

Feel free to join this organization if you want to try ZeroGPU as a Space author. ✋

We should accept you shortly after checking your HF profile.
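
To illustrate the note above that `@spaces.GPU` is effect-free outside ZeroGPU, here is a minimal sketch of the same `generate` function run as a plain local script. The model id, prompt, output filename, and device-selection logic are illustrative assumptions, not part of this page:

```python
# Minimal sketch: outside a ZeroGPU Space, @spaces.GPU is a no-op,
# so the decorated function runs as if it were undecorated.
# Assumptions: local PyTorch install, optional CUDA GPU,
# placeholder model id and prompt.
import spaces
import torch
from diffusers import DiffusionPipeline

device = 'cuda' if torch.cuda.is_available() else 'cpu'

pipe = DiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5')  # placeholder model
pipe.to(device)

@spaces.GPU  # no effect locally; only requests a GPU inside a ZeroGPU Space
def generate(prompt):
    return pipe(prompt).images

if __name__ == '__main__':
    images = generate('an astronaut riding a horse')  # placeholder prompt
    images[0].save('output.png')
```

The same decorated code therefore works unchanged whether it runs on ZeroGPU, on a classical GPU Space, or on a local machine.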