The Pokeball Machine
The Pokeball Machine is a DreamBooth model for the pokeball concept, represented by the `pkblz` identifier. It belongs to the wildcard theme and was fine-tuned from the `CompVis/stable-diffusion-v1-4` checkpoint on a small dataset of pokeball images (i.e., images of the original red-and-white pokeball). It can be used by modifying the `instance_prompt`, e.g. `a pkblz ball in the middle of a miniature jungle`.
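All prompts follow the same pattern: the `pkblz` token simply stands in for the concept inside an otherwise ordinary Stable Diffusion prompt. A minimal illustration (the prompt strings below are taken from the output examples, not an exhaustive list):

```python
# Illustrative only: the concept identifier is just a token embedded in ordinary prompts
identifier = "pkblz"
prompts = [
    f"a {identifier} ball in the middle of a miniature jungle",
    f"a watercolor photo of a {identifier} ball",
    f"a steampunk {identifier} ball, trending on artstation",
]
```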
This model was created as part of the DreamBooth Hackathon 🔥. Visit the organisation page for instructions on how to take part!
Fine-Tuning Details
- Number of training images: 31
- Learning rate: 2e-06
- Training steps: 800
- Guidance scale: 10
- Inference steps: 50-75
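This card does not include the training script itself. As a rough, hypothetical sketch, a run with these hyperparameters would correspond to an invocation of the diffusers DreamBooth example script (`examples/dreambooth/train_dreambooth.py`) along the following lines; the data directory, instance prompt wording, resolution, and batch size are assumptions rather than values from this card:

```python
# Hypothetical reproduction of the fine-tuning run via the diffusers DreamBooth
# example script. Paths, the instance prompt wording, resolution, and batch size
# are assumptions; only the base checkpoint, learning rate, and step count come
# from the details above.
import subprocess

subprocess.run(
    [
        "accelerate", "launch", "train_dreambooth.py",
        "--pretrained_model_name_or_path", "CompVis/stable-diffusion-v1-4",
        "--instance_data_dir", "./pokeball-images",       # the 31 training images
        "--instance_prompt", "a photo of a pkblz ball",   # assumed wording
        "--resolution", "512",
        "--train_batch_size", "1",
        "--learning_rate", "2e-06",
        "--max_train_steps", "800",
        "--output_dir", "./pokeball-machine",
    ],
    check=True,
)
```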
Output Examples
Prompts used for the sample outputs (one generated image per prompt; the image grid is not reproduced here):

| | | |
|---|---|---|
| a blueprint photo of a pkblz ball | a photo of a cybernetic pkblz ball, wide shot | a photo of a pkblz ball in the style vintage disney |
| a photo of a mosaic pkblz ball lying in an antique temple | a photo of a detailed ornate pkblz ball | a pkblz ball underwater |
| a pkblz ball in the middle of a miniature jungle | a pkblz ball underwater | a mystic pkblz ball, trending on artstation |
| a pkblz ball underwater, trending on artstation | a wooden pkblz ball | a pkblz ball hovering over a pond |
| a pkblz ball on a sunny tropical beach | a steampunk pkblz ball, trending on artstation | a colored pencil sketch of a pkblz ball |
| a photo of a spectral ornate pkblz ball, trending on artstation, realistic | a sunset photo of a pkblz ball | a watercolor photo of a pkblz ball |
Usage
```python
from diffusers import StableDiffusionPipeline
import torch

# Run on GPU if available, otherwise fall back to CPU
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Load the fine-tuned DreamBooth pipeline from the Hub
pipeline = StableDiffusionPipeline.from_pretrained('simonschoe/pokeball-machine').to(device)

# Any prompt containing the `pkblz` identifier works here
prompt = "a pkblz ball in the middle of a miniature jungle"

image = pipeline(
    prompt,
    num_inference_steps=50,   # 50-75 steps work well for this model
    guidance_scale=10,
    num_images_per_prompt=1
).images[0]

image  # displays the PIL image in a notebook; use image.save(...) to write it to disk
```
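For reproducible outputs, a seeded `torch.Generator` can be passed to the pipeline call via the standard diffusers `generator` argument. A minimal sketch; the seed value and output filename are arbitrary:

```python
# Sketch: seeded generation for reproducible results (seed value is arbitrary)
generator = torch.Generator(device=device).manual_seed(42)

image = pipeline(
    "a steampunk pkblz ball, trending on artstation",
    num_inference_steps=50,
    guidance_scale=10,
    generator=generator,
).images[0]
image.save("steampunk_pokeball.png")
```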