---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: >-
a 3D model of an orange robot with yellow eyes against a white background.
DummyCars
output:
url: samples/1729592878029__000001000_0.jpg
- text: >-
a man driving a car with a robot in the passenger seat. The car is
surrounded by trees and the sky is visible at the top of the image.
DummyCars
output:
url: samples/1729592894794__000001000_1.jpg
- text: >-
a man in an orange suit riding a red motorcycle on a road surrounded by
grass, a wall, and a building in the background. DummyCars
output:
url: samples/1729592911564__000001000_2.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: DummyCars
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# dummy
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
## Trigger words

You should use `DummyCars` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.

Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
## Use it with the 🧨 diffusers library

```python
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in bfloat16 and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')

# Load the LoRA weights from this repository
pipeline.load_lora_weights('life/dummy', weight_name='dummy.safetensors')

# Generate an image; include the trigger word "DummyCars" in the prompt
image = pipeline('a 3D model of an orange robot with yellow eyes against a white background. DummyCars').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).