---
tags:
  - text-to-image
  - flux
  - lora
  - diffusers
  - template:sd-lora
  - ai-toolkit
widget:
  - text: A person in a bustling cafe monogatari style
    output:
      url: samples/1727740519836__000001700_0.jpg
  - text: A girl with purple hair looking at bluish green sky
    output:
      url: samples/1727740624634__000001700_1.jpg
  - text: A girl with a cap and headphones, smiling at the camera
    output:
      url: samples/1727740729501__000001700_2.jpg
  - text: a girl with purple hair, portrait, background elements, monogatari style
    output:
      url: images/example_tnizfwreg.png
  - text: >-
      girl with yellow hair looking at the moon, railway crossing in the
      background, with traffic lights, highly detailed background, monogatari
      style
    output:
      url: images/example_5mbrwcv0a.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: monogatari style
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# monogatari-style

Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit).
## Trigger words

You should use `monogatari style` to trigger the image generation.
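Because every prompt needs to contain the trigger phrase, a small helper can append it automatically when it is missing. This is a hypothetical convenience function, not part of this repository:

```python
# Trigger phrase from the model card's instance_prompt.
TRIGGER = "monogatari style"

def with_trigger(prompt: str) -> str:
    """Append the trigger phrase unless the prompt already contains it.

    Hypothetical helper; adjust the separator to taste.
    """
    if TRIGGER in prompt:
        return prompt
    return f"{prompt}, {TRIGGER}"

print(with_trigger("A girl with a cap and headphones"))
# A girl with a cap and headphones, monogatari style
```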
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.

Weights for this model are available in Safetensors format. Download them in the **Files & versions** tab.
## Use it with the 🧨 diffusers library

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the FLUX.1-dev base pipeline in bfloat16 on the GPU.
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')

# weight_name must match the .safetensors filename in this repository.
pipeline.load_lora_weights('jayavibhav/monogatari-style', weight_name='monogatari-style.safetensors')

# The prompt includes the trigger phrase "monogatari style".
image = pipeline('A person in a bustling cafe monogatari style').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).