---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
widget:
- text: >-
a bustling manga street, devoid of vehicles, detailed with vibrant colors
and dynamic line work, characters in the background adding life and
movement, under a soft golden hour light, with rich textures and a lively
atmosphere, high resolution, sharp focus
output:
url: images/example_v9pjueoq1.png
- text: >-
a boat in the canals of Venice, painted in gouache with soft, flowing
brushstrokes and vibrant, translucent colors, capturing the serene
reflection on the water under a misty ambiance, with rich textures and a
dynamic perspective
output:
url: images/example_jx5b3cugc.png
- text: >-
A vibrant orange poppy flower, enclosed in an ornate golden frame, against a
black backdrop, rendered in anime style with bold outlines, exaggerated
details, and a dramatic chiaroscuro lighting.
output:
url: images/example_tphrlr123.png
- text: >-
Armored armadillo, detailed anatomy, precise shading, labeled diagram,
cross-section, high resolution.
output:
url: images/example_5cml5u298.png
- text: A photographic photo of a hedgehog in a forest 4k
output:
url: images/example_9tr56cjcn.png
- text: >-
Grainy shot of a robot cooking in the kitchen, with soft shadows and
nostalgic film texture.
output:
url: images/example_brq7cz6kd.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
datasets:
- data-is-better-together/image-preferences-results-binarized
- data-is-better-together/open-image-preferences-v1-results
---
# Flux DreamBooth LoRA - data-is-better-together/image-preferences-flux-dev-lora
<Gallery />
## Model description
These are `davidberenstein1957/image-preferences-flux-dev-lora` DreamBooth LoRA weights for `black-forest-labs/FLUX.1-dev`.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
LoRA for the text encoder was not enabled.
## Trigger words
Use one of the following style trigger words in your prompt to steer generation: `Cinematic`, `Photographic`, `Anime`, `Manga`, `Digital art`, `Pixel art`, `Fantasy art`, `Neonpunk`, `3D Model`, `Painting`, `Animation`, `Illustration`.
## Download model
[Download the `*.safetensors` LoRA](https://huggingface.co/davidberenstein1957/image-preferences-flux-dev-lora/tree/main) from the Files & versions tab.
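Alternatively, the weights file can be fetched programmatically. A minimal sketch using `huggingface_hub` (the repo id matches the diffusers snippet below; `hf_hub_download` returns the local cache path):
```py
from huggingface_hub import hf_hub_download

# Download the LoRA weights file from the Hub into the local cache
lora_path = hf_hub_download(
    repo_id="davidberenstein1957/image-preferences-flux-dev-lora",
    filename="pytorch_lora_weights.safetensors",
)
print(lora_path)  # local path to the downloaded .safetensors file
```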
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base model and attach the LoRA weights
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipeline.load_lora_weights("davidberenstein1957/image-preferences-flux-dev-lora", weight_name="pytorch_lora_weights.safetensors")

# Include one of the style trigger words (here: Manga) in the prompt
image = pipeline("a bustling Manga street, detailed with vibrant colors and dynamic line work, under a soft golden hour light").images[0]
```
For more details, including weighting, merging, and fusing LoRAs, see the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
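As a minimal sketch of those options (assuming a recent diffusers release with the PEFT backend installed; the adapter name and the 0.8 scale are illustrative choices, not recommended values):
```py
# Sketch only: load the LoRA under a named adapter on a freshly created pipeline,
# then either scale it at runtime or fuse it into the base weights.
pipeline.load_lora_weights(
    "davidberenstein1957/image-preferences-flux-dev-lora",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="image_preferences",  # adapter name chosen for illustration
)

# Option A: keep the LoRA separate and down-weight its influence at runtime
pipeline.set_adapters(["image_preferences"], adapter_weights=[0.8])

# Option B: merge the LoRA into the base weights for slightly faster inference
# pipeline.fuse_lora(lora_scale=0.8)
```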
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
Load the LoRA on top of `black-forest-labs/FLUX.1-dev` with 🧨 diffusers as shown above, and include one of the style trigger words in your prompt.
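A self-contained sketch with explicit, reproducible settings follows; the step count, guidance scale, and seed are illustrative choices, not the settings used to produce the gallery images.
```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights(
    "davidberenstein1957/image-preferences-flux-dev-lora",
    weight_name="pytorch_lora_weights.safetensors",
)

# "Photographic" is one of the style trigger words; sampler settings below are illustrative.
image = pipeline(
    "A Photographic photo of a hedgehog in a forest, 4k",
    num_inference_steps=28,
    guidance_scale=3.5,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("hedgehog.png")
```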
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
These LoRA weights were trained with the [Flux DreamBooth LoRA trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md) from 🧨 diffusers on the image-preference datasets listed in the model card metadata (`data-is-better-together/image-preferences-results-binarized` and `data-is-better-together/open-image-preferences-v1-results`). LoRA was not applied to the text encoder.