
flux-training

This is a LyCORIS adapter derived from black-forest-labs/FLUX.1-schnell.

The main validation prompt used during training was:

A figurine of a character with green hair, wearing a white shirt, a black vest, and a gray cap, sitting with one hand on their knee and the other hand making a peace sign. The character is wearing a blue pendant and has a gold bracelet. In the background, there are green plants and a tree branch.

Validation settings

  • CFG: 3.0
  • CFG Rescale: 0.0
  • Steps: 20
  • Sampler: None
  • Seed: 42
  • Resolution: 1024x1024

Note: The validation settings are not necessarily the same as the training settings.

You can find some example images in the following gallery:

  • unconditional (blank prompt)
  • A woman dons a yellow outfit with checkered flag pattern accents, basking in the sunlight with people nearby. Emblazoned on the left side of her shirt is the word "PIRELLI", while the right side reads "FACTORY".
  • The image depicts a dark green, textured trash can adorned with a bright sticker on a cobblestone street. The sticker reads "DRUNK DRIVER TARGET," suggesting the trash can is intended for use by inebriated individuals to prevent littering or accidents.
  • A moment during a track relay race where one runner is passing the baton to his teammate; spectators can be seen in the stands, cheering.
  • An anime character is racing down the street while wearing cat ears, a red dress, black shoes, and wearing aviator style sunglasses.
  • A figurine of a character with green hair, wearing a white shirt, a black vest, and a gray cap, sitting with one hand on their knee and the other hand making a peace sign. The character is wearing a blue pendant and has a gold bracelet. In the background, there are green plants and a tree branch.

All validation images were generated with an empty negative prompt.

The text encoder was not trained. You may reuse the base model text encoder for inference.

Training settings

  • Training epochs: 0
  • Training steps: 158000
  • Learning rate: 5e-06
  • Effective batch size: 5
    • Micro-batch size: 1
    • Gradient accumulation steps: 1
    • Number of GPUs: 5
  • Prediction type: flow-matching
  • Rescaled betas zero SNR: False
  • Optimizer: adamw_bf16
  • Precision: Pure BF16
  • Quantised: Yes (int8-quanto)
  • Xformers: Not used
  • LyCORIS Config:
{
    "algo": "lokr",
    "multiplier": 1.0,
    "linear_dim": 1000000,
    "linear_alpha": 1,
    "factor": 2,
    "full_matrix": true,
    "apply_preset": {
        "name_algo_map": {
            "transformer_blocks.[0-7]*": {
                "algo": "lokr",
                "factor": 4,
                "linear_dim": 1000000,
                "linear_alpha": 1,
                "full_matrix": true
            },
            "transformer_blocks.[8-15]*": {
                "algo": "lokr",
                "factor": 5,
                "linear_dim": 1000000,
                "linear_alpha": 1,
                "full_matrix": true
            },
            "transformer_blocks.[16-18]*": {
                "algo": "lokr",
                "factor": 10,
                "linear_dim": 1000000,
                "linear_alpha": 1,
                "full_matrix": true
            },
            "single_transformer_blocks.[0-15]*": {
                "algo": "lokr",
                "factor": 8,
                "linear_dim": 1000000,
                "linear_alpha": 1,
                "full_matrix": true
            },
            "single_transformer_blocks.[16-23]*": {
                "algo": "lokr",
                "factor": 5,
                "linear_dim": 1000000,
                "linear_alpha": 1,
                "full_matrix": true
            },
            "single_transformer_blocks.[24-37]*": {
                "algo": "lokr",
                "factor": 4,
                "linear_dim": 1000000,
                "linear_alpha": 1,
                "use_scalar": true,
                "full_matrix": true
            }
        },
        "use_fnmatch": true
    }
}
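
The preset above assigns a different LoKr factor to each range of transformer blocks (for example, factor 4 for transformer_blocks 0-7 and factor 10 for transformer_blocks 16-18), while the top-level keys supply the defaults. For reference, the effective batch size works out to 1 micro-batch × 1 gradient-accumulation step × 5 GPUs = 5. Below is a minimal sketch of how a config like this can be wired into the lycoris-lora wrapper API for training; the file name lycoris_config.json and the use of FluxTransformer2DModel are assumptions for illustration, and exact keyword handling may vary between lycoris-lora versions.

import json

import torch
from diffusers import FluxTransformer2DModel
from lycoris import LycorisNetwork, create_lycoris

# Load the FLUX.1-schnell transformer that the adapter targets.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", subfolder="transformer", torch_dtype=torch.bfloat16
)

# lycoris_config.json is assumed to contain the JSON shown above.
with open("lycoris_config.json") as f:
    lycoris_config = json.load(f)

# Register the per-block overrides (name_algo_map / use_fnmatch) as a class-level preset.
LycorisNetwork.apply_preset(lycoris_config.pop("apply_preset"))

wrapper = create_lycoris(
    transformer,
    multiplier=lycoris_config.pop("multiplier"),
    linear_dim=lycoris_config.pop("linear_dim"),
    linear_alpha=lycoris_config.pop("linear_alpha"),
    algo=lycoris_config.pop("algo"),
    **lycoris_config,  # remaining keys (factor, full_matrix) are forwarded as-is
)
wrapper.apply_to()  # inject the LoKr modules; only wrapper.parameters() are trained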

Datasets

default_dataset_arb

  • Repeats: 0
  • Total number of images: ~39950
  • Total number of aspect buckets: 6
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: closest

default_dataset_arb2

  • Repeats: 0
  • Total number of images: ~40005
  • Total number of aspect buckets: 6
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: closest

default_dataset_arb3

  • Repeats: 0
  • Total number of images: ~64080
  • Total number of aspect buckets: 6
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: closest

default_dataset_arb4

  • Repeats: 0
  • Total number of images: ~70340
  • Total number of aspect buckets: 6
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: closest

default_dataset_arb5

  • Repeats: 0
  • Total number of images: ~31790
  • Total number of aspect buckets: 6
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: closest

default_dataset_arb6

  • Repeats: 0
  • Total number of images: ~22035
  • Total number of aspect buckets: 5
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: closest

default_dataset_arb7

  • Repeats: 0
  • Total number of images: ~39390
  • Total number of aspect buckets: 6
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: closest

default_dataset

  • Repeats: 0
  • Total number of images: ~121070
  • Total number of aspect buckets: 1
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: square

default_dataset2

  • Repeats: 0
  • Total number of images: ~123100
  • Total number of aspect buckets: 1
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: square

default_dataset3

  • Repeats: 0
  • Total number of images: ~64355
  • Total number of aspect buckets: 1
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: square

default_dataset4

  • Repeats: 0
  • Total number of images: ~69970
  • Total number of aspect buckets: 1
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: square

default_dataset5

  • Repeats: 0
  • Total number of images: ~32300
  • Total number of aspect buckets: 1
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: square

default_dataset6

  • Repeats: 0
  • Total number of images: ~22190
  • Total number of aspect buckets: 1
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: square

default_dataset7

  • Repeats: 0
  • Total number of images: ~39560
  • Total number of aspect buckets: 1
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: random
  • Crop aspect: square
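
Every dataset above is bucketed at a 1.048576-megapixel budget, i.e. 1024 × 1024 pixels, and cropped randomly. The default_dataset* entries force a square crop (hence a single aspect bucket), while the *_arb entries keep the closest aspect bucket, which is why they report five or six buckets. The following is a minimal sketch of the square variant of that preprocessing, assuming Pillow; the resize policy and file names are illustrative, not the trainer's exact behaviour.

import random

from PIL import Image

TARGET = 1024  # 1024 * 1024 px = 1,048,576 px = 1.048576 megapixels

def random_square_crop(path: str, target: int = TARGET) -> Image.Image:
    img = Image.open(path).convert("RGB")
    # Scale so the shorter edge matches the target, preserving aspect ratio.
    scale = target / min(img.size)
    img = img.resize(
        (round(img.width * scale), round(img.height * scale)),
        Image.Resampling.LANCZOS,
    )
    # "Crop style: random": pick a random window of the target size.
    left = random.randint(0, img.width - target)
    top = random.randint(0, img.height - target)
    return img.crop((left, top, left + target, top + target))

sample = random_square_crop("example.jpg")  # hypothetical input image
sample.save("example_cropped.png")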

Inference

import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights

model_id = 'black-forest-labs/FLUX.1-schnell'
adapter_id = 'pytorch_lora_weights.safetensors' # you will have to download this manually
lora_scale = 1.0

# Load the base pipeline, then merge the LyCORIS weights into its transformer.
pipeline = DiffusionPipeline.from_pretrained(model_id)
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_id, pipeline.transformer)
wrapper.merge_to()

prompt = "A figurine of a character with green hair, wearing a white shirt, a black vest, and a gray cap, sitting with one hand on their knee and the other hand making a peace sign. The character is wearing a blue pendant and has a gold bracelet. In the background, there are green plants and a tree branch."

device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)
image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(1641421826),
    width=1024,
    height=1024,
    guidance_scale=3.0,
).images[0]
image.save("output.png", format="PNG")
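
Optionally, since training quantised the base model with int8-quanto, you can quantise the merged transformer the same way at inference to reduce VRAM. This is a minimal sketch assuming optimum-quanto is installed; apply it after wrapper.merge_to() and before pipeline.to(device).

from optimum.quanto import freeze, qint8, quantize

quantize(pipeline.transformer, weights=qint8)  # int8 weights, mirroring the training setting
freeze(pipeline.transformer)                   # materialise the quantised weights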