---
license: mit
base_model: warp-ai/wuerstchen-prior
datasets:
- dongOi071102/meme-pretreatment-dataset
tags:
- wuerstchen
- text-to-image
- diffusers
- diffusers-training
- lora
inference: true
---

# LoRA Finetuning - dongOi071102/wuerstchen-prior-meme-lora-4

This pipeline was finetuned from **warp-ai/wuerstchen-prior** on the **dongOi071102/meme-pretreatment-dataset** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: 'a catton cat with angry face':

![val_imgs_grid](./val_imgs_grid.png)

## Pipeline usage

You can use the pipeline like so:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the full Wuerstchen text-to-image pipeline.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "warp-ai/wuerstchen", torch_dtype=torch.float32
)

# Load the LoRA weights into the prior pipeline.
pipeline.prior_pipe.load_lora_weights("dongOi071102/wuerstchen-prior-meme-lora-4")

prompt = "a catton cat with angry face"
image = pipeline(prompt=prompt).images[0]
image.save("my_image.png")
```

## Training info

These are the key hyperparameters used during training:

* LoRA rank: 4
* Epochs: 100
* Learning rate: 0.0001
* Batch size: 8
* Gradient accumulation steps: 1
* Image resolution: 512
* Mixed-precision: fp16

More information on all the CLI arguments and the environment is available on the [`wandb` run page](https://wandb.ai/2111818-no/text2image-fine-tune/runs/xmsqasc9).
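
Since the run used fp16 mixed precision, running the pipeline in half precision on a GPU is a natural fit for inference as well. The sketch below is a variant of the usage example above; the fp16 dtype and the CUDA device are assumptions for inference, not something the training configuration requires:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Half-precision inference on GPU; fp16 here mirrors the mixed precision
# used during training but is an assumption for inference.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "warp-ai/wuerstchen", torch_dtype=torch.float16
).to("cuda")
pipeline.prior_pipe.load_lora_weights("dongOi071102/wuerstchen-prior-meme-lora-4")

# The validation prompt shown above.
image = pipeline(prompt="a catton cat with angry face").images[0]
image.save("my_image_fp16.png")
```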