license: other
license_name: fair-ai-public-license-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
  - en
base_model:
  - Laxhar/noobai-XL_v1.0
pipeline_tag: text-to-image
tags:
  - safetensors
  - diffusers
  - stable-diffusion
  - stable-diffusion-xl
  - art
library_name: diffusers

NoobAI XL V-Pred 0.5

Model Introduction

This image generation model, based on Laxhar/noobai-XL_v1.0, leverages the full Danbooru and e621 datasets with native tags and natural-language captioning.

Implemented as a v-prediction model (distinct from eps-prediction), it requires specific parameter configurations, detailed in the sections below.
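For reference, v-prediction (Salimans & Ho, 2022, "Progressive Distillation for Fast Sampling of Diffusion Models") trains the network to predict the velocity

$$v = \alpha_t \, \epsilon - \sigma_t \, x_0, \qquad \text{where } x_t = \alpha_t \, x_0 + \sigma_t \, \epsilon,$$

rather than the noise $\epsilon$ that eps-prediction models output, which is why samplers and schedulers must be explicitly configured for it.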

Special thanks to our teammate euge for the coding work, and to the many helpful community members for their technical support.

⚠️ IMPORTANT NOTICE ⚠️

THIS MODEL WORKS DIFFERENTLY FROM EPS-PREDICTION MODELS!

PLEASE READ THE GUIDE CAREFULLY!

Model Details

  • Developed by: Laxhar Lab
  • Model Type: Diffusion-based text-to-image generative model
  • Fine-tuned from: Laxhar/noobai-XL_v1.0
  • Sponsored by: Lanyun Cloud

How to Use the Model

Method I: reForge

  1. Install reForge by following the instructions in its repository.
  2. Switch to the dev_upstream_experimental branch by running git checkout dev_upstream_experimental.
  3. Launch the reForge WebUI.
  4. Find the "Advanced Model Sampling for Forge" accordion at the bottom of the "txt2img" tab.
  5. Enable "Enable Advanced Model Sampling".
  6. Select "v_prediction" in the "Discrete Sampling Type" checkbox group.
  7. Generate images!

Method II: ComfyUI

Sample workflow with nodes:

comfy_ui_workflow_sample

Method III: WebUI

Note that the dev branch is not stable and may contain bugs.

  1. (If you haven't installed WebUI) Clone the repository:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
  2. Switch to the dev branch:
git switch dev
  3. Pull the latest updates:
git pull
  4. Launch WebUI and use the model as usual.

Method IV: Diffusers

import torch
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerDiscreteScheduler

# Load the model from a single .safetensors checkpoint.
ckpt_path = "/path/to/model.safetensors"
pipe = StableDiffusionXLPipeline.from_single_file(
    ckpt_path,
    use_safetensors=True,
    torch_dtype=torch.float16,
)
# This is a v-prediction model: the scheduler must interpret the UNet output
# as velocity and rescale the noise schedule to zero terminal SNR.
scheduler_args = {"prediction_type": "v_prediction", "rescale_betas_zero_snr": True}
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, **scheduler_args)
pipe.enable_xformers_memory_efficient_attention()
pipe = pipe.to("cuda")

prompt = r"""masterpiece, best quality, artist:john_kafka, artist:nixeu, artist:quasarcake, chromatic aberration, film grain, horror \(theme\), limited palette, x-shaped pupils, high contrast, color contrast, cold colors, arlecchino \(genshin impact\), black theme, gritty, graphite \(medium\)"""
negative_prompt = "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro"

# Recommended settings: Euler sampler, 28-35 steps, CFG 4-5, ~1024x1024 total area.
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    num_inference_steps=28,
    guidance_scale=5,
    generator=torch.Generator().manual_seed(42),
).images[0]

image.save("output.png")
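A note on the scheduler settings above: prediction_type="v_prediction" tells the scheduler to treat the UNet output as velocity rather than noise, while rescale_betas_zero_snr=True rescales the noise schedule to zero terminal SNR, which v-prediction sampling generally expects. EulerDiscreteScheduler matches the Euler sampler recommended in the settings below.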

Note: Please make sure Git is installed and your environment is properly configured on your machine.


Recommended Settings

Parameters

  • CFG: 4 ~ 5
  • Steps: 28 ~ 35
  • Sampling Method: Euler (⚠️ Other samplers will not work properly)
  • Resolution: Total area around 1024x1024. Best to choose from: 768x1344, 832x1216, 896x1152, 1024x1024, 1152x896, 1216x832, 1344x768 (a bucket-picking helper is sketched below)
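If you script generation, a small helper like the following (hypothetical, not part of any library) snaps a desired aspect ratio to the nearest recommended bucket:

# Hypothetical helper: pick the recommended bucket closest to a target aspect ratio.
RECOMMENDED_RESOLUTIONS = [
    (768, 1344), (832, 1216), (896, 1152), (1024, 1024),
    (1152, 896), (1216, 832), (1344, 768),
]

def closest_bucket(target_ratio: float) -> tuple[int, int]:
    """Return the (width, height) pair whose aspect ratio is nearest to target_ratio."""
    return min(RECOMMENDED_RESOLUTIONS, key=lambda wh: abs(wh[0] / wh[1] - target_ratio))

print(closest_bucket(2 / 3))  # portrait 2:3 -> (832, 1216)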

Prompts

  • Prompt Prefix:
masterpiece, best quality, newest, absurdres, highres, safe,
  • Negative Prompt:
nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro

Usage Guidelines

Caption

<1girl/1boy/1other/...>, <character>, <series>, <artists>, <special tags>, <general tags>, <other tags>
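As a hypothetical illustration (character, series, and artist names are taken from the sample prompt in the Diffusers section), a caption assembled in this order might look like:

# Hypothetical caption assembly following the recommended tag order.
parts = [
    "1girl",                              # subject count
    "arlecchino (genshin impact)",        # character
    "genshin impact",                     # series
    "artist:nixeu",                       # artists
    "masterpiece, best quality, newest",  # special tags (quality, date)
    "black theme, high contrast",         # general tags
    "chromatic aberration, film grain",   # other tags
]
caption = ", ".join(parts)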

Quality Tags

For quality tags, we evaluated image popularity through the following process:

  • Data normalization based on various sources and ratings.
  • Application of time-based decay coefficients according to date recency.
  • Ranking of images within the entire dataset based on this processing.

Our ultimate goal is to ensure that quality tags effectively track recent user preferences; a rough sketch of the resulting thresholding appears after the table below.

Percentile Range     Quality Tag
> 95th               masterpiece
> 85th, <= 95th      best quality
> 60th, <= 85th      good quality
> 30th, <= 60th      normal quality
<= 30th              worst quality
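As a rough sketch of this pipeline (the actual normalization sources and decay coefficients are not published, and the half-life below is an invented placeholder), the table's thresholds map popularity percentiles to tags like this:

import numpy as np

def quality_tags(raw_scores: np.ndarray, age_days: np.ndarray,
                 half_life_days: float = 365.0) -> list[str]:
    # Time-based decay: older images count for less (placeholder half-life).
    decayed = raw_scores * 0.5 ** (age_days / half_life_days)
    # Rank each image within the whole dataset and convert ranks to percentiles.
    percentiles = 100.0 * decayed.argsort().argsort() / (len(decayed) - 1)
    # Thresholds from the table above.
    def tag(p: float) -> str:
        if p > 95: return "masterpiece"
        if p > 85: return "best quality"
        if p > 60: return "good quality"
        if p > 30: return "normal quality"
        return "worst quality"
    return [tag(p) for p in percentiles]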

Aesthetic Tags

Tag                Description
very awa           Top 5% of images by aesthetic score from waifu-scorer
worst aesthetic    Bottom 5% of images by aesthetic score from both waifu-scorer and aesthetic-shadow-v2
...                ...

Date Tags

There are two types of date tags: year tags and period tags. For year tags, use the year xxxx format, e.g., year 2021. For period tags, please refer to the following table (a small mapping helper is sketched after it):

Year Range    Period Tag
2005-2010     old
2011-2014     early
2014-2017     mid
2018-2020     recent
2021-2024     newest
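A small helper mirroring the table (note that the table lists 2014 in both the "early" and "mid" ranges; this sketch assigns it to "mid", which is an assumption):

# Sketch: year and period date tags following the table above.
def period_tag(year: int) -> str:
    # Years before 2005 fall outside the table and are lumped into "old" here.
    if year <= 2010:
        return "old"
    if year <= 2013:
        return "early"   # table row 2011-2014
    if year <= 2017:
        return "mid"     # table row 2014-2017; 2014 overlap resolved to "mid" (assumption)
    if year <= 2020:
        return "recent"
    return "newest"

def date_tags(year: int) -> list[str]:
    return [f"year {year}", period_tag(year)]

print(date_tags(2021))  # ['year 2021', 'newest']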

Dataset

  • The latest Danbooru images up to the training date (approximately before 2024-10-23)
  • E621 images from the e621-2024-webp-4Mpixel dataset on Hugging Face

Communication

Model License

This model's license inherits the fair-ai-public-license-1.0-sd from https://huggingface.co/OnomaAIResearch/Illustrious-xl-early-release-v0 and adds the following terms. Any use of this model and its variants is bound by this license.

I. Usage Restrictions

  • Prohibited use for harmful, malicious, or illegal activities, including but not limited to harassment, threats, and spreading misinformation.
  • Prohibited generation of unethical or offensive content.
  • Prohibited violation of laws and regulations in the user's jurisdiction.

II. Commercial Prohibition

We prohibit any form of commercialization, including but not limited to monetization or commercial use of the model, derivative models, or model-generated products.

III. Open Source Community

To foster a thriving open-source community, users MUST comply with the following requirements:

  • Open-source derivative models, merged models, LoRAs, and products based on the above models.
  • Share work details such as synthesis formulas, prompts, and workflows.
  • Follow the fair-ai-public-license to ensure derivative works remain open source.

IV. Disclaimer

This model and its derivatives may produce unexpected or harmful outputs. Users must assume all risks and potential consequences of usage.

Participants and Contributors

Participants

Contributors