AkashicPulse v1.0

AkashicPulse is a finetune of RouWei (Minthy/RouWei-0.6), an Illustrious-based model.

The model went through one merging step and three finetuning steps to make sure it delivers stunning results that stand apart from the competition.

Recommended settings:

  • Sampling: Euler a

  • Steps: 20-30; the sweet spot is 28.

  • CFG: 4-10; the sweet spot is 7.

  • [Optional] On reForge or ComfyUI, enable MaHiRo CFG.
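In diffusers terms, "Euler a" corresponds to the Euler Ancestral scheduler, and the steps/CFG recommendations map to `num_inference_steps` and `guidance_scale`. A minimal setup sketch, assuming the checkpoint loads as an SDXL pipeline (Illustrious models are SDXL-based); the local filename below is a placeholder, not necessarily the actual file name in the repo:

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

# Load the checkpoint as an SDXL pipeline (placeholder filename).
pipe = StableDiffusionXLPipeline.from_single_file(
    "AkashicPulse-v1.0.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

# "Euler a" corresponds to the Euler Ancestral scheduler in diffusers.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
```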

Recommended prompting format:

  • Prompt: [1girl/1boy], [character name], [series], by [artist name], [the rest of the prompt], masterpiece, best quality

  • Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, signature, watermark, username, blurry, [the rest of the negative prompt]
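Continuing the sketch above, the prompt and negative prompt follow the recommended template; the bracketed parts are placeholders to replace with actual tags:

```python
# Fill the bracketed placeholders before generating.
prompt = (
    "1girl, [character name], [series], by [artist name], "
    "[the rest of the prompt], masterpiece, best quality"
)
negative_prompt = (
    "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, "
    "fewer digits, cropped, worst quality, low quality, signature, watermark, "
    "username, blurry"
)

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=28,  # sweet spot of the recommended 20-30 range
    guidance_scale=7.0,      # sweet spot of the recommended 4-10 range
).images[0]
image.save("akashicpulse_sample.png")
```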

Training Process (the Min-SNR/Huber loss, multires noise, and noise offset tricks mentioned below are sketched after this list):

  • Step 1:

    • Applying the CyberFix treatment to RouWei (the merging step).

  • Step 2:

    • Training a new concept

      • Dataset size: ~10,000 images

      • GPU: 2xA100 80GB

      • Optimizer: AdaFactor

      • Unet Learning Rate: 7.5e-6

      • Text Encoder Learning Rate: 3.75e-6

      • Batch Size: 16

      • Gradient Accumulation: 3

      • Warmup steps: 2 * 100 steps

      • Min SNR: 5

      • Epochs: 10

      • Random Cropping: True

      • Loss: Huber

      • Huber Schedule: SNR

  • Step 3:

    • Finetuning I

      • Dataset size: ~4,500 images

      • GPU: 1xA100 80GB

      • Optimizer: AdaFactor

      • Unet Learning Rate: 3e-6

      • Text Encoder Learning Rate: N/A

      • Batch Size: 16

      • Gradient Accumulation: 3

      • Warmup steps: 5%

      • Min SNR: 5

      • Epochs: 15

      • Random Cropping: True

      • Loss: Huber

      • Huber Schedule: SNR

      • Multires Noise Iteration: 8

  • Step 4:

    • Finetuning II

      • Dataset size: ~4,500 images

      • GPU: 1xA100 80GB

      • Optimizer: AdaFactor

      • Unet Learning Rate: 3e-6

      • Text Encoder Learning Rate: N/A

      • Batch Size: 48

      • Gradient Accumulation: 1

      • Warmup steps: 5%

      • Min SNR: 5

      • Epochs: 15

      • Loss: L2

      • Noise Offset: 0.0357
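The "Min SNR: 5", "Loss: Huber", and "Huber Schedule: SNR" entries in Steps 2 and 3 refer to two common diffusion-training tricks: Min-SNR-gamma reweights each sample's loss by min(SNR, gamma)/SNR, and a Huber (smooth L1) loss replaces plain MSE for robustness to outliers, with its transition point scheduled from the timestep's SNR. A rough PyTorch sketch of the weighting under those assumptions (the card does not state the training framework, so the `huber_delta` handling below is illustrative only):

```python
import torch
import torch.nn.functional as F

def min_snr_huber_loss(model_pred, target, snr, gamma=5.0, huber_delta=0.1):
    """Huber loss per sample, reweighted with the Min-SNR-gamma scheme.

    snr:         per-sample signal-to-noise ratio of the sampled timesteps
    gamma:       the "Min SNR: 5" value from the recipe
    huber_delta: Huber transition point; in the recipe it is scheduled from
                 the SNR ("Huber Schedule: SNR"), which is only gestured at here
    """
    # Smooth L1 (Huber) loss, averaged over all non-batch dimensions.
    loss = F.smooth_l1_loss(model_pred, target, beta=huber_delta, reduction="none")
    loss = loss.mean(dim=list(range(1, loss.ndim)))

    # Min-SNR-gamma weighting for epsilon prediction: min(SNR, gamma) / SNR.
    weight = torch.minimum(snr, torch.full_like(snr, gamma)) / snr
    return (weight * loss).mean()
```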
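"Multires Noise Iteration: 8" in Step 3 refers to multi-resolution (pyramid) noise: Gaussian noise sampled at progressively coarser resolutions is upsampled and summed so the model also learns low-frequency image structure. A rough sketch of the idea (the discount factor, interpolation mode, and renormalization are assumptions, not the exact implementation used for this model):

```python
import torch
import torch.nn.functional as F

def multires_noise(latents, iterations=8, discount=0.3):
    """Pyramid noise: base noise plus discounted noise from coarser scales."""
    b, c, h, w = latents.shape
    noise = torch.randn_like(latents)
    for i in range(1, iterations):
        scale = 2 ** i
        if h // scale < 1 or w // scale < 1:
            break  # stop once the coarse grid would vanish
        coarse = torch.randn(b, c, h // scale, w // scale,
                             device=latents.device, dtype=latents.dtype)
        noise = noise + F.interpolate(coarse, size=(h, w), mode="nearest") * (discount ** i)
    # Rescale so the combined noise stays roughly unit-variance.
    return noise / noise.std()
```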
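"Noise Offset: 0.0357" in Step 4 is the standard noise-offset trick: a small random per-sample, per-channel constant is mixed into the training noise so the model can reach very dark and very bright images. A minimal sketch:

```python
import torch

def offset_noise(latents, noise_offset=0.0357):
    """Gaussian noise plus a small random per-(sample, channel) offset."""
    noise = torch.randn_like(latents)
    offset = torch.randn(latents.shape[0], latents.shape[1], 1, 1,
                         device=latents.device, dtype=latents.dtype)
    return noise + noise_offset * offset
```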

Added series:

  • DanDaDan

The model falls under Fair AI Public License 1.0-SD with no additional terms.
