Core ML Converted Model:
- This model was converted to Core ML for use on Apple Silicon devices. Conversion instructions can be found here.
- Provide the model to an app such as Mochi Diffusion (Github / Discord) to generate images.
- The `split_einsum` version is compatible with all compute unit options, including the Neural Engine. The `original` version is only compatible with the CPU & GPU option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from the original model source.
- Not all features and/or results may be available in Core ML format.
- This model does not have the unet split into chunks.
- This model does not include a safety checker (for NSFW content).
lofi-V2:
Source(s): CivitAI
L.O.F.I: Limitless Originality Free from Interference
🧐 No special face alignment
🚀 Improve line details
🚀 Improve prompt understanding
Based on the LOFI-v1 model, fine-tuned for 80,000 steps / 300 epochs
📷 More camera concepts
🎨 Exact palette
Prompt suggestions
Since the text encoder in this model is well trained, do not use very high attention weights to control emphasis; they can cause drawing errors. It is recommended that all attention weights stay at or below 1.2.
If there is no special composition requirement, a long list of negative prompts is unnecessary; terms such as "missing hands" may actually degrade human body drawing. DeepNegative alone is sufficient as a negative prompt.
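The 1.2 cap above refers to WebUI-style `(token:weight)` attention syntax. As a minimal sketch (the helper name is hypothetical, and nested shorthand like `((token))` is not handled), a prompt could be checked for over-weighted tokens like this:

```python
import re

def check_attention_weights(prompt: str, max_weight: float = 1.2):
    """Flag WebUI-style explicit attention weights above max_weight.

    Only matches the explicit (token:1.3) form; this is a sketch,
    not a full WebUI prompt parser.
    """
    too_high = []
    for token, weight in re.findall(r"\(([^():]+):([0-9.]+)\)", prompt):
        if float(weight) > max_weight:
            too_high.append((token.strip(), float(weight)))
    return too_high

# (masterpiece:1.4) exceeds the suggested 1.2 cap; (portrait:1.1) is fine
print(check_attention_weights("(masterpiece:1.4), (portrait:1.1), lofi"))
# → [('masterpiece', 1.4)]
```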
It is strongly recommended to generate with hires.fix. Recommended parameters:
- Size: 256x384, Denoising strength: 0.75
- Steps: 20, Sampler: Euler a, CFG scale: 7
- Hires upscale: 2, Hires steps: 40
- Hires upscaler: Latent (bicubic antialiased)
- Final output: 512x768
Most of the sample images were generated with hires.fix.
Note: if you use hires.fix, you may not be able to reproduce an image with the same set of parameters in the WebUI, because hires.fix introduces double randomness.
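The recommended settings pair a small base resolution with a 2x latent upscale; the relationship between base size, upscale factor, and final output is simple multiplication. A tiny sketch (the helper name is hypothetical):

```python
def hires_dimensions(base_w: int, base_h: int, upscale: float) -> tuple:
    """Final output size produced by hires.fix:
    base render size multiplied by the hires upscale factor."""
    return int(base_w * upscale), int(base_h * upscale)

# Recommended settings above: 256x384 base with a 2x hires upscale
print(hires_dimensions(256, 384, 2))  # → (512, 768)
```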