---
pipeline_tag: text-to-image
widget:
  - text: >-
      movie scene screencap, cinematic footage. thanos smelling a little yellow
      rose. extreme wide angle,
    output:
      url: 1man.png
  - text: 'A tiny robot taking a break under a tree in the garden '
    output:
      url: robot.png
  - text: mystery
    output:
      url: mystery.png
  - text: a cat wearing sunglasses in the summer
    output:
      url: cat.png
  - text: 'robot holding a sign that says ’a storm is coming’ '
    output:
      url: storm.png
  - text: >-
      The Exegenesis of the soul, captured within a boundless well of starlight,
      pulsating and vibrating wisps, chiaroscuro, humming transformer
    output:
      url: soul.png
  - text: 'Lady of War, chique dark clothes, vinyl, imposing pose, anime style, 90s'
    output:
      url: anime.png
  - text: natural photography of a man, glasses, cinematic,
    output:
      url: glasses.png
  - text: if I could turn back time
    output:
      url: time.png
---

### Constructive Deconstruction: Domain-Agnostic Debiasing of Diffusion Models

## Introduction

Constructive Deconstruction is a new approach to debiasing diffusion models used in generative tasks such as image synthesis. By removing biases inherited from the training data, it improves the quality and fidelity of generated images across domains. The technique involves overtraining the model to a controlled noisy state, applying nightshading, and using bucketing techniques to realign the model's internal representations.

## Methodology

- **Overtraining to a controlled noisy state:** By deliberately overtraining the model until it fails predictably, we create a controlled noisy state. This state makes it easier to identify and address the biases inherited from the training data.
- **Nightshading:** Nightshading is repurposed to induce a controlled failure, making the model easier to retrain. Carefully selected data points are injected to stress the model and cause predictable failures.
- **Bucketing:** Using interpolation techniques such as slerp (spherical linear interpolation) and bislerp, we merge the induced noise back into the model. This step highlights the model's learned knowledge while suppressing its biases.
- **Retraining and fine-tuning:** The noisy state is retrained on a large, diverse dataset to produce a new base model called "Mobius." Initial issues such as grainy details and inconsistent colors are resolved during fine-tuning, yielding high-quality, unbiased outputs.

## Results and Highlights

- **Increased diversity of outputs:** Training the model on high-quality data naturally increases the diversity of generated outputs without intentionally loosening associations, improving generalization and variety.
- **Enhanced quality:** The fine-tuning process eliminates the initial artifacts, producing clear, consistent, high-quality images.
- **Versatility across styles:** Mobius performs strongly across art styles and domains, handling a wide range of artistic expressions with precision and creativity.

## Conclusion

To be determined.

## Usage and Recommendations

- Requires a CLIP skip of -3.
- It is highly recommended to prepend "watermark" to all negative prompts, and to keep negatives simple, e.g. "watermark" or "worst, watermark".

This model supports and encourages experimentation with various tags, offering users the freedom to explore their creative visions in depth.

## License

Refer to the file named License in this repo.
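The card does not ship the merge code used in the bucketing step, so as an illustrative sketch only (not the authors' implementation), here is what slerp over two flattened weight vectors looks like in pure Python. The function name and list-based representation are assumptions for clarity; a real merge would operate tensor-by-tensor over the model's state dict.

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between two weight vectors.

    Interpolates along the great-circle arc between v0 and v1 at
    fraction t, which preserves the angular structure of the
    parameters better than plain linear interpolation when merging
    model weights.
    """
    # Angle between the two vectors (via normalized dot product).
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))  # guard against rounding error
    theta = math.acos(dot)

    if theta < 1e-6:
        # Nearly parallel vectors: fall back to linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]

    # Standard slerp weights.
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Midpoint between two orthogonal unit vectors stays on the unit circle.
mid = slerp([1.0, 0.0], [0.0, 1.0], 0.5)  # ≈ [0.7071, 0.7071]
```

Unlike linear averaging, which would give [0.5, 0.5] (norm ≈ 0.71) in the example above, slerp keeps the interpolated weights at unit norm, which is why it is favored for merging normalized model parameters.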