diff --git a/.gitattributes b/.gitattributes index a6344aac8c09253b3b630fb776ae94478aa0275b..37a1056b4ff6ed030a2bcc2602699ea31a92021d 100644 --- a/.gitattributes +++ b/.gitattributes @@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text *.zip filter=lfs diff=lfs merge=lfs -text *.zst filter=lfs diff=lfs merge=lfs -text *tfevents* filter=lfs diff=lfs merge=lfs -text +assets/repo_figures/Picture1.jpg filter=lfs diff=lfs merge=lfs -text +assets/repo_figures/Picture4.jpg filter=lfs diff=lfs merge=lfs -text +assets/repo_figures/Picture5.jpg filter=lfs diff=lfs merge=lfs -text +assets/repo_figures/Picture7.jpg filter=lfs diff=lfs merge=lfs -text +assets/repo_figures/Picture8.jpg filter=lfs diff=lfs merge=lfs -text +src/examples/source/art.jpg filter=lfs diff=lfs merge=lfs -text diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64 --- /dev/null +++ b/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/README.md b/README.md index 5dbba8001bba549602b92293b00b98686e8b7826..d558fc1cc4848821ed0d0db6c44635e1ab6809af 100644 --- a/README.md +++ b/README.md @@ -1,14 +1,196 @@ ---- -title: RF Solver Edit -emoji: 📚 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 5.7.1 -app_file: app.py -pinned: false -license: mit -short_description: Using FLUX for image editing! ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference +
+ +# Taming Rectified Flow for Inversion and Editing + +[Jiangshan Wang](https://scholar.google.com/citations?user=HoKoCv0AAAAJ&hl=en)1,2, [Junfu Pu](https://pujunfu.github.io/)2, [Zhongang Qi](https://scholar.google.com/citations?hl=en&user=zJvrrusAAAAJ&view_op=list_works&sortby=pubdate)2, [Jiayi Guo](https://www.jiayiguo.net)1, [Yue Ma](https://mayuelala.github.io/)3,
[Nisha Huang](https://scholar.google.com/citations?user=wTmPkSsAAAAJ&hl=en)1, [Yuxin Chen](https://scholar.google.com/citations?hl=en&user=dEm4OKAAAAAJ)2, [Xiu Li](https://scholar.google.com/citations?user=Xrh1OIUAAAAJ&hl=en&oi=ao)1, [Ying Shan](https://scholar.google.com/citations?hl=en&user=4oXBp9UAAAAJ&view_op=list_works&sortby=pubdate)2 + +1 Tsinghua University, 2 Tencent ARC Lab, 3 HKUST + +[![arXiv](https://img.shields.io/badge/arXiv-RFSolverEdit-b31b1b.svg)](https://arxiv.org/abs/2411.04746) + + +
+ + + + + +

+We propose RF-Solver to solve the rectified flow ODE with reduced approximation error, enhancing both sampling quality and inversion-reconstruction accuracy for rectified-flow-based generative models. Furthermore, we propose RF-Edit, which leverages RF-Solver for image and video editing tasks. Our methods achieve impressive performance on various tasks, including text-to-image generation, image/video inversion, and image/video editing.
+

+ + + +

+ +

+
+# 🔥 News
+- [2024.11.18] More examples for style transfer are available!
+- [2024.11.18] Gradio Demo for image editing is available!
+- [2024.11.11] The homepage of the project is available!
+- [2024.11.08] Code for image editing is released!
+- [2024.11.08] Paper released!
+
+# 👨‍💻 ToDo
+- ☑️ Release the gradio demo
+- ☑️ Release scripts for more image editing cases
+- ☐ Release the code for video editing
+
+
+# 📖 Method
+## RF-Solver
+

+
+We derive the exact formulation of the solution to the rectified flow ODE. The non-linear term in this solution is approximated via Taylor expansion; higher-order expansion significantly reduces the approximation error, yielding impressive performance on both text-to-image sampling and image/video inversion. A minimal sketch of a single solver step is given below.
+
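+The following is an illustrative sketch of one second-order solver step, not the repository's exact implementation: `velocity(z, t)` stands in for the FLUX transformer's velocity prediction, and the half-step finite difference used to estimate the time derivative is an assumption made for illustration.
+
+```
+def rf_solver_step(velocity, z, t_curr, t_next):
+    """One second-order RF-Solver-style update from t_curr to t_next (sketch)."""
+    dt = t_next - t_curr
+
+    # First-order (Euler) direction at the current point.
+    v = velocity(z, t_curr)
+
+    # Estimate dv/dt with one extra velocity evaluation at a half step (assumption).
+    z_mid = z + 0.5 * dt * v
+    v_mid = velocity(z_mid, t_curr + 0.5 * dt)
+    dv_dt = (v_mid - v) / (0.5 * dt)
+
+    # Second-order Taylor expansion of the exact ODE solution:
+    #   z(t_next) ≈ z(t_curr) + dt * v + 0.5 * dt^2 * dv/dt
+    return z + dt * v + 0.5 * dt * dt * dv_dt
+```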

+ +## RF-Edit +

+
+Based on RF-Solver, we further propose RF-Edit for image and video editing. The RF-Edit framework reuses features obtained during inversion in the denoising process, which enables high-quality editing while preserving the structural information of the source image/video. RF-Edit contains two sub-modules, for image editing and video editing respectively. The feature-sharing idea is sketched below.
+
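+The sketch below illustrates this feature sharing for image editing. It reuses the `rf_solver_step` sketch above and assumes hypothetical `pop_cached_features` / `set_injected_features` hooks on the model; the actual code stores features from selected transformer blocks during inversion and injects them during the first `--inject` denoising steps.
+
+```
+def rf_edit(model, z_src, timesteps, source_prompt, target_prompt, inject):
+    """Sketch: cache features during inversion, re-inject them while denoising."""
+    feature_cache = []
+
+    # 1) Inversion: integrate from the clean latent towards noise with the source
+    #    prompt, caching intermediate transformer features (hypothetical hook).
+    z = z_src
+    for t_curr, t_next in zip(timesteps, timesteps[1:]):
+        z = rf_solver_step(lambda x, t: model(x, t, prompt=source_prompt), z, t_curr, t_next)
+        feature_cache.append(model.pop_cached_features())  # hypothetical hook
+
+    # 2) Editing: denoise back with the target prompt; for the first `inject` steps,
+    #    replace the corresponding block features with the cached ones to preserve
+    #    the structure of the source image.
+    rev = list(reversed(timesteps))
+    for step, (t_curr, t_next) in enumerate(zip(rev, rev[1:])):
+        if step < inject:
+            model.set_injected_features(feature_cache[-1 - step])  # hypothetical hook
+        z = rf_solver_step(lambda x, t: model(x, t, prompt=target_prompt), z, t_curr, t_next)
+    return z
+```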

+
+# 🛠️ Code Setup
+Our code uses the same environment as FLUX. You can refer to the [official repo](https://github.com/black-forest-labs/flux/tree/main) of FLUX, or run the following commands to set up the environment.
+```
+conda create --name RF-Solver-Edit python=3.10
+conda activate RF-Solver-Edit
+pip install -e ".[all]"
+```
+# 🚀 Examples for Image Editing
+We have provided several scripts to reproduce the results in the paper, mainly covering three types of editing: stylization, adding, and replacing. We suggest running the experiments on a single A100 GPU. A hypothetical example invocation is sketched after the tables below.
+
+## Stylization
+
Ref Style
Editing Scripts: Trump | Marilyn Monroe | Einstein
Edited image
Editing Scripts: Biden | Batman | Harry Potter
Edited image
+
+## Adding & Replacing
+
Source image
Editing Scripts: + hiking stick | horse -> camel | + dog
Edited image
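+For reference, a hypothetical invocation of `src/edit.py` (the command-line interface described in the next section) for the "horse -> camel" case could look like the following; the prompts, paths, and `--inject` value are illustrative only.
+
+```
+# Hypothetical example -- adjust prompts, paths, and --inject for your own image.
+cd src
+python edit.py --source_prompt "A horse is standing in the field." \
+               --target_prompt "A camel is standing in the field." \
+               --guidance 2 \
+               --source_img_dir ../assets/repo_figures/examples/source/horse.jpg \
+               --num_steps 30 \
+               --inject 3 \
+               --name 'flux-dev' --offload \
+               --output_dir ./output/horse2camel
+```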
+
+
+# 🪄 Edit Your Own Image
+
+## Gradio Demo
+We provide a Gradio demo for image editing. Run the following command:
+```
+cd src
+python gradio_demo.py
+```
+Here is an example of using the Gradio demo to edit an image. Note that "Number of inject steps" is the number of feature-sharing steps in RF-Edit, which strongly affects the quality of the edited results. We suggest tuning this parameter and selecting the result with the best visual quality.
+
+ +
+
+
+## Command Line
+You can also run the following script to edit your own image.
+```
+cd src
+python edit.py --source_prompt [describe the content of your image, or leave it empty] \
+               --target_prompt [describe your editing requirements] \
+               --guidance 2 \
+               --source_img_dir [the path of your source image] \
+               --num_steps 30 \
+               --inject [typically a number between 2 and 8] \
+               --name 'flux-dev' --offload \
+               --output_dir [output path]
+```
+Similarly, ```--inject``` specifies the number of feature-sharing steps in RF-Edit, which strongly affects the editing quality.
+
+
+
+# 🖼️ Gallery
+## Inversion and Reconstruction
+

+ +

+ +## Image Stylization + +

+ +

+ +## Image Editing + +

+ +

+ +## Video Editing + +

+ +

+ +# 🖋️ Citation + +If you find our work helpful, please **star 🌟** this repo and **cite 📑** our paper. Thanks for your support! + +``` +@article{wang2024taming, + title={Taming Rectified Flow for Inversion and Editing}, + author={Wang, Jiangshan and Pu, Junfu and Qi, Zhongang and Guo, Jiayi and Ma, Yue and Huang, Nisha and Chen, Yuxin and Li, Xiu and Shan, Ying}, + journal={arXiv preprint arXiv:2411.04746}, + year={2024} +} +``` + +# Acknowledgements +We thank [FLUX](https://github.com/black-forest-labs/flux/tree/main) for their clean codebase. + +# Contact +The code in this repository is still being reorganized. Errors that may arise during the organizing process could lead to code malfunctions or discrepancies from the original research results. If you have any questions or concerns, please send email to wjs23@mails.tsinghua.edu.cn. \ No newline at end of file diff --git a/assets/repo_figures/Picture1.jpg b/assets/repo_figures/Picture1.jpg new file mode 100644 index 0000000000000000000000000000000000000000..858d2a05945dafafefbc1a2a61f4839a34cbfd7d --- /dev/null +++ b/assets/repo_figures/Picture1.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e6d3f78993781daa9affdcbd3d34dc8a553d5b401617257b7bccb9dc1f77ebcb +size 1557766 diff --git a/assets/repo_figures/Picture2.jpg b/assets/repo_figures/Picture2.jpg new file mode 100644 index 0000000000000000000000000000000000000000..96d1cdb248075a41e3306b04c205c76d0ee37632 Binary files /dev/null and b/assets/repo_figures/Picture2.jpg differ diff --git a/assets/repo_figures/Picture3.jpg b/assets/repo_figures/Picture3.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0d5d48c6dc8f25deacf199e261f1edeeb0cce53c Binary files /dev/null and b/assets/repo_figures/Picture3.jpg differ diff --git a/assets/repo_figures/Picture4.jpg b/assets/repo_figures/Picture4.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a783ab417cd09784bf4c1020f0751545ba7db052 --- /dev/null +++ b/assets/repo_figures/Picture4.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14bdf24d506cb83372f706406aac1a5d629f1cbf2ff09c86d068002743706163 +size 3213315 diff --git a/assets/repo_figures/Picture5.jpg b/assets/repo_figures/Picture5.jpg new file mode 100644 index 0000000000000000000000000000000000000000..891ab482dc3e9e9879ab8d9eb4020197cf48324d --- /dev/null +++ b/assets/repo_figures/Picture5.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a2583bf83d475f2d752d8e06cc5ad5623fba99c7f5fb617d4896819ac274612 +size 3258696 diff --git a/assets/repo_figures/Picture6.jpg b/assets/repo_figures/Picture6.jpg new file mode 100644 index 0000000000000000000000000000000000000000..282e9a57ac8f4553f270a7c9cd670fdc3b761eb6 Binary files /dev/null and b/assets/repo_figures/Picture6.jpg differ diff --git a/assets/repo_figures/Picture7.jpg b/assets/repo_figures/Picture7.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b5634e8974174bee885b7d31fe3f6b6e83eef87b --- /dev/null +++ b/assets/repo_figures/Picture7.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c4398b4465c16c3c5e91391f975b4aa2a14c7154b14f2b22a94d1d8fc5aa7b5 +size 2953523 diff --git a/assets/repo_figures/Picture8.jpg b/assets/repo_figures/Picture8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..bb8294a8b7fb905b81548d9886c8aa43ae224e56 --- /dev/null +++ b/assets/repo_figures/Picture8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:b6a271153eb4d185f5e49b5cfad8a2d1d642aac44ce9f7fe698b3d443f0b00a4 +size 5658970 diff --git a/assets/repo_figures/examples/edit/art_batman.jpg b/assets/repo_figures/examples/edit/art_batman.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b081d5b4488b9e519b0d4a42581d8314e925da1f Binary files /dev/null and b/assets/repo_figures/examples/edit/art_batman.jpg differ diff --git a/assets/repo_figures/examples/edit/art_mari.jpg b/assets/repo_figures/examples/edit/art_mari.jpg new file mode 100644 index 0000000000000000000000000000000000000000..89f03fd0f76698a0df3938f0ba9addb623a1fb0a Binary files /dev/null and b/assets/repo_figures/examples/edit/art_mari.jpg differ diff --git a/assets/repo_figures/examples/edit/boy.jpg b/assets/repo_figures/examples/edit/boy.jpg new file mode 100644 index 0000000000000000000000000000000000000000..81a13fb422c79e39214528d5971bba6481b7cfba Binary files /dev/null and b/assets/repo_figures/examples/edit/boy.jpg differ diff --git a/assets/repo_figures/examples/edit/cartoon_ein.jpg b/assets/repo_figures/examples/edit/cartoon_ein.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0fdeca220b2cd0a941d8ee952eb0a3a79bda44c9 Binary files /dev/null and b/assets/repo_figures/examples/edit/cartoon_ein.jpg differ diff --git a/assets/repo_figures/examples/edit/cartoon_herry.jpg b/assets/repo_figures/examples/edit/cartoon_herry.jpg new file mode 100644 index 0000000000000000000000000000000000000000..8a88e90bf7e5a5a3941591dc4ecaf41e13e5fc33 Binary files /dev/null and b/assets/repo_figures/examples/edit/cartoon_herry.jpg differ diff --git a/assets/repo_figures/examples/edit/hiking.jpg b/assets/repo_figures/examples/edit/hiking.jpg new file mode 100644 index 0000000000000000000000000000000000000000..5b481e3519a076c6f7e5268d048dfccc56cf7f5d Binary files /dev/null and b/assets/repo_figures/examples/edit/hiking.jpg differ diff --git a/assets/repo_figures/examples/edit/horse.jpg b/assets/repo_figures/examples/edit/horse.jpg new file mode 100644 index 0000000000000000000000000000000000000000..bdb67e642688b919773dbef35f8bd5f4de17a6f5 Binary files /dev/null and b/assets/repo_figures/examples/edit/horse.jpg differ diff --git a/assets/repo_figures/examples/edit/nobel_Biden.jpg b/assets/repo_figures/examples/edit/nobel_Biden.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0ac58bda8144ef9cbcdaced3903573f44defe87d Binary files /dev/null and b/assets/repo_figures/examples/edit/nobel_Biden.jpg differ diff --git a/assets/repo_figures/examples/edit/nobel_Trump.jpg b/assets/repo_figures/examples/edit/nobel_Trump.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ef64ef9e3c526bbd8fb3d0ed6f6dabd58cf618a4 Binary files /dev/null and b/assets/repo_figures/examples/edit/nobel_Trump.jpg differ diff --git a/assets/repo_figures/examples/source/art.jpg b/assets/repo_figures/examples/source/art.jpg new file mode 100644 index 0000000000000000000000000000000000000000..7e6f7a2e3fd69e73bd3ddce97f93be9af4bcb718 Binary files /dev/null and b/assets/repo_figures/examples/source/art.jpg differ diff --git a/assets/repo_figures/examples/source/boy.jpg b/assets/repo_figures/examples/source/boy.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f218d16216e2d55487e81eaa92b3ef1fdaca9dd4 Binary files /dev/null and b/assets/repo_figures/examples/source/boy.jpg differ diff --git a/assets/repo_figures/examples/source/cartoon.jpg b/assets/repo_figures/examples/source/cartoon.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..50856ec3fbc40d826a8f26702bc152092bd59158 Binary files /dev/null and b/assets/repo_figures/examples/source/cartoon.jpg differ diff --git a/assets/repo_figures/examples/source/hiking.jpg b/assets/repo_figures/examples/source/hiking.jpg new file mode 100644 index 0000000000000000000000000000000000000000..aa03d5ccdb71d960c953108fdb19ae10af44c8b6 Binary files /dev/null and b/assets/repo_figures/examples/source/hiking.jpg differ diff --git a/assets/repo_figures/examples/source/horse.jpg b/assets/repo_figures/examples/source/horse.jpg new file mode 100644 index 0000000000000000000000000000000000000000..6eb0009cce69dbb1e5123e4a05622c22efb0ef6a Binary files /dev/null and b/assets/repo_figures/examples/source/horse.jpg differ diff --git a/assets/repo_figures/examples/source/nobel.jpg b/assets/repo_figures/examples/source/nobel.jpg new file mode 100644 index 0000000000000000000000000000000000000000..7e8c305e22f7cb25c8dcdf45628af67bf3244b6d Binary files /dev/null and b/assets/repo_figures/examples/source/nobel.jpg differ diff --git a/model_cards/FLUX.1-dev.md b/model_cards/FLUX.1-dev.md new file mode 100644 index 0000000000000000000000000000000000000000..a8d6d8e1766b4383d35c783b4dbc52102193951c --- /dev/null +++ b/model_cards/FLUX.1-dev.md @@ -0,0 +1,46 @@ +![FLUX.1 [dev] Grid](../assets/dev_grid.jpg) + +`FLUX.1 [dev]` is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. +For more information, please read our [blog post](https://blackforestlabs.ai/announcing-black-forest-labs/). + +# Key Features +1. Cutting-edge output quality, second only to our state-of-the-art model `FLUX.1 [pro]`. +2. Competitive prompt following, matching the performance of closed source alternatives. +3. Trained using guidance distillation, making `FLUX.1 [dev]` more efficient. +4. Open weights to drive new scientific research, and empower artists to develop innovative workflows. +5. Generated outputs can be used for personal, scientific, and commercial purposes, as described in the [flux-1-dev-non-commercial-license](./licence.md). + +# Usage +We provide a reference implementation of `FLUX.1 [dev]`, as well as sampling code, in a dedicated [github repository](https://github.com/black-forest-labs/flux). +Developers and creatives looking to build on top of `FLUX.1 [dev]` are encouraged to use this as a starting point. + +## API Endpoints +The FLUX.1 models are also available via API from the following sources +1. [bfl.ml](https://docs.bfl.ml/) (currently `FLUX.1 [pro]`) +2. [replicate.com](https://replicate.com/collections/flux) +3. [fal.ai](https://fal.ai/models/fal-ai/flux/dev) + +## ComfyUI +`FLUX.1 [dev]` is also available in [Comfy UI](https://github.com/comfyanonymous/ComfyUI) for local inference with a node-based workflow. + +--- +# Limitations +- This model is not intended or able to provide factual information. +- As a statistical model this checkpoint might amplify existing societal biases. +- The model may fail to generate output that matches the prompts. +- Prompt following is heavily influenced by the prompting-style. + +# Out-of-Scope Use +The model and its derivatives may not be used + +- In any way that violates any applicable national, federal, state, local or international law or regulation. +- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; including but not limited to the solicitation, creation, acquisition, or dissemination of child exploitative content. 
+- To generate or disseminate verifiably false information and/or content with the purpose of harming others. +- To generate or disseminate personal identifiable information that can be used to harm an individual. +- To harass, abuse, threaten, stalk, or bully individuals or groups of individuals. +- To create non-consensual nudity or illegal pornographic content. +- For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation. +- Generating or facilitating large-scale disinformation campaigns. + +# License +This model falls under the [`FLUX.1 [dev]` Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). diff --git a/model_cards/FLUX.1-schnell.md b/model_cards/FLUX.1-schnell.md new file mode 100644 index 0000000000000000000000000000000000000000..4694d82131b52b9830f3a16c0c76b3a9c1905427 --- /dev/null +++ b/model_cards/FLUX.1-schnell.md @@ -0,0 +1,41 @@ +![FLUX.1 [schnell] Grid](../assets/schnell_grid.jpg) + +`FLUX.1 [schnell]` is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. +For more information, please read our [blog post](https://blackforestlabs.ai/announcing-black-forest-labs/). + +# Key Features +1. Cutting-edge output quality and competitive prompt following, matching the performance of closed source alternatives. +2. Trained using latent adversarial diffusion distillation, `FLUX.1 [schnell]` can generate high-quality images in only 1 to 4 steps. +3. Released under the `apache-2.0` licence, the model can be used for personal, scientific, and commercial purposes. + +# Usage +We provide a reference implementation of `FLUX.1 [schnell]`, as well as sampling code, in a dedicated [github repository](https://github.com/black-forest-labs/flux). +Developers and creatives looking to build on top of `FLUX.1 [schnell]` are encouraged to use this as a starting point. + +## API Endpoints +The FLUX.1 models are also available via API from the following sources +1. [bfl.ml](https://docs.bfl.ml/) (currently `FLUX.1 [pro]`) +2. [replicate.com](https://replicate.com/collections/flux) +3. [fal.ai](https://fal.ai/models/fal-ai/flux/schnell) + +## ComfyUI +`FLUX.1 [schnell]` is also available in [Comfy UI](https://github.com/comfyanonymous/ComfyUI) for local inference with a node-based workflow. + +--- +# Limitations +- This model is not intended or able to provide factual information. +- As a statistical model this checkpoint might amplify existing societal biases. +- The model may fail to generate output that matches the prompts. +- Prompt following is heavily influenced by the prompting-style. + +# Out-of-Scope Use +The model and its derivatives may not be used + +- In any way that violates any applicable national, federal, state, local or international law or regulation. +- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; including but not limited to the solicitation, creation, acquisition, or dissemination of child exploitative content. +- To generate or disseminate verifiably false information and/or content with the purpose of harming others. +- To generate or disseminate personal identifiable information that can be used to harm an individual. +- To harass, abuse, threaten, stalk, or bully individuals or groups of individuals. +- To create non-consensual nudity or illegal pornographic content. 
+- For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation. +- Generating or facilitating large-scale disinformation campaigns. diff --git a/model_licenses/LICENSE-FLUX1-dev b/model_licenses/LICENSE-FLUX1-dev new file mode 100644 index 0000000000000000000000000000000000000000..d91cf0bcef46f7ab49551034ccf3bea6b765f8d6 --- /dev/null +++ b/model_licenses/LICENSE-FLUX1-dev @@ -0,0 +1,42 @@ +FLUX.1 [dev] Non-Commercial License +Black Forest Labs, Inc. (“we” or “our” or “Company”) is pleased to make available the weights, parameters and inference code for the FLUX.1 [dev] Model (as defined below) freely available for your non-commercial and non-production use as set forth in this FLUX.1 [dev] Non-Commercial License (“License”). The “FLUX.1 [dev] Model” means the FLUX.1 [dev] text-to-image AI model and its elements which includes algorithms, software, checkpoints, parameters, source code (inference code, evaluation code, and if applicable, fine-tuning code) and any other materials associated with the FLUX.1 [dev] AI model made available by Company under this License, including if any, the technical documentation, manuals and instructions for the use and operation thereof (collectively, “FLUX.1 [dev] Model”). +By downloading, accessing, use, Distributing (as defined below), or creating a Derivative (as defined below) of the FLUX.1 [dev] Model, you agree to the terms of this License. If you do not agree to this License, then you do not have any rights to access, use, Distribute or create a Derivative of the FLUX.1 [dev] Model and you must immediately cease using the FLUX.1 [dev] Model. If you are agreeing to be bound by the terms of this License on behalf of your employer or other entity, you represent and warrant to us that you have full legal authority to bind your employer or such entity to this License. If you do not have the requisite authority, you may not accept the License or access the FLUX.1 [dev] Model on behalf of your employer or other entity. + 1. Definitions. Capitalized terms used in this License but not defined herein have the following meanings: + a. “Derivative” means any (i) modified version of the FLUX.1 [dev] Model (including but not limited to any customized or fine-tuned version thereof), (ii) work based on the FLUX.1 [dev] Model, or (iii) any other derivative work thereof. For the avoidance of doubt, Outputs are not considered Derivatives under this License. + b. “Distribution” or “Distribute” or “Distributing” means providing or making available, by any means, a copy of the FLUX.1 [dev] Models and/or the Derivatives as the case may be. + c. “Non-Commercial Purpose” means any of the following uses, but only so far as you do not receive any direct or indirect payment arising from the use of the model or its output: (i) personal use for research, experiment, and testing for the benefit of public knowledge, personal study, private entertainment, hobby projects, or otherwise not directly or indirectly connected to any commercial activities, business operations, or employment responsibilities; (ii) use by commercial or for-profit entities for testing, evaluation, or non-commercial research and development in a non-production environment, (iii) use by any charitable organization for charitable purposes, or for testing or evaluation. 
For clarity, use for revenue-generating activity or direct interactions with or impacts on end users, or use to train, fine tune or distill other models for commercial use is not a Non-Commercial purpose. + d. “Outputs” means any content generated by the operation of the FLUX.1 [dev] Models or the Derivatives from a prompt (i.e., text instructions) provided by users. For the avoidance of doubt, Outputs do not include any components of a FLUX.1 [dev] Models, such as any fine-tuned versions of the FLUX.1 [dev] Models, the weights, or parameters. + e. “you” or “your” means the individual or entity entering into this License with Company. + 2. License Grant. + a. License. Subject to your compliance with this License, Company grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty free and limited license to access, use, create Derivatives of, and Distribute the FLUX.1 [dev] Models solely for your Non-Commercial Purposes. The foregoing license is personal to you, and you may not assign or sublicense this License or any other rights or obligations under this License without Company’s prior written consent; any such assignment or sublicense will be void and will automatically and immediately terminate this License. Any restrictions set forth herein in regarding the FLUX.1 [dev] Model also applies to any Derivative you create or that are created on your behalf. + b. Non-Commercial Use Only. You may only access, use, Distribute, or creative Derivatives of or the FLUX.1 [dev] Model or Derivatives for Non-Commercial Purposes. If You want to use a FLUX.1 [dev] Model a Derivative for any purpose that is not expressly authorized under this License, such as for a commercial activity, you must request a license from Company, which Company may grant to you in Company’s sole discretion and which additional use may be subject to a fee, royalty or other revenue share. Please contact Company at the following e-mail address if you want to discuss such a license: info@blackforestlabs.ai. + c. Reserved Rights. The grant of rights expressly set forth in this License are the complete grant of rights to you in the FLUX.1 [dev] Model, and no other licenses are granted, whether by waiver, estoppel, implication, equity or otherwise. Company and its licensors reserve all rights not expressly granted by this License. + d. Outputs. We claim no ownership rights in and to the Outputs. You are solely responsible for the Outputs you generate and their subsequent uses in accordance with this License. You may use Output for any purpose (including for commercial purposes), except as expressly prohibited herein. You may not use the Output to train, fine-tune or distill a model that is competitive with the FLUX.1 [dev] Model. + 3. Distribution. Subject to this License, you may Distribute copies of the FLUX.1 [dev] Model and/or Derivatives made by you, under the following conditions: + a. you must make available a copy of this License to third-party recipients of the FLUX.1 [dev] Models and/or Derivatives you Distribute, and specify that any rights to use the FLUX.1 [dev] Models and/or Derivatives shall be directly granted by Company to said third-party recipients pursuant to this License; + b. you must make prominently display the following notice alongside the Distribution of the FLUX.1 [dev] Model or Derivative (such as via a “Notice” text file distributed as part of such FLUX.1 [dev] Model or Derivative) (the “Attribution Notice”): +“The FLUX.1 [dev] Model is licensed by Black Forest Labs. 
Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs. Inc. +IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.” + c. in the case of Distribution of Derivatives made by you, you must also include in the Attribution Notice a statement that you have modified the applicable FLUX.1 [dev] Model; and + d. in the case of Distribution of Derivatives made by you, any terms and conditions you impose on any third-party recipients relating to Derivatives made by or for you shall neither limit such third-party recipients’ use of the FLUX.1 [dev] Model or any Derivatives made by or for Company in accordance with this License nor conflict with any of its terms and conditions. + e. In the case of Distribution of Derivatives made by you, you must not misrepresent or imply, through any means, that the Derivatives made by or for you and/or any modified version of the FLUX.1 [dev] Model you Distribute under your name and responsibility is an official product of the Company or has been endorsed, approved or validated by the Company, unless you are authorized by Company to do so in writing. + 4. Restrictions. You will not, and will not permit, assist or cause any third party to + a. use, modify, copy, reproduce, create Derivatives of, or Distribute the FLUX.1 [dev] Model (or any Derivative thereof, or any data produced by the FLUX.1 [dev] Model), in whole or in part, for (i) any commercial or production purposes, (ii) military purposes, (iii) purposes of surveillance, including any research or development relating to surveillance, (iv) biometric processing, (v) in any manner that infringes, misappropriates, or otherwise violates any third-party rights, or (vi) in any manner that violates any applicable law and violating any privacy or security laws, rules, regulations, directives, or governmental requirements (including the General Data Privacy Regulation (Regulation (EU) 2016/679), the California Consumer Privacy Act, and any and all laws governing the processing of biometric information), as well as all amendments and successor laws to any of the foregoing; + b. alter or remove copyright and other proprietary notices which appear on or in any portion of the FLUX.1 [dev] Model; + c. utilize any equipment, device, software, or other means to circumvent or remove any security or protection used by Company in connection with the FLUX.1 [dev] Model, or to circumvent or remove any usage restrictions, or to enable functionality disabled by FLUX.1 [dev] Model; or + d. offer or impose any terms on the FLUX.1 [dev] Model that alter, restrict, or are inconsistent with the terms of this License. + e. violate any applicable U.S. and non-U.S. export control and trade sanctions laws (“Export Laws”) in connection with your use or Distribution of any FLUX.1 [dev] Model; + f. directly or indirectly Distribute, export, or otherwise transfer FLUX.1 [dev] Model (a) to any individual, entity, or country prohibited by Export Laws; (b) to anyone on U.S. or non-U.S. government restricted parties lists; or (c) for any purpose prohibited by Export Laws, including nuclear, chemical or biological weapons, or missile technology applications; 3) use or download FLUX.1 [dev] Model if you or they are (a) located in a comprehensively sanctioned jurisdiction, (b) currently listed on any U.S. or non-U.S. 
restricted parties list, or (c) for any purpose prohibited by Export Laws; and (4) will not disguise your location through IP proxying or other methods. + 5. DISCLAIMERS. THE FLUX.1 [dev] MODEL IS PROVIDED “AS IS” AND “WITH ALL FAULTS” WITH NO WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. COMPANY EXPRESSLY DISCLAIMS ALL REPRESENTATIONS AND WARRANTIES, EXPRESS OR IMPLIED, WHETHER BY STATUTE, CUSTOM, USAGE OR OTHERWISE AS TO ANY MATTERS RELATED TO THE FLUX.1 [dev] MODEL, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, SATISFACTORY QUALITY, OR NON-INFRINGEMENT. COMPANY MAKES NO WARRANTIES OR REPRESENTATIONS THAT THE FLUX.1 [dev] MODEL WILL BE ERROR FREE OR FREE OF VIRUSES OR OTHER HARMFUL COMPONENTS, OR PRODUCE ANY PARTICULAR RESULTS. + 6. LIMITATION OF LIABILITY. TO THE FULLEST EXTENT PERMITTED BY LAW, IN NO EVENT WILL COMPANY BE LIABLE TO YOU OR YOUR EMPLOYEES, AFFILIATES, USERS, OFFICERS OR DIRECTORS (A) UNDER ANY THEORY OF LIABILITY, WHETHER BASED IN CONTRACT, TORT, NEGLIGENCE, STRICT LIABILITY, WARRANTY, OR OTHERWISE UNDER THIS LICENSE, OR (B) FOR ANY INDIRECT, CONSEQUENTIAL, EXEMPLARY, INCIDENTAL, PUNITIVE OR SPECIAL DAMAGES OR LOST PROFITS, EVEN IF COMPANY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THE FLUX.1 [dev] MODEL, ITS CONSTITUENT COMPONENTS, AND ANY OUTPUT (COLLECTIVELY, “MODEL MATERIALS”) ARE NOT DESIGNED OR INTENDED FOR USE IN ANY APPLICATION OR SITUATION WHERE FAILURE OR FAULT OF THE MODEL MATERIALS COULD REASONABLY BE ANTICIPATED TO LEAD TO SERIOUS INJURY OF ANY PERSON, INCLUDING POTENTIAL DISCRIMINATION OR VIOLATION OF AN INDIVIDUAL’S PRIVACY RIGHTS, OR TO SEVERE PHYSICAL, PROPERTY, OR ENVIRONMENTAL DAMAGE (EACH, A “HIGH-RISK USE”). IF YOU ELECT TO USE ANY OF THE MODEL MATERIALS FOR A HIGH-RISK USE, YOU DO SO AT YOUR OWN RISK. YOU AGREE TO DESIGN AND IMPLEMENT APPROPRIATE DECISION-MAKING AND RISK-MITIGATION PROCEDURES AND POLICIES IN CONNECTION WITH A HIGH-RISK USE SUCH THAT EVEN IF THERE IS A FAILURE OR FAULT IN ANY OF THE MODEL MATERIALS, THE SAFETY OF PERSONS OR PROPERTY AFFECTED BY THE ACTIVITY STAYS AT A LEVEL THAT IS REASONABLE, APPROPRIATE, AND LAWFUL FOR THE FIELD OF THE HIGH-RISK USE. + 7. INDEMNIFICATION + +You will indemnify, defend and hold harmless Company and our subsidiaries and affiliates, and each of our respective shareholders, directors, officers, employees, agents, successors, and assigns (collectively, the “Company Parties”) from and against any losses, liabilities, damages, fines, penalties, and expenses (including reasonable attorneys’ fees) incurred by any Company Party in connection with any claim, demand, allegation, lawsuit, proceeding, or investigation (collectively, “Claims”) arising out of or related to (a) your access to or use of the FLUX.1 [dev] Model (as well as any Output, results or data generated from such access or use), including any High-Risk Use (defined below); (b) your violation of this License; or (c) your violation, misappropriation or infringement of any rights of another (including intellectual property or other proprietary rights and privacy rights). You will promptly notify the Company Parties of any such Claims, and cooperate with Company Parties in defending such Claims. You will also grant the Company Parties sole control of the defense or settlement, at Company’s sole option, of any Claims. 
This indemnity is in addition to, and not in lieu of, any other indemnities or remedies set forth in a written agreement between you and Company or the other Company Parties. + 8. Termination; Survival. + a. This License will automatically terminate upon any breach by you of the terms of this License. + b. We may terminate this License, in whole or in part, at any time upon notice (including electronic) to you. + c. If You initiate any legal action or proceedings against Company or any other entity (including a cross-claim or counterclaim in a lawsuit), alleging that the FLUX.1 [dev] Model or any Derivative, or any part thereof, infringe upon intellectual property or other rights owned or licensable by you, then any licenses granted to you under this License will immediately terminate as of the date such legal action or claim is filed or initiated. + d. Upon termination of this License, you must cease all use, access or Distribution of the FLUX.1 [dev] Model and any Derivatives. The following sections survive termination of this License 2(c), 2(d), 4-11. + 9. Third Party Materials. The FLUX.1 [dev] Model may contain third-party software or other components (including free and open source software) (all of the foregoing, “Third Party Materials”), which are subject to the license terms of the respective third-party licensors. Your dealings or correspondence with third parties and your use of or interaction with any Third Party Materials are solely between you and the third party. Company does not control or endorse, and makes no representations or warranties regarding, any Third Party Materials, and your access to and use of such Third Party Materials are at your own risk. + 10. Trademarks. You have not been granted any trademark license as part of this License and may not use any name or mark associated with Company without the prior written permission of Company, except to the extent necessary to make the reference required in the Attribution Notice as specified above or as is reasonably necessary in describing the FLUX.1 [dev] Model and its creators. + 11. General. This License will be governed and construed under the laws of the State of Delaware without regard to conflicts of law provisions. If any provision or part of a provision of this License is unlawful, void or unenforceable, that provision or part of the provision is deemed severed from this License, and will not affect the validity and enforceability of any remaining provisions. The failure of Company to exercise or enforce any right or provision of this License will not operate as a waiver of such right or provision. This License does not confer any third-party beneficiary rights upon any other person or entity. This License, together with the Documentation, contains the entire understanding between you and Company regarding the subject matter of this License, and supersedes all other written or oral agreements and understandings between you and Company regarding such subject matter. No change or addition to any provision of this License will be binding unless it is in writing and signed by an authorized representative of both you and Company. 
\ No newline at end of file diff --git a/model_licenses/LICENSE-FLUX1-schnell b/model_licenses/LICENSE-FLUX1-schnell new file mode 100644 index 0000000000000000000000000000000000000000..263e72a4a315b23a3cf29ed43dda8204459c4da3 --- /dev/null +++ b/model_licenses/LICENSE-FLUX1-schnell @@ -0,0 +1,54 @@ + + +Apache License +Version 2.0, January 2004 +http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + +"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. + +"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. + +"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. + +"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. + +"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. + +"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. + +"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). + +"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. + +"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." + +"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. + +2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: + + You must give any other recipients of the Work or Derivative Works a copy of this License; and + You must cause any modified files to carry prominent notices stating that You changed the files; and + You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and + If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. + +You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. + +5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. 
+ +END OF TERMS AND CONDITIONS diff --git a/pyproject.toml b/pyproject.toml new file mode 100644 index 0000000000000000000000000000000000000000..604e7fd5de595ebf05b2812173756c996d8c62fc --- /dev/null +++ b/pyproject.toml @@ -0,0 +1,97 @@ +[project] +name = "RF-Solver-Editing" +authors = [ + { name = "Jiangshan Wang, Junfu Pu, et al.", email = "wjs23@mails.tsinghua.edu.cn" }, +] +description = "Inference codebase for RF-Solver-Editing" +readme = "README.md" +requires-python = ">=3.10" +license = { file = "LICENSE.md" } +dynamic = ["version"] +dependencies = [ + "torch >= 2.0.0", + "torchvision", + "einops", + "fire >= 0.6.0", + "huggingface-hub", + "safetensors", + "sentencepiece", + "transformers", + "tokenizers", + "protobuf", + "requests", + "invisible-watermark", +] + +[project.optional-dependencies] +streamlit = [ + "streamlit", + "streamlit-keyup", +] +gradio = [ + "gradio", +] +all = [ + "flux[streamlit]", + "flux[gradio]", +] + +[project.scripts] +flux = "flux.edit:main" + +[build-system] +build-backend = "setuptools.build_meta" +requires = ["setuptools>=64", "wheel", "setuptools_scm>=8"] + +[tool.ruff] +line-length = 110 +target-version = "py310" +extend-exclude = ["/usr/lib/*"] + +[tool.ruff.lint] +ignore = [ + "E501", # line too long - will be fixed in format +] + +[tool.ruff.format] +quote-style = "double" +indent-style = "space" +line-ending = "auto" +skip-magic-trailing-comma = false +docstring-code-format = true +exclude = [ + "src/flux/_version.py", # generated by setuptools_scm +] + +[tool.ruff.lint.isort] +combine-as-imports = true +force-wrap-aliases = true +known-local-folder = ["src"] +known-first-party = ["flux"] + +[tool.pyright] +include = ["src"] +exclude = [ + "**/__pycache__", # cache directories + "./typings", # generated type stubs +] +stubPath = "./typings" + +[tool.tomlsort] +in_place = true +no_sort_tables = true +spaces_before_inline_comment = 1 +spaces_indent_inline_array = 2 +trailing_comma_inline_array = true +sort_first = [ + "project", + "build-system", + "tool.setuptools", +] + +# needs to be last for CI reasons +[tool.setuptools_scm] +write_to = "src/flux/_version.py" +parentdir_prefix_version = "flux-" +fallback_version = "0.0.0" +version_scheme = "post-release" diff --git a/setup.py b/setup.py new file mode 100644 index 0000000000000000000000000000000000000000..b908cbe55cb344569d32de1dfc10ca7323828dc5 --- /dev/null +++ b/setup.py @@ -0,0 +1,3 @@ +import setuptools + +setuptools.setup() diff --git a/src/edit.py b/src/edit.py new file mode 100644 index 0000000000000000000000000000000000000000..c28151305fbfa32661a2efec437f4cfba638b324 --- /dev/null +++ b/src/edit.py @@ -0,0 +1,248 @@ +import os +import re +import time +from dataclasses import dataclass +from glob import iglob +import argparse +import torch +from einops import rearrange +from fire import Fire +from PIL import ExifTags, Image + +from flux.sampling import denoise, get_schedule, prepare, unpack +from flux.util import (configs, embed_watermark, load_ae, load_clip, + load_flow_model, load_t5) +from transformers import pipeline +from PIL import Image +import numpy as np + +import os + +NSFW_THRESHOLD = 0.85 + +@dataclass +class SamplingOptions: + source_prompt: str + target_prompt: str + # prompt: str + width: int + height: int + num_steps: int + guidance: float + seed: int | None + +@torch.inference_mode() +def encode(init_image, torch_device, ae): + init_image = torch.from_numpy(init_image).permute(2, 0, 1).float() / 127.5 - 1 + init_image = init_image.unsqueeze(0) + init_image = 
init_image.to(torch_device) + init_image = ae.encode(init_image).to(torch.bfloat16) + return init_image + +@torch.inference_mode() +def main( + args, + seed: int | None = None, + device: str = "cuda" if torch.cuda.is_available() else "cpu", + num_steps: int | None = None, + loop: bool = False, + offload: bool = False, + add_sampling_metadata: bool = True, +): + """ + Sample the flux model. Either interactively (set `loop=True`) or run for a + single image. + + Args: + args: parsed command line arguments (model name, source image path, source/target prompts, guidance value, number of steps, feature path, output directory, offloading) + seed: Set a seed for sampling + device: PyTorch device + num_steps: number of sampling steps (default 4 for schnell, 25 otherwise) + loop: start an interactive session and sample multiple times + offload: move models to CPU when they are not in use + add_sampling_metadata: Add the prompt to the image Exif metadata + """ + torch.set_grad_enabled(False) + name = args.name + source_prompt = args.source_prompt + target_prompt = args.target_prompt + guidance = args.guidance + output_dir = args.output_dir + num_steps = args.num_steps + offload = args.offload + + nsfw_classifier = pipeline("image-classification", model="Falconsai/nsfw_image_detection", device=device) + + if name not in configs: + available = ", ".join(configs.keys()) + raise ValueError(f"Got unknown model name: {name}, choose from {available}") + + torch_device = torch.device(device) + if num_steps is None: + num_steps = 4 if name == "flux-schnell" else 25 + + # init all components + t5 = load_t5(torch_device, max_length=256 if name == "flux-schnell" else 512) + clip = load_clip(torch_device) + model = load_flow_model(name, device="cpu" if offload else torch_device) + ae = load_ae(name, device="cpu" if offload else torch_device) + + if offload: + model.cpu() + torch.cuda.empty_cache() + ae.encoder.to(torch_device) + + init_image = np.array(Image.open(args.source_img_dir).convert('RGB')) + + # crop the image so both sides are multiples of 16 + shape = init_image.shape + + new_h = shape[0] if shape[0] % 16 == 0 else shape[0] - shape[0] % 16 + new_w = shape[1] if shape[1] % 16 == 0 else shape[1] - shape[1] % 16 + + init_image = init_image[:new_h, :new_w, :] + + width, height = init_image.shape[0], init_image.shape[1] # note: shape[0] is the height and shape[1] the width; the swapped names are compensated for when unpack() is called below + init_image = encode(init_image, torch_device, ae) + + rng = torch.Generator(device="cpu") + opts = SamplingOptions( + source_prompt=source_prompt, + target_prompt=target_prompt, + width=width, + height=height, + num_steps=num_steps, + guidance=guidance, + seed=seed, + ) + + if loop: + opts = parse_prompt(opts) + + while opts is not None: + if opts.seed is None: + opts.seed = rng.seed() + print(f"Generating with seed {opts.seed}:\n{opts.source_prompt}") + t0 = time.perf_counter() + + opts.seed = None + if offload: + ae = ae.cpu() + torch.cuda.empty_cache() + t5, clip = t5.to(torch_device), clip.to(torch_device) + + info = {} + info['feature_path'] = args.feature_path + info['feature'] = {} + info['inject_step'] = args.inject + if not os.path.exists(args.feature_path): + os.mkdir(args.feature_path) + + inp = prepare(t5, clip, init_image, prompt=opts.source_prompt) + inp_target = prepare(t5, clip, init_image, prompt=opts.target_prompt) + timesteps = get_schedule(opts.num_steps, inp["img"].shape[1], shift=(name != "flux-schnell")) + + # offload TEs to CPU, load model to
gpu + if offload: + t5, clip = t5.cpu(), clip.cpu() + torch.cuda.empty_cache() + model = model.to(torch_device) + + # inversion initial noise + z, info = denoise(model, **inp, timesteps=timesteps, guidance=1, inverse=True, info=info) + + inp_target["img"] = z + + timesteps = get_schedule(opts.num_steps, inp_target["img"].shape[1], shift=(name != "flux-schnell")) + + # denoise initial noise + x, _ = denoise(model, **inp_target, timesteps=timesteps, guidance=guidance, inverse=False, info=info) + + if offload: + model.cpu() + torch.cuda.empty_cache() + ae.decoder.to(x.device) + + # decode latents to pixel space + batch_x = unpack(x.float(), opts.width, opts.height) + + for x in batch_x: + x = x.unsqueeze(0) + output_name = os.path.join(output_dir, "img_{idx}.jpg") + if not os.path.exists(output_dir): + os.makedirs(output_dir) + idx = 0 + else: + fns = [fn for fn in iglob(output_name.format(idx="*")) if re.search(r"img_[0-9]+\.jpg$", fn)] + if len(fns) > 0: + idx = max(int(fn.split("_")[-1].split(".")[0]) for fn in fns) + 1 + else: + idx = 0 + + with torch.autocast(device_type=torch_device.type, dtype=torch.bfloat16): + x = ae.decode(x) + + if torch.cuda.is_available(): + torch.cuda.synchronize() + t1 = time.perf_counter() + + fn = output_name.format(idx=idx) + print(f"Done in {t1 - t0:.1f}s. Saving {fn}") + # bring into PIL format and save + x = x.clamp(-1, 1) + x = embed_watermark(x.float()) + x = rearrange(x[0], "c h w -> h w c") + + img = Image.fromarray((127.5 * (x + 1.0)).cpu().byte().numpy()) + nsfw_score = [x["score"] for x in nsfw_classifier(img) if x["label"] == "nsfw"][0] + + if nsfw_score < NSFW_THRESHOLD: + exif_data = Image.Exif() + exif_data[ExifTags.Base.Software] = "AI generated;txt2img;flux" + exif_data[ExifTags.Base.Make] = "Black Forest Labs" + exif_data[ExifTags.Base.Model] = name + if add_sampling_metadata: + exif_data[ExifTags.Base.ImageDescription] = source_prompt + img.save(fn, exif=exif_data, quality=95, subsampling=0) + idx += 1 + else: + print("Your generated image may contain NSFW content.") + + if loop: + print("-" * 80) + opts = parse_prompt(opts) + else: + opts = None + +if __name__ == "__main__": + + parser = argparse.ArgumentParser(description='RF-Edit') + + parser.add_argument('--name', default='flux-dev', type=str, + help='flux model') + parser.add_argument('--source_img_dir', default='', type=str, + help='The path of the source image') + parser.add_argument('--source_prompt', type=str, + help='describe the content of the source image (or leaves it as null)') + parser.add_argument('--target_prompt', type=str, + help='describe the requirement of editing') + parser.add_argument('--feature_path', type=str, default='feature', + help='the path to save the feature ') + parser.add_argument('--guidance', type=float, default=5, + help='guidance scale') + parser.add_argument('--num_steps', type=int, default=25, + help='the number of timesteps for inversion and denoising') + parser.add_argument('--inject', type=int, default=20, + help='the number of timesteps which apply the feature sharing') + parser.add_argument('--output_dir', default='output', type=str, + help='the path of the edited image') + parser.add_argument('--offload', action='store_true', help='set it to True if the memory of GPU is not enough') + + args = parser.parse_args() + + main(args) diff --git a/src/examples/edit/boy.jpg b/src/examples/edit/boy.jpg new file mode 100644 index 0000000000000000000000000000000000000000..61d77437bf3af6729a7b7c3f4d6b50663679e49a Binary files /dev/null and 
b/src/examples/edit/boy.jpg differ diff --git a/src/examples/edit/hiking.jpg b/src/examples/edit/hiking.jpg new file mode 100644 index 0000000000000000000000000000000000000000..aa1081010558309925481db4079bb011996a30e3 Binary files /dev/null and b/src/examples/edit/hiking.jpg differ diff --git a/src/examples/edit/horse.jpg b/src/examples/edit/horse.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4c46d970462512f8313373fa66ad487487867761 Binary files /dev/null and b/src/examples/edit/horse.jpg differ diff --git a/src/examples/source/art.jpg b/src/examples/source/art.jpg new file mode 100644 index 0000000000000000000000000000000000000000..bd6e2d05ddf7ba90139f21cd4ce6a459b8de0dfe --- /dev/null +++ b/src/examples/source/art.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d4c7daf7d513265fe95efa65e4f4511e893bbd85c7ed30034548827fd6f5acc +size 1031315 diff --git a/src/examples/source/boy.jpg b/src/examples/source/boy.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3c5f3682d0e46c9286d0845bbdadc7b47b5a7c02 Binary files /dev/null and b/src/examples/source/boy.jpg differ diff --git a/src/examples/source/cartoon.jpg b/src/examples/source/cartoon.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4d6fb7079cfb0a88ea1a8cedac52e9dacee717bd Binary files /dev/null and b/src/examples/source/cartoon.jpg differ diff --git a/src/examples/source/hiking.jpg b/src/examples/source/hiking.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3ed1a4b784ecf7c04dcad24983816babc707915f Binary files /dev/null and b/src/examples/source/hiking.jpg differ diff --git a/src/examples/source/horse.jpg b/src/examples/source/horse.jpg new file mode 100644 index 0000000000000000000000000000000000000000..32de7f7988d5ede15a37de3635765937881c9c3c Binary files /dev/null and b/src/examples/source/horse.jpg differ diff --git a/src/examples/source/nobel.jpg b/src/examples/source/nobel.jpg new file mode 100644 index 0000000000000000000000000000000000000000..733b56bc4ba1e796c32d0ad8a83d97707cffc1c0 Binary files /dev/null and b/src/examples/source/nobel.jpg differ diff --git a/src/flux/__init__.py b/src/flux/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..43c365a49d6980e88acba10ef3069f110a59644a --- /dev/null +++ b/src/flux/__init__.py @@ -0,0 +1,11 @@ +try: + from ._version import version as __version__ # type: ignore + from ._version import version_tuple +except ImportError: + __version__ = "unknown (no version information available)" + version_tuple = (0, 0, "unknown", "noinfo") + +from pathlib import Path + +PACKAGE = __package__.replace("_", "-") +PACKAGE_ROOT = Path(__file__).parent diff --git a/src/flux/__main__.py b/src/flux/__main__.py new file mode 100644 index 0000000000000000000000000000000000000000..d5cf0fd2444d4cda4053fa74dad3371556b886e5 --- /dev/null +++ b/src/flux/__main__.py @@ -0,0 +1,4 @@ +from .cli import app + +if __name__ == "__main__": + app() diff --git a/src/flux/__pycache__/__init__.cpython-310.pyc b/src/flux/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..ec6a71952b52949b701cfcc15eef34793064a243 Binary files /dev/null and b/src/flux/__pycache__/__init__.cpython-310.pyc differ diff --git a/src/flux/__pycache__/math.cpython-310.pyc b/src/flux/__pycache__/math.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..5016ec1096d22c8d1a80dd6eed4e9d04cba605f9 Binary files /dev/null and 
b/src/flux/__pycache__/math.cpython-310.pyc differ diff --git a/src/flux/__pycache__/math.cpython-38.pyc b/src/flux/__pycache__/math.cpython-38.pyc new file mode 100644 index 0000000000000000000000000000000000000000..3d4256f4ee31428b5f678266de2261a4843b0d25 Binary files /dev/null and b/src/flux/__pycache__/math.cpython-38.pyc differ diff --git a/src/flux/__pycache__/model.cpython-310.pyc b/src/flux/__pycache__/model.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..fa496d2d79d1ec8b514202017d6e2ed636ddad03 Binary files /dev/null and b/src/flux/__pycache__/model.cpython-310.pyc differ diff --git a/src/flux/__pycache__/sampling.cpython-310.pyc b/src/flux/__pycache__/sampling.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..26439559ca9bfb0ea41ad1ed4263552a4da83a6d Binary files /dev/null and b/src/flux/__pycache__/sampling.cpython-310.pyc differ diff --git a/src/flux/__pycache__/util.cpython-310.pyc b/src/flux/__pycache__/util.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..172de007735ec381b252488d209431eac240771f Binary files /dev/null and b/src/flux/__pycache__/util.cpython-310.pyc differ diff --git a/src/flux/_version.py b/src/flux/_version.py new file mode 100644 index 0000000000000000000000000000000000000000..a6d002cd7349c63930a679a51ac8d17453090f7c --- /dev/null +++ b/src/flux/_version.py @@ -0,0 +1,16 @@ +# file generated by setuptools_scm +# don't change, don't track in version control +TYPE_CHECKING = False +if TYPE_CHECKING: + from typing import Tuple, Union + VERSION_TUPLE = Tuple[Union[int, str], ...] +else: + VERSION_TUPLE = object + +version: str +__version__: str +__version_tuple__: VERSION_TUPLE +version_tuple: VERSION_TUPLE + +__version__ = version = '0.0.post0+d20241105' +__version_tuple__ = version_tuple = (0, 0, 'd20241105') diff --git a/src/flux/api.py b/src/flux/api.py new file mode 100644 index 0000000000000000000000000000000000000000..b08202adb35d2ffae320bb9b47f567e538837836 --- /dev/null +++ b/src/flux/api.py @@ -0,0 +1,194 @@ +import io +import os +import time +from pathlib import Path + +import requests +from PIL import Image + +API_ENDPOINT = "https://api.bfl.ml" + + +class ApiException(Exception): + def __init__(self, status_code: int, detail: str | list[dict] | None = None): + super().__init__() + self.detail = detail + self.status_code = status_code + + def __str__(self) -> str: + return self.__repr__() + + def __repr__(self) -> str: + if self.detail is None: + message = None + elif isinstance(self.detail, str): + message = self.detail + else: + message = "[" + ",".join(d["msg"] for d in self.detail) + "]" + return f"ApiException({self.status_code=}, {message=}, detail={self.detail})" + + +class ImageRequest: + def __init__( + self, + prompt: str, + width: int = 1024, + height: int = 1024, + name: str = "flux.1-pro", + num_steps: int = 50, + prompt_upsampling: bool = False, + seed: int | None = None, + validate: bool = True, + launch: bool = True, + api_key: str | None = None, + ): + """ + Manages an image generation request to the API. 
+ + Args: + prompt: Prompt to sample + width: Width of the image in pixel + height: Height of the image in pixel + name: Name of the model + num_steps: Number of network evaluations + prompt_upsampling: Use prompt upsampling + seed: Fix the generation seed + validate: Run input validation + launch: Directly launches request + api_key: Your API key if not provided by the environment + + Raises: + ValueError: For invalid input + ApiException: For errors raised from the API + """ + if validate: + if name not in ["flux.1-pro"]: + raise ValueError(f"Invalid model {name}") + elif width % 32 != 0: + raise ValueError(f"width must be divisible by 32, got {width}") + elif not (256 <= width <= 1440): + raise ValueError(f"width must be between 256 and 1440, got {width}") + elif height % 32 != 0: + raise ValueError(f"height must be divisible by 32, got {height}") + elif not (256 <= height <= 1440): + raise ValueError(f"height must be between 256 and 1440, got {height}") + elif not (1 <= num_steps <= 50): + raise ValueError(f"steps must be between 1 and 50, got {num_steps}") + + self.request_json = { + "prompt": prompt, + "width": width, + "height": height, + "variant": name, + "steps": num_steps, + "prompt_upsampling": prompt_upsampling, + } + if seed is not None: + self.request_json["seed"] = seed + + self.request_id: str | None = None + self.result: dict | None = None + self._image_bytes: bytes | None = None + self._url: str | None = None + if api_key is None: + self.api_key = os.environ.get("BFL_API_KEY") + else: + self.api_key = api_key + + if launch: + self.request() + + def request(self): + """ + Request to generate the image. + """ + if self.request_id is not None: + return + response = requests.post( + f"{API_ENDPOINT}/v1/image", + headers={ + "accept": "application/json", + "x-key": self.api_key, + "Content-Type": "application/json", + }, + json=self.request_json, + ) + result = response.json() + if response.status_code != 200: + raise ApiException(status_code=response.status_code, detail=result.get("detail")) + self.request_id = response.json()["id"] + + def retrieve(self) -> dict: + """ + Wait for the generation to finish and retrieve response. + """ + if self.request_id is None: + self.request() + while self.result is None: + response = requests.get( + f"{API_ENDPOINT}/v1/get_result", + headers={ + "accept": "application/json", + "x-key": self.api_key, + }, + params={ + "id": self.request_id, + }, + ) + result = response.json() + if "status" not in result: + raise ApiException(status_code=response.status_code, detail=result.get("detail")) + elif result["status"] == "Ready": + self.result = result["result"] + elif result["status"] == "Pending": + time.sleep(0.5) + else: + raise ApiException(status_code=200, detail=f"API returned status '{result['status']}'") + return self.result + + @property + def bytes(self) -> bytes: + """ + Generated image as bytes. 
+ """ + if self._image_bytes is None: + response = requests.get(self.url) + if response.status_code == 200: + self._image_bytes = response.content + else: + raise ApiException(status_code=response.status_code) + return self._image_bytes + + @property + def url(self) -> str: + """ + Public url to retrieve the image from + """ + if self._url is None: + result = self.retrieve() + self._url = result["sample"] + return self._url + + @property + def image(self) -> Image.Image: + """ + Load the image as a PIL Image + """ + return Image.open(io.BytesIO(self.bytes)) + + def save(self, path: str): + """ + Save the generated image to a local path + """ + suffix = Path(self.url).suffix + if not path.endswith(suffix): + path = path + suffix + Path(path).resolve().parent.mkdir(parents=True, exist_ok=True) + with open(path, "wb") as file: + file.write(self.bytes) + + +if __name__ == "__main__": + from fire import Fire + + Fire(ImageRequest) diff --git a/src/flux/math.py b/src/flux/math.py new file mode 100644 index 0000000000000000000000000000000000000000..5f4b67032ec24525c1caf2367803f1d473800197 --- /dev/null +++ b/src/flux/math.py @@ -0,0 +1,29 @@ +import torch +from einops import rearrange +from torch import Tensor + + +def attention(q: Tensor, k: Tensor, v: Tensor, pe: Tensor) -> Tensor: + q, k = apply_rope(q, k, pe) + + x = torch.nn.functional.scaled_dot_product_attention(q, k, v) + x = rearrange(x, "B H L D -> B L (H D)") + + return x + +def rope(pos: Tensor, dim: int, theta: int) -> Tensor: + assert dim % 2 == 0 + scale = torch.arange(0, dim, 2, dtype=torch.float64, device=pos.device) / dim + omega = 1.0 / (theta**scale) + out = torch.einsum("...n,d->...nd", pos, omega) + out = torch.stack([torch.cos(out), -torch.sin(out), torch.sin(out), torch.cos(out)], dim=-1) + out = rearrange(out, "b n d (i j) -> b n d i j", i=2, j=2) + return out.float() + + +def apply_rope(xq: Tensor, xk: Tensor, freqs_cis: Tensor) -> tuple[Tensor, Tensor]: + xq_ = xq.float().reshape(*xq.shape[:-1], -1, 1, 2) + xk_ = xk.float().reshape(*xk.shape[:-1], -1, 1, 2) + xq_out = freqs_cis[..., 0] * xq_[..., 0] + freqs_cis[..., 1] * xq_[..., 1] + xk_out = freqs_cis[..., 0] * xk_[..., 0] + freqs_cis[..., 1] * xk_[..., 1] + return xq_out.reshape(*xq.shape).type_as(xq), xk_out.reshape(*xk.shape).type_as(xk) diff --git a/src/flux/model.py b/src/flux/model.py new file mode 100644 index 0000000000000000000000000000000000000000..94d0cb688e80bd546e643705d798db2739e042e2 --- /dev/null +++ b/src/flux/model.py @@ -0,0 +1,118 @@ +from dataclasses import dataclass + +import torch +from torch import Tensor, nn + +from flux.modules.layers import (DoubleStreamBlock, EmbedND, LastLayer, + MLPEmbedder, SingleStreamBlock, + timestep_embedding) + + +@dataclass +class FluxParams: + in_channels: int + vec_in_dim: int + context_in_dim: int + hidden_size: int + mlp_ratio: float + num_heads: int + depth: int + depth_single_blocks: int + axes_dim: list[int] + theta: int + qkv_bias: bool + guidance_embed: bool + + +class Flux(nn.Module): + """ + Transformer model for flow matching on sequences. 
+ """ + + def __init__(self, params: FluxParams): + super().__init__() + + self.params = params + self.in_channels = params.in_channels + self.out_channels = self.in_channels + if params.hidden_size % params.num_heads != 0: + raise ValueError( + f"Hidden size {params.hidden_size} must be divisible by num_heads {params.num_heads}" + ) + pe_dim = params.hidden_size // params.num_heads + if sum(params.axes_dim) != pe_dim: + raise ValueError(f"Got {params.axes_dim} but expected positional dim {pe_dim}") + self.hidden_size = params.hidden_size + self.num_heads = params.num_heads + self.pe_embedder = EmbedND(dim=pe_dim, theta=params.theta, axes_dim=params.axes_dim) + self.img_in = nn.Linear(self.in_channels, self.hidden_size, bias=True) + self.time_in = MLPEmbedder(in_dim=256, hidden_dim=self.hidden_size) + self.vector_in = MLPEmbedder(params.vec_in_dim, self.hidden_size) + self.guidance_in = ( + MLPEmbedder(in_dim=256, hidden_dim=self.hidden_size) if params.guidance_embed else nn.Identity() + ) + self.txt_in = nn.Linear(params.context_in_dim, self.hidden_size) + + self.double_blocks = nn.ModuleList( + [ + DoubleStreamBlock( + self.hidden_size, + self.num_heads, + mlp_ratio=params.mlp_ratio, + qkv_bias=params.qkv_bias, + ) + for _ in range(params.depth) + ] + ) + + self.single_blocks = nn.ModuleList( + [ + SingleStreamBlock(self.hidden_size, self.num_heads, mlp_ratio=params.mlp_ratio) + for _ in range(params.depth_single_blocks) + ] + ) + + self.final_layer = LastLayer(self.hidden_size, 1, self.out_channels) + + def forward( + self, + img: Tensor, + img_ids: Tensor, + txt: Tensor, + txt_ids: Tensor, + timesteps: Tensor, + y: Tensor, + guidance: Tensor | None = None, + info = None, + ) -> Tensor: + if img.ndim != 3 or txt.ndim != 3: + raise ValueError("Input img and txt tensors must have 3 dimensions.") + + # running on sequences img + img = self.img_in(img) + vec = self.time_in(timestep_embedding(timesteps, 256)) + if self.params.guidance_embed: + if guidance is None: + raise ValueError("Didn't get guidance strength for guidance distilled model.") + vec = vec + self.guidance_in(timestep_embedding(guidance, 256)) + vec = vec + self.vector_in(y) + txt = self.txt_in(txt) + + ids = torch.cat((txt_ids, img_ids), dim=1) + pe = self.pe_embedder(ids) + + for block in self.double_blocks: + img, txt = block(img=img, txt=txt, vec=vec, pe=pe, info=info) + + cnt = 0 + img = torch.cat((txt, img), 1) + info['type'] = 'single' + for block in self.single_blocks: + info['id'] = cnt + img, info = block(img, vec=vec, pe=pe, info=info) + cnt += 1 + + img = img[:, txt.shape[1] :, ...] 
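+ # the sequence was assembled as cat(txt, img) above, so dropping the first txt.shape[1] tokens keeps only the image tokens for the final projection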
+ + img = self.final_layer(img, vec) # (N, T, patch_size ** 2 * out_channels) + return img, info diff --git a/src/flux/modules/__pycache__/autoencoder.cpython-310.pyc b/src/flux/modules/__pycache__/autoencoder.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..93b83675b726c9cf9f2c3e1d18019252d8535418 Binary files /dev/null and b/src/flux/modules/__pycache__/autoencoder.cpython-310.pyc differ diff --git a/src/flux/modules/__pycache__/conditioner.cpython-310.pyc b/src/flux/modules/__pycache__/conditioner.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..54788b3d472755d75fe72726e714d75b4a98f85f Binary files /dev/null and b/src/flux/modules/__pycache__/conditioner.cpython-310.pyc differ diff --git a/src/flux/modules/__pycache__/layers.cpython-310.pyc b/src/flux/modules/__pycache__/layers.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..457d85953dfe27da02b9f66e98749082c7f4978d Binary files /dev/null and b/src/flux/modules/__pycache__/layers.cpython-310.pyc differ diff --git a/src/flux/modules/autoencoder.py b/src/flux/modules/autoencoder.py new file mode 100644 index 0000000000000000000000000000000000000000..86bdec01bd09c872721fe267fe1bd83d32d5fdec --- /dev/null +++ b/src/flux/modules/autoencoder.py @@ -0,0 +1,313 @@ +from dataclasses import dataclass + +import torch +from einops import rearrange +from torch import Tensor, nn + + +@dataclass +class AutoEncoderParams: + resolution: int + in_channels: int + ch: int + out_ch: int + ch_mult: list[int] + num_res_blocks: int + z_channels: int + scale_factor: float + shift_factor: float + + +def swish(x: Tensor) -> Tensor: + return x * torch.sigmoid(x) + + +class AttnBlock(nn.Module): + def __init__(self, in_channels: int): + super().__init__() + self.in_channels = in_channels + + self.norm = nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) + + self.q = nn.Conv2d(in_channels, in_channels, kernel_size=1) + self.k = nn.Conv2d(in_channels, in_channels, kernel_size=1) + self.v = nn.Conv2d(in_channels, in_channels, kernel_size=1) + self.proj_out = nn.Conv2d(in_channels, in_channels, kernel_size=1) + + def attention(self, h_: Tensor) -> Tensor: + h_ = self.norm(h_) + q = self.q(h_) + k = self.k(h_) + v = self.v(h_) + + b, c, h, w = q.shape + q = rearrange(q, "b c h w -> b 1 (h w) c").contiguous() + k = rearrange(k, "b c h w -> b 1 (h w) c").contiguous() + v = rearrange(v, "b c h w -> b 1 (h w) c").contiguous() + h_ = nn.functional.scaled_dot_product_attention(q, k, v) + + return rearrange(h_, "b 1 (h w) c -> b c h w", h=h, w=w, c=c, b=b) + + def forward(self, x: Tensor) -> Tensor: + return x + self.proj_out(self.attention(x)) + + +class ResnetBlock(nn.Module): + def __init__(self, in_channels: int, out_channels: int): + super().__init__() + self.in_channels = in_channels + out_channels = in_channels if out_channels is None else out_channels + self.out_channels = out_channels + + self.norm1 = nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) + self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) + self.norm2 = nn.GroupNorm(num_groups=32, num_channels=out_channels, eps=1e-6, affine=True) + self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1) + if self.in_channels != self.out_channels: + self.nin_shortcut = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0) + + def forward(self, x): + h = x + h = self.norm1(h) + h = 
swish(h) + h = self.conv1(h) + + h = self.norm2(h) + h = swish(h) + h = self.conv2(h) + + if self.in_channels != self.out_channels: + x = self.nin_shortcut(x) + + return x + h + + +class Downsample(nn.Module): + def __init__(self, in_channels: int): + super().__init__() + # no asymmetric padding in torch conv, must do it ourselves + self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=2, padding=0) + + def forward(self, x: Tensor): + pad = (0, 1, 0, 1) + x = nn.functional.pad(x, pad, mode="constant", value=0) + x = self.conv(x) + return x + + +class Upsample(nn.Module): + def __init__(self, in_channels: int): + super().__init__() + self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1) + + def forward(self, x: Tensor): + x = nn.functional.interpolate(x, scale_factor=2.0, mode="nearest") + x = self.conv(x) + return x + + +class Encoder(nn.Module): + def __init__( + self, + resolution: int, + in_channels: int, + ch: int, + ch_mult: list[int], + num_res_blocks: int, + z_channels: int, + ): + super().__init__() + self.ch = ch + self.num_resolutions = len(ch_mult) + self.num_res_blocks = num_res_blocks + self.resolution = resolution + self.in_channels = in_channels + # downsampling + self.conv_in = nn.Conv2d(in_channels, self.ch, kernel_size=3, stride=1, padding=1) + + curr_res = resolution + in_ch_mult = (1,) + tuple(ch_mult) + self.in_ch_mult = in_ch_mult + self.down = nn.ModuleList() + block_in = self.ch + for i_level in range(self.num_resolutions): + block = nn.ModuleList() + attn = nn.ModuleList() + block_in = ch * in_ch_mult[i_level] + block_out = ch * ch_mult[i_level] + for _ in range(self.num_res_blocks): + block.append(ResnetBlock(in_channels=block_in, out_channels=block_out)) + block_in = block_out + down = nn.Module() + down.block = block + down.attn = attn + if i_level != self.num_resolutions - 1: + down.downsample = Downsample(block_in) + curr_res = curr_res // 2 + self.down.append(down) + + # middle + self.mid = nn.Module() + self.mid.block_1 = ResnetBlock(in_channels=block_in, out_channels=block_in) + self.mid.attn_1 = AttnBlock(block_in) + self.mid.block_2 = ResnetBlock(in_channels=block_in, out_channels=block_in) + + # end + self.norm_out = nn.GroupNorm(num_groups=32, num_channels=block_in, eps=1e-6, affine=True) + self.conv_out = nn.Conv2d(block_in, 2 * z_channels, kernel_size=3, stride=1, padding=1) + + def forward(self, x: Tensor) -> Tensor: + # downsampling + hs = [self.conv_in(x)] + for i_level in range(self.num_resolutions): + for i_block in range(self.num_res_blocks): + h = self.down[i_level].block[i_block](hs[-1]) + if len(self.down[i_level].attn) > 0: + h = self.down[i_level].attn[i_block](h) + hs.append(h) + if i_level != self.num_resolutions - 1: + hs.append(self.down[i_level].downsample(hs[-1])) + + # middle + h = hs[-1] + h = self.mid.block_1(h) + h = self.mid.attn_1(h) + h = self.mid.block_2(h) + # end + h = self.norm_out(h) + h = swish(h) + h = self.conv_out(h) + return h + + +class Decoder(nn.Module): + def __init__( + self, + ch: int, + out_ch: int, + ch_mult: list[int], + num_res_blocks: int, + in_channels: int, + resolution: int, + z_channels: int, + ): + super().__init__() + self.ch = ch + self.num_resolutions = len(ch_mult) + self.num_res_blocks = num_res_blocks + self.resolution = resolution + self.in_channels = in_channels + self.ffactor = 2 ** (self.num_resolutions - 1) + + # compute in_ch_mult, block_in and curr_res at lowest res + block_in = ch * ch_mult[self.num_resolutions - 1] + curr_res = resolution // 
2 ** (self.num_resolutions - 1) + self.z_shape = (1, z_channels, curr_res, curr_res) + + # z to block_in + self.conv_in = nn.Conv2d(z_channels, block_in, kernel_size=3, stride=1, padding=1) + + # middle + self.mid = nn.Module() + self.mid.block_1 = ResnetBlock(in_channels=block_in, out_channels=block_in) + self.mid.attn_1 = AttnBlock(block_in) + self.mid.block_2 = ResnetBlock(in_channels=block_in, out_channels=block_in) + + # upsampling + self.up = nn.ModuleList() + for i_level in reversed(range(self.num_resolutions)): + block = nn.ModuleList() + attn = nn.ModuleList() + block_out = ch * ch_mult[i_level] + for _ in range(self.num_res_blocks + 1): + block.append(ResnetBlock(in_channels=block_in, out_channels=block_out)) + block_in = block_out + up = nn.Module() + up.block = block + up.attn = attn + if i_level != 0: + up.upsample = Upsample(block_in) + curr_res = curr_res * 2 + self.up.insert(0, up) # prepend to get consistent order + + # end + self.norm_out = nn.GroupNorm(num_groups=32, num_channels=block_in, eps=1e-6, affine=True) + self.conv_out = nn.Conv2d(block_in, out_ch, kernel_size=3, stride=1, padding=1) + + def forward(self, z: Tensor) -> Tensor: + # z to block_in + h = self.conv_in(z) + + # middle + h = self.mid.block_1(h) + h = self.mid.attn_1(h) + h = self.mid.block_2(h) + + # upsampling + for i_level in reversed(range(self.num_resolutions)): + for i_block in range(self.num_res_blocks + 1): + h = self.up[i_level].block[i_block](h) + if len(self.up[i_level].attn) > 0: + h = self.up[i_level].attn[i_block](h) + if i_level != 0: + h = self.up[i_level].upsample(h) + + # end + h = self.norm_out(h) + h = swish(h) + h = self.conv_out(h) + return h + + +class DiagonalGaussian(nn.Module): + def __init__(self, sample: bool = True, chunk_dim: int = 1): + super().__init__() + self.sample = sample + self.chunk_dim = chunk_dim + + def forward(self, z: Tensor) -> Tensor: + mean, logvar = torch.chunk(z, 2, dim=self.chunk_dim) + # import pdb;pdb.set_trace() + if self.sample: + std = torch.exp(0.5 * logvar) + return mean #+ std * torch.randn_like(mean) + else: + return mean + + +class AutoEncoder(nn.Module): + def __init__(self, params: AutoEncoderParams): + super().__init__() + self.encoder = Encoder( + resolution=params.resolution, + in_channels=params.in_channels, + ch=params.ch, + ch_mult=params.ch_mult, + num_res_blocks=params.num_res_blocks, + z_channels=params.z_channels, + ) + self.decoder = Decoder( + resolution=params.resolution, + in_channels=params.in_channels, + ch=params.ch, + out_ch=params.out_ch, + ch_mult=params.ch_mult, + num_res_blocks=params.num_res_blocks, + z_channels=params.z_channels, + ) + self.reg = DiagonalGaussian() + + self.scale_factor = params.scale_factor + self.shift_factor = params.shift_factor + + def encode(self, x: Tensor) -> Tensor: + z = self.reg(self.encoder(x)) + z = self.scale_factor * (z - self.shift_factor) + return z + + def decode(self, z: Tensor) -> Tensor: + z = z / self.scale_factor + self.shift_factor + return self.decoder(z) + + def forward(self, x: Tensor) -> Tensor: + return self.decode(self.encode(x)) diff --git a/src/flux/modules/conditioner.py b/src/flux/modules/conditioner.py new file mode 100644 index 0000000000000000000000000000000000000000..8b58f67b633d28a04e3d9342fce62ee1750e76f5 --- /dev/null +++ b/src/flux/modules/conditioner.py @@ -0,0 +1,38 @@ +from torch import Tensor, nn +from transformers import (CLIPTextModel, CLIPTokenizer, T5EncoderModel, + T5Tokenizer) + + +class HFEmbedder(nn.Module): + def __init__(self, version: str, 
max_length: int, is_clip, **hf_kwargs): + super().__init__() + self.is_clip = is_clip + self.max_length = max_length + self.output_key = "pooler_output" if self.is_clip else "last_hidden_state" + + if self.is_clip: + self.tokenizer: CLIPTokenizer = CLIPTokenizer.from_pretrained(version, max_length=max_length) + self.hf_module: CLIPTextModel = CLIPTextModel.from_pretrained(version, **hf_kwargs) + else: + self.tokenizer: T5Tokenizer = T5Tokenizer.from_pretrained(version, max_length=max_length) + self.hf_module: T5EncoderModel = T5EncoderModel.from_pretrained(version, **hf_kwargs) + + self.hf_module = self.hf_module.eval().requires_grad_(False) + + def forward(self, text: list[str]) -> Tensor: + batch_encoding = self.tokenizer( + text, + truncation=True, + max_length=self.max_length, + return_length=False, + return_overflowing_tokens=False, + padding="max_length", + return_tensors="pt", + ) + + outputs = self.hf_module( + input_ids=batch_encoding["input_ids"].to(self.hf_module.device), + attention_mask=None, + output_hidden_states=False, + ) + return outputs[self.output_key] diff --git a/src/flux/modules/layers.py b/src/flux/modules/layers.py new file mode 100644 index 0000000000000000000000000000000000000000..a9f822e6ab15d54040de521f67eecd00c475535e --- /dev/null +++ b/src/flux/modules/layers.py @@ -0,0 +1,280 @@ +import math +from dataclasses import dataclass + +import torch +from einops import rearrange +from torch import Tensor, nn + +from flux.math import attention, rope + +import os + +class EmbedND(nn.Module): + def __init__(self, dim: int, theta: int, axes_dim: list[int]): + super().__init__() + self.dim = dim + self.theta = theta + self.axes_dim = axes_dim + + def forward(self, ids: Tensor) -> Tensor: + n_axes = ids.shape[-1] + emb = torch.cat( + [rope(ids[..., i], self.axes_dim[i], self.theta) for i in range(n_axes)], + dim=-3, + ) + + return emb.unsqueeze(1) + + +def timestep_embedding(t: Tensor, dim, max_period=10000, time_factor: float = 1000.0): + """ + Create sinusoidal timestep embeddings. + :param t: a 1-D Tensor of N indices, one per batch element. + These may be fractional. + :param dim: the dimension of the output. + :param max_period: controls the minimum frequency of the embeddings. + :return: an (N, D) Tensor of positional embeddings. 
+ """ + t = time_factor * t + half = dim // 2 + freqs = torch.exp(-math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half).to( + t.device + ) + + args = t[:, None].float() * freqs[None] + embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) + if dim % 2: + embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1) + if torch.is_floating_point(t): + embedding = embedding.to(t) + return embedding + + +class MLPEmbedder(nn.Module): + def __init__(self, in_dim: int, hidden_dim: int): + super().__init__() + self.in_layer = nn.Linear(in_dim, hidden_dim, bias=True) + self.silu = nn.SiLU() + self.out_layer = nn.Linear(hidden_dim, hidden_dim, bias=True) + + def forward(self, x: Tensor) -> Tensor: + return self.out_layer(self.silu(self.in_layer(x))) + + +class RMSNorm(torch.nn.Module): + def __init__(self, dim: int): + super().__init__() + self.scale = nn.Parameter(torch.ones(dim)) + + def forward(self, x: Tensor): + x_dtype = x.dtype + x = x.float() + rrms = torch.rsqrt(torch.mean(x**2, dim=-1, keepdim=True) + 1e-6) + return (x * rrms).to(dtype=x_dtype) * self.scale + + +class QKNorm(torch.nn.Module): + def __init__(self, dim: int): + super().__init__() + self.query_norm = RMSNorm(dim) + self.key_norm = RMSNorm(dim) + + def forward(self, q: Tensor, k: Tensor, v: Tensor) -> tuple[Tensor, Tensor]: + q = self.query_norm(q) + k = self.key_norm(k) + return q.to(v), k.to(v) + + +class SelfAttention(nn.Module): + def __init__(self, dim: int, num_heads: int = 8, qkv_bias: bool = False): + super().__init__() + self.num_heads = num_heads + head_dim = dim // num_heads + + self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) + self.norm = QKNorm(head_dim) + self.proj = nn.Linear(dim, dim) + + def forward(self, x: Tensor, pe: Tensor) -> Tensor: + qkv = self.qkv(x) + q, k, v = rearrange(qkv, "B L (K H D) -> K B H L D", K=3, H=self.num_heads) + q, k = self.norm(q, k, v) + x = attention(q, k, v, pe=pe) + x = self.proj(x) + return x + + +@dataclass +class ModulationOut: + shift: Tensor + scale: Tensor + gate: Tensor + + +class Modulation(nn.Module): + def __init__(self, dim: int, double: bool): + super().__init__() + self.is_double = double + self.multiplier = 6 if double else 3 + self.lin = nn.Linear(dim, self.multiplier * dim, bias=True) + + def forward(self, vec: Tensor) -> tuple[ModulationOut, ModulationOut | None]: + out = self.lin(nn.functional.silu(vec))[:, None, :].chunk(self.multiplier, dim=-1) + + return ( + ModulationOut(*out[:3]), + ModulationOut(*out[3:]) if self.is_double else None, + ) + + +class DoubleStreamBlock(nn.Module): + def __init__(self, hidden_size: int, num_heads: int, mlp_ratio: float, qkv_bias: bool = False): + super().__init__() + + mlp_hidden_dim = int(hidden_size * mlp_ratio) + self.num_heads = num_heads + self.hidden_size = hidden_size + self.img_mod = Modulation(hidden_size, double=True) + self.img_norm1 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6) + self.img_attn = SelfAttention(dim=hidden_size, num_heads=num_heads, qkv_bias=qkv_bias) + + self.img_norm2 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6) + self.img_mlp = nn.Sequential( + nn.Linear(hidden_size, mlp_hidden_dim, bias=True), + nn.GELU(approximate="tanh"), + nn.Linear(mlp_hidden_dim, hidden_size, bias=True), + ) + + self.txt_mod = Modulation(hidden_size, double=True) + self.txt_norm1 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6) + self.txt_attn = SelfAttention(dim=hidden_size, num_heads=num_heads, 
qkv_bias=qkv_bias) + + self.txt_norm2 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6) + self.txt_mlp = nn.Sequential( + nn.Linear(hidden_size, mlp_hidden_dim, bias=True), + nn.GELU(approximate="tanh"), + nn.Linear(mlp_hidden_dim, hidden_size, bias=True), + ) + + def forward(self, img: Tensor, txt: Tensor, vec: Tensor, pe: Tensor, info) -> tuple[Tensor, Tensor]: + img_mod1, img_mod2 = self.img_mod(vec) + txt_mod1, txt_mod2 = self.txt_mod(vec) + + # prepare image for attention + img_modulated = self.img_norm1(img) + img_modulated = (1 + img_mod1.scale) * img_modulated + img_mod1.shift + img_qkv = self.img_attn.qkv(img_modulated) + img_q, img_k, img_v = rearrange(img_qkv, "B L (K H D) -> K B H L D", K=3, H=self.num_heads) + + img_q, img_k = self.img_attn.norm(img_q, img_k, img_v) + + # if info['inject']: + # if info['inverse']: + # print("!save! ",info['feature_path'] + str(info['t']) + '_' + str(info['second_order']) + '_' + str(info['id']) + '_' + info['type']) + # torch.save(img_q, info['feature_path'] + str(info['t']) + '_' + str(info['second_order']) + '_' + str(info['id']) + '_' + info['type'] + '_' + 'Q' + '.pth') + # if not info['inverse']: + # print("!load! ", info['feature_path'] + str(info['t']) + '_' + str(info['second_order']) + '_' + str(info['id']) + '_' + info['type']) + # img_q = torch.load(info['feature_path'] + str(info['t']) + '_' + str(info['second_order']) + '_' + str(info['id']) + '_' + info['type'] + '_' + 'Q' + '.pth', weights_only=True) + + # prepare txt for attention + txt_modulated = self.txt_norm1(txt) + txt_modulated = (1 + txt_mod1.scale) * txt_modulated + txt_mod1.shift + txt_qkv = self.txt_attn.qkv(txt_modulated) + txt_q, txt_k, txt_v = rearrange(txt_qkv, "B L (K H D) -> K B H L D", K=3, H=self.num_heads) + txt_q, txt_k = self.txt_attn.norm(txt_q, txt_k, txt_v) + + # run actual attention + q = torch.cat((txt_q, img_q), dim=2) #[8, 24, 512, 128] + [8, 24, 900, 128] -> [8, 24, 1412, 128] + k = torch.cat((txt_k, img_k), dim=2) + v = torch.cat((txt_v, img_v), dim=2) + # import pdb;pdb.set_trace() + attn = attention(q, k, v, pe=pe) + + txt_attn, img_attn = attn[:, : txt.shape[1]], attn[:, txt.shape[1] :] + + # calculate the img bloks + img = img + img_mod1.gate * self.img_attn.proj(img_attn) + img = img + img_mod2.gate * self.img_mlp((1 + img_mod2.scale) * self.img_norm2(img) + img_mod2.shift) + + # calculate the txt bloks + txt = txt + txt_mod1.gate * self.txt_attn.proj(txt_attn) + txt = txt + txt_mod2.gate * self.txt_mlp((1 + txt_mod2.scale) * self.txt_norm2(txt) + txt_mod2.shift) + return img, txt + + +class SingleStreamBlock(nn.Module): + """ + A DiT block with parallel linear layers as described in + https://arxiv.org/abs/2302.05442 and adapted modulation interface. 
+ """ + + def __init__( + self, + hidden_size: int, + num_heads: int, + mlp_ratio: float = 4.0, + qk_scale: float | None = None, + ): + super().__init__() + self.hidden_dim = hidden_size + self.num_heads = num_heads + head_dim = hidden_size // num_heads + self.scale = qk_scale or head_dim**-0.5 + + self.mlp_hidden_dim = int(hidden_size * mlp_ratio) + # qkv and mlp_in + self.linear1 = nn.Linear(hidden_size, hidden_size * 3 + self.mlp_hidden_dim) + # proj and mlp_out + self.linear2 = nn.Linear(hidden_size + self.mlp_hidden_dim, hidden_size) + + self.norm = QKNorm(head_dim) + + self.hidden_size = hidden_size + self.pre_norm = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6) + + self.mlp_act = nn.GELU(approximate="tanh") + self.modulation = Modulation(hidden_size, double=False) + + def forward(self, x: Tensor, vec: Tensor, pe: Tensor, info) -> Tensor: + mod, _ = self.modulation(vec) + x_mod = (1 + mod.scale) * self.pre_norm(x) + mod.shift + qkv, mlp = torch.split(self.linear1(x_mod), [3 * self.hidden_size, self.mlp_hidden_dim], dim=-1) + + q, k, v = rearrange(qkv, "B L (K H D) -> K B H L D", K=3, H=self.num_heads) + q, k = self.norm(q, k, v) + + # Note: If the memory of your device is not enough, you may consider uncomment the following code. + # if info['inject'] and info['id'] > 19: + # store_path = os.path.join(info['feature_path'], str(info['t']) + '_' + str(info['second_order']) + '_' + str(info['id']) + '_' + info['type'] + '_' + 'V' + '.pth') + # if info['inverse']: + # torch.save(v, store_path) + # if not info['inverse']: + # v = torch.load(store_path, weights_only=True) + + # Save the features in the memory + if info['inject'] and info['id'] > 19: + feature_name = str(info['t']) + '_' + str(info['second_order']) + '_' + str(info['id']) + '_' + info['type'] + '_' + 'V' + if info['inverse']: + info['feature'][feature_name] = v.cpu() + else: + v = info['feature'][feature_name].cuda() + + # compute attention + attn = attention(q, k, v, pe=pe) + # compute activation in mlp stream, cat again and run second linear layer + output = self.linear2(torch.cat((attn, self.mlp_act(mlp)), 2)) + return x + mod.gate * output, info + + +class LastLayer(nn.Module): + def __init__(self, hidden_size: int, patch_size: int, out_channels: int): + super().__init__() + self.norm_final = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6) + self.linear = nn.Linear(hidden_size, patch_size * patch_size * out_channels, bias=True) + self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(hidden_size, 2 * hidden_size, bias=True)) + + def forward(self, x: Tensor, vec: Tensor) -> Tensor: + shift, scale = self.adaLN_modulation(vec).chunk(2, dim=1) + x = (1 + scale[:, None, :]) * self.norm_final(x) + shift[:, None, :] + x = self.linear(x) + return x diff --git a/src/flux/sampling.py b/src/flux/sampling.py new file mode 100644 index 0000000000000000000000000000000000000000..2be1d5428504c930a58e51707d26cbb370ade6ce --- /dev/null +++ b/src/flux/sampling.py @@ -0,0 +1,147 @@ +import math +from typing import Callable + +import torch +from einops import rearrange, repeat +from torch import Tensor + +from .model import Flux +from .modules.conditioner import HFEmbedder + + +def prepare(t5: HFEmbedder, clip: HFEmbedder, img: Tensor, prompt: str | list[str]) -> dict[str, Tensor]: + bs, c, h, w = img.shape + if bs == 1 and not isinstance(prompt, str): + bs = len(prompt) + + img = rearrange(img, "b c (h ph) (w pw) -> b (h w) (c ph pw)", ph=2, pw=2) + if img.shape[0] == 1 and bs > 1: + img = repeat(img, "1 
... -> bs ...", bs=bs) + + img_ids = torch.zeros(h // 2, w // 2, 3) + img_ids[..., 1] = img_ids[..., 1] + torch.arange(h // 2)[:, None] + img_ids[..., 2] = img_ids[..., 2] + torch.arange(w // 2)[None, :] + img_ids = repeat(img_ids, "h w c -> b (h w) c", b=bs) + + if isinstance(prompt, str): + prompt = [prompt] + txt = t5(prompt) + if txt.shape[0] == 1 and bs > 1: + txt = repeat(txt, "1 ... -> bs ...", bs=bs) + txt_ids = torch.zeros(bs, txt.shape[1], 3) + + vec = clip(prompt) + if vec.shape[0] == 1 and bs > 1: + vec = repeat(vec, "1 ... -> bs ...", bs=bs) + + return { + "img": img, + "img_ids": img_ids.to(img.device), + "txt": txt.to(img.device), + "txt_ids": txt_ids.to(img.device), + "vec": vec.to(img.device), + } + + +def time_shift(mu: float, sigma: float, t: Tensor): + return math.exp(mu) / (math.exp(mu) + (1 / t - 1) ** sigma) + + +def get_lin_function( + x1: float = 256, y1: float = 0.5, x2: float = 4096, y2: float = 1.15 +) -> Callable[[float], float]: + m = (y2 - y1) / (x2 - x1) + b = y1 - m * x1 + return lambda x: m * x + b + + +def get_schedule( + num_steps: int, + image_seq_len: int, + base_shift: float = 0.5, + max_shift: float = 1.15, + shift: bool = True, +) -> list[float]: + # extra step for zero + timesteps = torch.linspace(1, 0, num_steps + 1) + + # shifting the schedule to favor high timesteps for higher signal images + if shift: + # estimate mu based on linear estimation between two points + mu = get_lin_function(y1=base_shift, y2=max_shift)(image_seq_len) + timesteps = time_shift(mu, 1.0, timesteps) + + return timesteps.tolist() + + +def denoise( + model: Flux, + # model input + img: Tensor, + img_ids: Tensor, + txt: Tensor, + txt_ids: Tensor, + vec: Tensor, + # sampling parameters + timesteps: list[float], + inverse, + info, + guidance: float = 4.0 +): + # this is ignored for schnell + inject_list = [True] * info['inject_step'] + [False] * (len(timesteps[:-1]) - info['inject_step']) + + if inverse: + timesteps = timesteps[::-1] + inject_list = inject_list[::-1] + guidance_vec = torch.full((img.shape[0],), guidance, device=img.device, dtype=img.dtype) + + step_list = [] + for i, (t_curr, t_prev) in enumerate(zip(timesteps[:-1], timesteps[1:])): + t_vec = torch.full((img.shape[0],), t_curr, dtype=img.dtype, device=img.device) + info['t'] = t_prev if inverse else t_curr + info['inverse'] = inverse + info['second_order'] = False + info['inject'] = inject_list[i] + + pred, info = model( + img=img, + img_ids=img_ids, + txt=txt, + txt_ids=txt_ids, + y=vec, + timesteps=t_vec, + guidance=guidance_vec, + info=info + ) + + img_mid = img + (t_prev - t_curr) / 2 * pred + + t_vec_mid = torch.full((img.shape[0],), (t_curr + (t_prev - t_curr) / 2), dtype=img.dtype, device=img.device) + info['second_order'] = True + pred_mid, info = model( + img=img_mid, + img_ids=img_ids, + txt=txt, + txt_ids=txt_ids, + y=vec, + timesteps=t_vec_mid, + guidance=guidance_vec, + info=info + ) + + first_order = (pred_mid - pred) / ((t_prev - t_curr) / 2) + img = img + (t_prev - t_curr) * pred + 0.5 * (t_prev - t_curr) ** 2 * first_order + + return img, info + + +def unpack(x: Tensor, height: int, width: int) -> Tensor: + return rearrange( + x, + "b (h w) (c ph pw) -> b c (h ph) (w pw)", + h=math.ceil(height / 16), + w=math.ceil(width / 16), + ph=2, + pw=2, + ) diff --git a/src/flux/util.py b/src/flux/util.py new file mode 100644 index 0000000000000000000000000000000000000000..f4c3a2d0058be85f1a74d9e4d61c40a5414ac8b2 --- /dev/null +++ b/src/flux/util.py @@ -0,0 +1,201 @@ +import os +from dataclasses import 
dataclass + +import torch +from einops import rearrange +from huggingface_hub import hf_hub_download +from imwatermark import WatermarkEncoder +from safetensors.torch import load_file as load_sft + +from flux.model import Flux, FluxParams +from flux.modules.autoencoder import AutoEncoder, AutoEncoderParams +from flux.modules.conditioner import HFEmbedder + + +@dataclass +class ModelSpec: + params: FluxParams + ae_params: AutoEncoderParams + ckpt_path: str | None + ae_path: str | None + repo_id: str | None + repo_flow: str | None + repo_ae: str | None + +configs = { + "flux-dev": ModelSpec( + repo_id="black-forest-labs/FLUX.1-dev", + repo_flow="flux1-dev.safetensors", + repo_ae="ae.safetensors", + ckpt_path=os.getenv("FLUX_DEV"), + params=FluxParams( + in_channels=64, + vec_in_dim=768, + context_in_dim=4096, + hidden_size=3072, + mlp_ratio=4.0, + num_heads=24, + depth=19, + depth_single_blocks=38, + axes_dim=[16, 56, 56], + theta=10_000, + qkv_bias=True, + guidance_embed=True, + ), + ae_path=os.getenv("AE"), + ae_params=AutoEncoderParams( + resolution=256, + in_channels=3, + ch=128, + out_ch=3, + ch_mult=[1, 2, 4, 4], + num_res_blocks=2, + z_channels=16, + scale_factor=0.3611, + shift_factor=0.1159, + ), + ), + "flux-schnell": ModelSpec( + repo_id="black-forest-labs/FLUX.1-schnell", + repo_flow="flux1-schnell.safetensors", + repo_ae="ae.safetensors", + ckpt_path=os.getenv("FLUX_SCHNELL"), + params=FluxParams( + in_channels=64, + vec_in_dim=768, + context_in_dim=4096, + hidden_size=3072, + mlp_ratio=4.0, + num_heads=24, + depth=19, + depth_single_blocks=38, + axes_dim=[16, 56, 56], + theta=10_000, + qkv_bias=True, + guidance_embed=False, + ), + ae_path=os.getenv("AE"), + ae_params=AutoEncoderParams( + resolution=256, + in_channels=3, + ch=128, + out_ch=3, + ch_mult=[1, 2, 4, 4], + num_res_blocks=2, + z_channels=16, + scale_factor=0.3611, + shift_factor=0.1159, + ), + ), +} + + +def print_load_warning(missing: list[str], unexpected: list[str]) -> None: + if len(missing) > 0 and len(unexpected) > 0: + print(f"Got {len(missing)} missing keys:\n\t" + "\n\t".join(missing)) + print("\n" + "-" * 79 + "\n") + print(f"Got {len(unexpected)} unexpected keys:\n\t" + "\n\t".join(unexpected)) + elif len(missing) > 0: + print(f"Got {len(missing)} missing keys:\n\t" + "\n\t".join(missing)) + elif len(unexpected) > 0: + print(f"Got {len(unexpected)} unexpected keys:\n\t" + "\n\t".join(unexpected)) + + +def load_flow_model(name: str, device: str | torch.device = "cuda", hf_download: bool = True): + # Loading Flux + print("Init model") + + ckpt_path = configs[name].ckpt_path + if ( + ckpt_path is None + and configs[name].repo_id is not None + and configs[name].repo_flow is not None + and hf_download + ): + ckpt_path = hf_hub_download(configs[name].repo_id, configs[name].repo_flow) + + with torch.device("meta" if ckpt_path is not None else device): + model = Flux(configs[name].params).to(torch.bfloat16) + + if ckpt_path is not None: + print("Loading checkpoint") + # load_sft doesn't support torch.device + sd = load_sft(ckpt_path, device=str(device)) + missing, unexpected = model.load_state_dict(sd, strict=False, assign=True) + print_load_warning(missing, unexpected) + return model + + +def load_t5(device: str | torch.device = "cuda", max_length: int = 512) -> HFEmbedder: + # max length 64, 128, 256 and 512 should work (if your sequence is short enough) + return HFEmbedder("google/t5-v1_1-xxl", max_length=max_length, is_clip=False, torch_dtype=torch.bfloat16).to(device) + + +def load_clip(device: str | 
torch.device = "cuda") -> HFEmbedder: + return HFEmbedder("openai/clip-vit-large-patch14", max_length=77, is_clip=True, torch_dtype=torch.bfloat16).to(device) + + +def load_ae(name: str, device: str | torch.device = "cuda", hf_download: bool = True) -> AutoEncoder: + ckpt_path = configs[name].ae_path + if ( + ckpt_path is None + and configs[name].repo_id is not None + and configs[name].repo_ae is not None + and hf_download + ): + ckpt_path = hf_hub_download(configs[name].repo_id, configs[name].repo_ae) + + # Loading the autoencoder + print("Init AE") + with torch.device("meta" if ckpt_path is not None else device): + ae = AutoEncoder(configs[name].ae_params) + + if ckpt_path is not None: + sd = load_sft(ckpt_path, device=str(device)) + missing, unexpected = ae.load_state_dict(sd, strict=False, assign=True) + print_load_warning(missing, unexpected) + return ae + + +class WatermarkEmbedder: + def __init__(self, watermark): + self.watermark = watermark + self.num_bits = len(WATERMARK_BITS) + self.encoder = WatermarkEncoder() + self.encoder.set_watermark("bits", self.watermark) + + def __call__(self, image: torch.Tensor) -> torch.Tensor: + """ + Adds a predefined watermark to the input image + + Args: + image: ([N,] B, RGB, H, W) in range [-1, 1] + + Returns: + same as input but watermarked + """ + image = 0.5 * image + 0.5 + squeeze = len(image.shape) == 4 + if squeeze: + image = image[None, ...] + n = image.shape[0] + image_np = rearrange((255 * image).detach().cpu(), "n b c h w -> (n b) h w c").numpy()[:, :, :, ::-1] + # torch (b, c, h, w) in [0, 1] -> numpy (b, h, w, c) [0, 255] + # watermarking libary expects input as cv2 BGR format + for k in range(image_np.shape[0]): + image_np[k] = self.encoder.encode(image_np[k], "dwtDct") + image = torch.from_numpy(rearrange(image_np[:, :, :, ::-1], "(n b) h w c -> n b c h w", n=n)).to( + image.device + ) + image = torch.clamp(image / 255, min=0.0, max=1.0) + if squeeze: + image = image[0] + image = 2 * image - 1 + return image + + +# A fixed 48-bit message that was chosen at random +WATERMARK_MESSAGE = 0b001010101111111010000111100111001111010100101110 +# bin(x)[2:] gives bits of x as str, use int to convert them to 0/1 +WATERMARK_BITS = [int(bit) for bit in bin(WATERMARK_MESSAGE)[2:]] +embed_watermark = WatermarkEmbedder(WATERMARK_BITS) diff --git a/src/gradio_demo.py b/src/gradio_demo.py new file mode 100644 index 0000000000000000000000000000000000000000..6db459d518608d7c432ebf7ae13783b5b0a79a53 --- /dev/null +++ b/src/gradio_demo.py @@ -0,0 +1,243 @@ +import os +import re +import time +from io import BytesIO +import uuid +from dataclasses import dataclass +from glob import iglob +import argparse +from einops import rearrange +from fire import Fire +from PIL import ExifTags, Image + + +import torch +import torch.nn.functional as F +import gradio as gr +import numpy as np +from transformers import pipeline + +from flux.sampling import denoise, get_schedule, prepare, unpack +from flux.util import (configs, embed_watermark, load_ae, load_clip, load_flow_model, load_t5) + +@dataclass +class SamplingOptions: + source_prompt: str + target_prompt: str + # prompt: str + width: int + height: int + num_steps: int + guidance: float + seed: int | None + +@torch.inference_mode() +def encode(init_image, torch_device, ae): + init_image = torch.from_numpy(init_image).permute(2, 0, 1).float() / 127.5 - 1 + init_image = init_image.unsqueeze(0) + init_image = init_image.to(torch_device) + with torch.no_grad(): + init_image = 
ae.encode(init_image).to(torch.bfloat16)
+    return init_image
+
+
+class FluxEditor:
+    def __init__(self, args):
+        self.args = args
+        self.device = torch.device(args.device)
+        self.offload = args.offload
+        self.name = args.name
+        self.is_schnell = args.name == "flux-schnell"
+
+        self.feature_path = 'feature'
+        self.output_dir = 'result'
+        self.add_sampling_metadata = True
+
+        if self.name not in configs:
+            available = ", ".join(configs.keys())
+            raise ValueError(f"Got unknown model name: {self.name}, choose from {available}")
+
+        # init all components
+        self.t5 = load_t5(self.device, max_length=256 if self.name == "flux-schnell" else 512)
+        self.clip = load_clip(self.device)
+        self.model = load_flow_model(self.name, device="cpu" if self.offload else self.device)
+        self.ae = load_ae(self.name, device="cpu" if self.offload else self.device)
+        self.t5.eval()
+        self.clip.eval()
+        self.ae.eval()
+        self.model.eval()
+
+        if self.offload:
+            self.model.cpu()
+            torch.cuda.empty_cache()
+            self.ae.encoder.to(self.device)
+
+    @torch.inference_mode()
+    def edit(self, init_image, source_prompt, target_prompt, num_steps, inject_step, guidance, seed=None):
+        torch.cuda.empty_cache()
+        # the demo UI does not wire up a seed input, so a random seed is drawn below
+        if seed == -1:
+            seed = None
+
+        shape = init_image.shape
+
+        # crop H and W down to the nearest multiple of 16
+        new_h = shape[0] if shape[0] % 16 == 0 else shape[0] - shape[0] % 16
+        new_w = shape[1] if shape[1] % 16 == 0 else shape[1] - shape[1] % 16
+
+        init_image = init_image[:new_h, :new_w, :]
+
+        # note: shape[0] is the image height and shape[1] the width; the swapped
+        # names cancel out in the unpack() call below
+        width, height = init_image.shape[0], init_image.shape[1]
+        init_image = encode(init_image, self.device, self.ae)
+
+        print(init_image.shape)
+
+        rng = torch.Generator(device="cpu")
+        opts = SamplingOptions(
+            source_prompt=source_prompt,
+            target_prompt=target_prompt,
+            width=width,
+            height=height,
+            num_steps=num_steps,
+            guidance=guidance,
+            seed=seed,
+        )
+        if opts.seed is None:
+            opts.seed = torch.Generator(device="cpu").seed()
+
+        print(f"Generating with seed {opts.seed}:\n{opts.source_prompt}")
+        t0 = time.perf_counter()
+
+        opts.seed = None
+        if self.offload:
+            self.ae = self.ae.cpu()
+            torch.cuda.empty_cache()
+            self.t5, self.clip = self.t5.to(self.device), self.clip.to(self.device)
+
+        #############inverse#######################
+        info = {}
+        info['feature'] = {}
+        info['inject_step'] = inject_step
+
+        if not os.path.exists(self.feature_path):
+            os.mkdir(self.feature_path)
+
+        with torch.no_grad():
+            inp = prepare(self.t5, self.clip, init_image, prompt=opts.source_prompt)
+            inp_target = prepare(self.t5, self.clip, init_image, prompt=opts.target_prompt)
+        timesteps = get_schedule(opts.num_steps, inp["img"].shape[1], shift=(self.name != "flux-schnell"))
+
+        # offload TEs to CPU, load model to gpu
+        if self.offload:
+            self.t5, self.clip = self.t5.cpu(), self.clip.cpu()
+            torch.cuda.empty_cache()
+            self.model = self.model.to(self.device)
+
+        # inversion: integrate the flow backwards to recover the initial noise
+        with torch.no_grad():
+            z, info = denoise(self.model, **inp, timesteps=timesteps, guidance=1, inverse=True, info=info)
+
+        inp_target["img"] = z
+
+        timesteps = get_schedule(opts.num_steps, inp_target["img"].shape[1], shift=(self.name != "flux-schnell"))
+
+        # denoise from the inverted noise, conditioned on the target prompt
+        x, _ = denoise(self.model, **inp_target, timesteps=timesteps, guidance=guidance, inverse=False, info=info)
+
+        # offload model, load autoencoder to gpu
+        if self.offload:
+            self.model.cpu()
+            torch.cuda.empty_cache()
+            self.ae.decoder.to(x.device)
+
+        # unpack latents into spatial layout before decoding to pixel space
+        x = unpack(x.float(), opts.width, opts.height)
+
+        output_name = os.path.join(self.output_dir, "img_{idx}.jpg")
+        if not 
os.path.exists(self.output_dir): + os.makedirs(self.output_dir) + idx = 0 + else: + fns = [fn for fn in iglob(output_name.format(idx="*")) if re.search(r"img_[0-9]+\.jpg$", fn)] + if len(fns) > 0: + idx = max(int(fn.split("_")[-1].split(".")[0]) for fn in fns) + 1 + else: + idx = 0 + + with torch.autocast(device_type=self.device.type, dtype=torch.bfloat16): + x = self.ae.decode(x) + + if torch.cuda.is_available(): + torch.cuda.synchronize() + t1 = time.perf_counter() + + fn = output_name.format(idx=idx) + print(f"Done in {t1 - t0:.1f}s. Saving {fn}") + # bring into PIL format and save + x = x.clamp(-1, 1) + x = embed_watermark(x.float()) + x = rearrange(x[0], "c h w -> h w c") + + img = Image.fromarray((127.5 * (x + 1.0)).cpu().byte().numpy()) + exif_data = Image.Exif() + exif_data[ExifTags.Base.Software] = "AI generated;txt2img;flux" + exif_data[ExifTags.Base.Make] = "Black Forest Labs" + exif_data[ExifTags.Base.Model] = self.name + if self.add_sampling_metadata: + exif_data[ExifTags.Base.ImageDescription] = source_prompt + img.save(fn, exif=exif_data, quality=95, subsampling=0) + + + print("End Edit") + return img + + + +def create_demo(model_name: str, device: str = "cuda" if torch.cuda.is_available() else "cpu", offload: bool = False): + editor = FluxEditor(args) + is_schnell = model_name == "flux-schnell" + + with gr.Blocks() as demo: + gr.Markdown(f"# RF-Edit Demo (FLUX for image editing)") + + with gr.Row(): + with gr.Column(): + source_prompt = gr.Textbox(label="Source Prompt", value="") + target_prompt = gr.Textbox(label="Target Prompt", value="") + init_image = gr.Image(label="Input Image", visible=True) + + + generate_btn = gr.Button("Generate") + + with gr.Column(): + with gr.Accordion("Advanced Options", open=True): + num_steps = gr.Slider(1, 30, 25, step=1, label="Number of steps") + inject_step = gr.Slider(1, 15, 5, step=1, label="Number of inject steps") + guidance = gr.Slider(1.0, 10.0, 2, step=0.1, label="Guidance", interactive=not is_schnell) + # seed = gr.Textbox(0, label="Seed (-1 for random)", visible=False) + # add_sampling_metadata = gr.Checkbox(label="Add sampling parameters to metadata?", value=False) + + output_image = gr.Image(label="Generated Image") + + generate_btn.click( + fn=editor.edit, + inputs=[init_image, source_prompt, target_prompt, num_steps, inject_step, guidance], + outputs=[output_image] + ) + + + return demo + + +if __name__ == "__main__": + import argparse + parser = argparse.ArgumentParser(description="Flux") + parser.add_argument("--name", type=str, default="flux-dev", choices=list(configs.keys()), help="Model name") + parser.add_argument("--device", type=str, default="cuda" if torch.cuda.is_available() else "cpu", help="Device to use") + parser.add_argument("--offload", action="store_true", help="Offload model to CPU when not in use") + parser.add_argument("--share", action="store_true", help="Create a public link to your demo") + + parser.add_argument("--port", type=int, default=41035) + args = parser.parse_args() + + demo = create_demo(args.name, args.device, args.offload) + demo.launch(server_name='0.0.0.0', share=args.share, server_port=args.port) diff --git a/src/run_art_batman.sh b/src/run_art_batman.sh new file mode 100644 index 0000000000000000000000000000000000000000..ecdc8233774aac9d60805192f9d3321a9c14dd1e --- /dev/null +++ b/src/run_art_batman.sh @@ -0,0 +1,8 @@ +python edit.py --source_prompt "" \ + --target_prompt "a vivid depiction of the Batman, featuring rich, dynamic colors, and a blend of realistic and abstract elements 
with dynamic splatter art." \ + --guidance 2 \ + --source_img_dir '/examples/source/art.jpg' \ + --num_steps 25 \ + --inject 5 \ + --name 'flux-dev' \ + --output_dir 'examples/edit-result/art/' diff --git a/src/run_art_mari.sh b/src/run_art_mari.sh new file mode 100644 index 0000000000000000000000000000000000000000..552803841df9353936f6157849517d0b40324482 --- /dev/null +++ b/src/run_art_mari.sh @@ -0,0 +1,8 @@ +python edit.py --source_prompt "" \ + --target_prompt "a vivid depiction of the Marilyn Monroe, featuring rich, dynamic colors, and a blend of realistic and abstract elements with dynamic splatter art." \ + --guidance 2 \ + --source_img_dir '/examples/source/art.jpg' \ + --num_steps 25 \ + --inject 3 \ + --name 'flux-dev' \ + --output_dir 'examples/edit-result/art/' diff --git a/src/run_boy.sh b/src/run_boy.sh new file mode 100644 index 0000000000000000000000000000000000000000..585c3efe37c23e67ba3ec3a42c79695d94deb46b --- /dev/null +++ b/src/run_boy.sh @@ -0,0 +1,10 @@ +python edit.py --source_prompt "A young boy is playing with a toy airplane on the grassy front lawn of a suburban house, with a blue sky and fluffy clouds above." \ + --target_prompt "A young boy is playing with a toy airplane on the grassy front lawn of a suburban house, with a small brown dog playing beside him, and a blue sky with fluffy clouds above." \ + --guidance 2 \ + --source_img_dir 'examples/source/boy.jpg' \ + --num_steps 15 --offload \ + --inject 2 \ + --name 'flux-dev' \ + --output_dir 'examples/edit-result/dog' + + diff --git a/src/run_cartoon_ein.sh b/src/run_cartoon_ein.sh new file mode 100644 index 0000000000000000000000000000000000000000..e87eac29b888684996fbc8db52bf2cc669941b9b --- /dev/null +++ b/src/run_cartoon_ein.sh @@ -0,0 +1,8 @@ +python edit.py --source_prompt "" \ + --target_prompt "a cartoon style Albert Einstein raising his left hand " \ + --guidance 2 \ + --source_img_dir 'examples/source/cartoon.jpg' \ + --num_steps 25 \ + --inject 2 \ + --name 'flux-dev' \ + --output_dir 'examples/edit-result/cartoon/' \ No newline at end of file diff --git a/src/run_cartoon_herry.sh b/src/run_cartoon_herry.sh new file mode 100644 index 0000000000000000000000000000000000000000..9a492abed563c8a66ea473d54c167fb88bfc6d3e --- /dev/null +++ b/src/run_cartoon_herry.sh @@ -0,0 +1,8 @@ +python edit.py --source_prompt "" \ + --target_prompt "a cartoon style Herry Potter raising his left hand " \ + --guidance 2 \ + --source_img_dir 'examples/source/cartoon.jpg' \ + --num_steps 25 \ + --inject 2 \ + --name 'flux-dev' \ + --output_dir 'examples/edit-result/cartoon/' \ No newline at end of file diff --git a/src/run_hiking.sh b/src/run_hiking.sh new file mode 100644 index 0000000000000000000000000000000000000000..88105e87633100ed46334632190f3df46d87baa7 --- /dev/null +++ b/src/run_hiking.sh @@ -0,0 +1,9 @@ +python edit.py --source_prompt "A woman hiking on a trail with mountains in the distance, carrying a backpack." \ + --target_prompt "A woman hiking on a trail with mountains in the distance, carrying a backpack and holding a hiking stick." 
\ + --guidance 2 \ + --source_img_dir 'examples/source/hiking.jpg' \ + --num_steps 15 \ + --inject 2 --offload \ + --name 'flux-dev' \ + --output_dir 'examples/edit-result/hiking/' + diff --git a/src/run_horse.sh b/src/run_horse.sh new file mode 100644 index 0000000000000000000000000000000000000000..2f966eddaadebb334ef0e2a5dd0c2c8a1c0b3059 --- /dev/null +++ b/src/run_horse.sh @@ -0,0 +1,9 @@ + +python edit.py --source_prompt "A young boy is riding a brown horse in a countryside field, with a large tree in the background." \ + --target_prompt "A young boy is riding a camel in a countryside field, with a large tree in the background." \ + --guidance 2 \ + --source_img_dir 'examples/source/horse.jpg' \ + --num_steps 15 \ + --inject 3 --offload \ + --name 'flux-dev' \ + --output_dir 'examples/edit-result/horse/' diff --git a/src/run_nobel_biden.sh b/src/run_nobel_biden.sh new file mode 100644 index 0000000000000000000000000000000000000000..c64514bcf603acf3c369aabfc9e11f356e887516 --- /dev/null +++ b/src/run_nobel_biden.sh @@ -0,0 +1,8 @@ +python edit.py --source_prompt "" \ + --target_prompt "A minimalistic line-drawing portrait of Joe Biden with black lines and light brown shadow" \ + --guidance 2.5 \ + --source_img_dir 'examples/source/nobel.jpg' \ + --num_steps 25 \ + --inject 2 \ + --name 'flux-dev' \ + --output_dir 'examples/edit-result/nobel/' diff --git a/src/run_nobel_trump.sh b/src/run_nobel_trump.sh new file mode 100644 index 0000000000000000000000000000000000000000..d3f49d9101969f0ad2e8475ef53ec9f18106cbf1 --- /dev/null +++ b/src/run_nobel_trump.sh @@ -0,0 +1,8 @@ +python edit.py --source_prompt "" \ + --target_prompt "A minimalistic line-drawing portrait of Donald Trump with black lines and brown shadow" \ + --guidance 2.5 \ + --source_img_dir 'examples/source/nobel.jpg' \ + --num_steps 25 \ + --inject 3 \ + --name 'flux-dev' \ + --output_dir 'examples/edit-result/nobel/'
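A brief note on the run_*.sh scripts above: they drive the same inversion-and-editing pipeline from the command line through edit.py (not shown here). Reading the flags, --source_prompt and --target_prompt describe the original and desired image, --inject appears to control how many denoising steps reuse features captured during inversion, --num_steps and --guidance mirror the options exposed in the Gradio demo, and --offload keeps idle modules on the CPU to reduce GPU memory use.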
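For reference, the loaders added in src/flux/util.py are meant to be composed into a full pipeline. The following is a minimal sketch (not part of the diff) of how they are typically wired together; the choice of "flux-dev" and the device fallback are assumptions, and the weights are fetched from Hugging Face unless the FLUX_DEV and AE environment variables point at local checkpoints.

# Minimal sketch (assumption, not part of the diff): assembling the FLUX
# components with the loaders defined in src/flux/util.py.
import torch

from flux.util import load_ae, load_clip, load_flow_model, load_t5

device = "cuda" if torch.cuda.is_available() else "cpu"

t5 = load_t5(device, max_length=512)                # T5-XXL prompt encoder
clip = load_clip(device)                            # CLIP pooled text encoder
model = load_flow_model("flux-dev", device=device)  # rectified-flow transformer
ae = load_ae("flux-dev", device=device)             # latent autoencoder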
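The embed_watermark helper defined at the bottom of src/flux/util.py takes images in [-1, 1], round-trips them through imwatermark's dwtDct encoder with the fixed 48-bit message, and returns a tensor of the same shape and range. A small usage sketch, with a random tensor standing in for a decoded image:

# Sketch: applying the invisible watermark to a batch of decoded images.
import torch

from flux.util import embed_watermark

images = torch.rand(2, 3, 512, 512) * 2 - 1  # (B, C, H, W), values in [-1, 1]
watermarked = embed_watermark(images)        # same shape, still in [-1, 1]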
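FluxEditor.edit in src/gradio_demo.py crops the uploaded image so that height and width are multiples of 16 before encoding, since the autoencoder downsamples by 8 and prepare()/unpack() pack latents into 2x2 patches. An equivalent standalone helper (the function name is ours, not part of the diff):

# Hypothetical helper mirroring the alignment step in FluxEditor.edit:
# crop H and W down to the nearest multiple of 16.
import numpy as np

def crop_to_multiple_of_16(img: np.ndarray) -> np.ndarray:
    h, w = img.shape[:2]
    return img[: h - h % 16, : w - w % 16, :]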