---
license: apache-2.0
datasets:
- CaptionEmporium/coyo-hd-11m-llavanext
- CortexLM/midjourney-v6
language:
- en
base_model:
- black-forest-labs/FLUX.1-dev
new_version: XLabs-AI/flux-ip-adapter
pipeline_tag: image-to-image
library_name: diffusers
---

![Banner Picture 1](assets/banner-dark.png?raw=true)
[<img src="https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/light/join-our-discord-rev1.png?raw=true">](https://discord.gg/FHY2guThfy)
![Mona Anime Workflow 1](assets/mona_workflow.jpg?raw=true)

This repository provides an IP-Adapter checkpoint for the
[FLUX.1-dev model](https://huggingface.co/black-forest-labs/FLUX.1-dev) by Black Forest Labs.

See our [GitHub repository](https://github.com/XLabs-AI/x-flux-comfyui) for ComfyUI workflows.

# Models
The IP-Adapter was trained at a resolution of 512x512 for 150k steps and at 1024x1024 for 350k steps, maintaining the aspect ratio.
This release is the **v2 version**, which can be used directly in ComfyUI!

Please see our [ComfyUI custom nodes installation guide](https://github.com/XLabs-AI/x-flux-comfyui).

# Examples

See examples of our model's results below.
Some generation results, together with their input images, are also provided under "Files and versions".

# Inference

To try our models, you have two options:
1. Use `main.py` from our [official repo](https://github.com/XLabs-AI/x-flux).
2. Use our custom nodes for ComfyUI and test them with the provided workflows (see the `workflows` folder).
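A third route is loading the checkpoint through Diffusers. The sketch below assumes a recent Diffusers version with FLUX IP-Adapter support; the `weight_name` and the reference-image URL are assumptions, so check "Files and versions" for the actual checkpoint filename:

```python
import torch
from diffusers import FluxPipeline
from diffusers.utils import load_image

# Load the base FLUX.1-dev pipeline (gated model; requires accepting its license)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the IP-Adapter; weight_name is an assumption -- check
# "Files and versions" in this repo for the exact filename.
pipe.load_ip_adapter(
    "XLabs-AI/flux-ip-adapter",
    weight_name="ip_adapter.safetensors",
    image_encoder_pretrained_model_name_or_path="openai/clip-vit-large-patch14",
)
pipe.set_ip_adapter_scale(1.0)  # the "IP strength"; lower it if results look off

# Reference image whose content/style should guide generation (placeholder URL)
ip_image = load_image("https://example.com/reference.png")

image = pipe(
    prompt="a portrait in the style of the reference image",
    ip_adapter_image=ip_image,
    num_inference_steps=25,
    guidance_scale=3.5,
).images[0]
image.save("result.png")
```

Note that this requires a CUDA GPU with enough memory for FLUX.1-dev and access to the gated base model on Hugging Face.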

## Instructions for ComfyUI
1. Go to `ComfyUI/custom_nodes`.
2. Clone [x-flux-comfyui](https://github.com/XLabs-AI/x-flux-comfyui.git) so that its files live under `ComfyUI/custom_nodes/x-flux-comfyui/`.
3. Go to `ComfyUI/custom_nodes/x-flux-comfyui/` and run `python setup.py`.
4. To update later, run `git pull` in that folder or reinstall it.
5. Download the CLIP-L `model.safetensors` from [OpenAI ViT CLIP large](https://huggingface.co/openai/clip-vit-large-patch14) and put it in `ComfyUI/models/clip_vision/`.
6. Download our IP-Adapter from [Hugging Face](https://huggingface.co/XLabs-AI/flux-ip-adapter/tree/main) and put it in `ComfyUI/models/xlabs/ipadapters/`.
7. Use the `Flux Load IPAdapter` and `Apply Flux IPAdapter` nodes, choose the right CLIP model, and enjoy your generations.
8. You can find example workflows in the `workflows` folder of this repo.
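The installation steps above can be sketched as shell commands (paths assume a standard ComfyUI layout; the model files still need to be downloaded manually from the linked pages):

```shell
# Steps 1-3: install the custom nodes
cd ComfyUI/custom_nodes
git clone https://github.com/XLabs-AI/x-flux-comfyui.git
cd x-flux-comfyui
python setup.py

# Step 4: update later with
git pull

# Steps 5-6: place the downloaded model files
#   CLIP-L model.safetensors   -> ComfyUI/models/clip_vision/
#   flux-ip-adapter checkpoint -> ComfyUI/models/xlabs/ipadapters/
```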

If you get poor results, try adjusting the IP-Adapter strength.
### Limitations
The IP Adapter is currently in beta.
We cannot guarantee a good result on the first try; it may take several attempts to get the output you want.
<img src="assets/ip_adapter_2.jpg?raw=true" alt="example_2" style="width:1024px;"/>
<img src="assets/ip_adapter_3.jpg?raw=true" alt="example_3" style="width:1024px;"/>
<img src="assets/ip_adapter_1.jpg?raw=true" alt="example_1" style="width:1024px;"/>
<img src="assets/ip_adapter_4.jpg?raw=true" alt="example_4" style="width:1024px;"/>
<img src="assets/ip_adapter_5.jpg?raw=true" alt="example_5" style="width:1024px;"/>
<img src="assets/ip_adapter_6.jpg?raw=true" alt="example_6" style="width:1024px;"/>


## License
Our weights fall under the [FLUX.1 \[dev\] Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).