---
license: apache-2.0
language:
- en
library_name: diffusers
tags:
- text-to-image
- prior
- eclipse
- unclip
- kandinskyv2.2
---

# Introduction
<a href="https://colab.research.google.com/drive/1VcqzXZmilntec3AsIyzCqlstEhX4Pa1o?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

The &lambda;-ECLIPSE model is a lightweight, multi-concept personalization model: a tiny text-to-image (T2I) prior designed for the Kandinsky v2.2 diffusion image generator.

&lambda;-ECLIPSE extends the [ECLIPSE-Prior](https://huggingface.co/ECLIPSE-Community/ECLIPSE_KandinskyV22_Prior) by incorporating image-text interleaved data.

&lambda;-ECLIPSE shows that personalized T2I (P-T2I) models do not need to be trained with large amounts of compute. For instance, &lambda;-ECLIPSE is trained in a mere 74 A100 GPU hours, compared to its counterparts BLIP-Diffusion (2,304 GPU hours) and Kosmos-G (12,300 GPU hours).

- **Project Page:** [https://eclipse-t2i.github.io/Lambda-ECLIPSE/](https://eclipse-t2i.github.io/Lambda-ECLIPSE/)
- **GitHub:** [https://github.com/Maitreyapatel/lambda-eclipse-inference](https://github.com/Maitreyapatel/lambda-eclipse-inference)
- **Paper (arXiv):** [https://arxiv.org/abs/2402.05195](https://arxiv.org/abs/2402.05195)

Importantly, &lambda;-ECLIPSE works purely in the CLIP latent space without any additional information. Hence, its performance can easily be improved via test-time adaptation, increasing concept alignment while maintaining strong composition alignment.
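
A minimal, hypothetical sketch of such test-time adaptation is shown below; the released code and paper may implement it differently, and names like `adapt_image_embedding`, `predicted_emb`, and `subject_embs` are illustrative assumptions, not part of the released API. The idea is simply to pull the prior's predicted CLIP image embedding toward the CLIP image embeddings of the reference subjects:

```python
import torch
import torch.nn.functional as F

def adapt_image_embedding(predicted_emb, subject_embs, steps=20, lr=1e-2, weight=0.1):
    """Hypothetical test-time adaptation sketch (not the official procedure).

    predicted_emb: (1, d) CLIP image embedding produced by the prior.
    subject_embs:  (n, d) CLIP image embeddings of the reference subjects.
    """
    target = predicted_emb.detach()
    emb = target.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([emb], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Stay close to the original prediction (preserves composition) ...
        anchor_loss = F.mse_loss(emb, target)
        # ... while pulling toward the subject embeddings (improves concept alignment).
        concept_loss = -F.cosine_similarity(emb, subject_embs, dim=-1).mean()
        (anchor_loss + weight * concept_loss).backward()
        optimizer.step()
    return emb.detach()
```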


![Qualitative example](./overview.png)

More examples at: [Gallery](https://eclipse-t2i.github.io/Lambda-ECLIPSE/gallery.html)

## Installation
```bash
git clone https://github.com/eclipse-t2i/lambda-eclipse-inference.git
cd lambda-eclipse-inference

conda create -p ./venv python=3.9
conda activate ./venv
pip install -r requirements.txt
```

## Run Inference
<a href="https://colab.research.google.com/drive/1VcqzXZmilntec3AsIyzCqlstEhX4Pa1o?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```python
import torch
from diffusers import DiffusionPipeline
from transformers import (
    CLIPTextModelWithProjection,
    CLIPTokenizer,
)

# These modules are provided by the lambda-eclipse-inference repository cloned above.
from src.pipelines.pipeline_kandinsky_subject_prior import KandinskyPriorPipeline
from src.priors.lambda_prior_transformer import PriorTransformer

# Text encoder and tokenizer matching the CLIP backbone expected by the prior.
text_encoder = CLIPTextModelWithProjection.from_pretrained(
    "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k",
    projection_dim=1280,
    torch_dtype=torch.float32,
)
tokenizer = CLIPTokenizer.from_pretrained("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k")

# Load the Lambda-ECLIPSE prior and plug it into the Kandinsky v2.2 prior pipeline.
prior = PriorTransformer.from_pretrained("ECLIPSE-Community/Lambda-ECLIPSE-Prior-v1.0")
pipe_prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior",
    prior=prior,
    text_encoder=text_encoder,
    tokenizer=tokenizer,
).to("cuda")

# The Kandinsky v2.2 decoder turns the predicted image embeddings into pixels.
pipe = DiffusionPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder"
).to("cuda")

# Example inputs: replace the prompt, the reference image paths, and the keyword
# in the prompt that each reference image corresponds to with your own data.
raw_data = {
    "prompt": "a cat wearing a backpack, on a beach",
    "subject_images": ["./path/to/cat.png", "./path/to/backpack.png"],
    "subject_keywords": ["cat", "backpack"],
}
image_emb, negative_image_emb = pipe_prior(
    raw_data=raw_data,
).to_tuple()
# Decode the predicted embeddings into the final image(s).
image = pipe(
    image_embeds=image_emb,
    negative_image_embeds=negative_image_emb,
    num_inference_steps=50,
    guidance_scale=7.5,
).images

image[0]
```
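
The final `image[0]` expression displays the generated image inline when run in a notebook (see the Colab badge above). In a standalone script you would save it to disk instead, for example:

```python
# `pipe(...).images` returns a list of PIL images; write the first one to disk.
image[0].save("output.png")
```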

## Important Notes (and limitations):

- &lambda;-ECLIPSE is trained to support up to four unique concepts; however, this version is trained on datasets heavily biased toward single- and two-subject examples. Therefore, it may not perform as expected as the number of subjects increases.
- As this model is trained specifically for P-T2I, it may not perform well on the traditional T2I task.
- &lambda;-ECLIPSE achieves state-of-the-art composition alignment while maintaining concept alignment. However, there is still a significant gap compared to fine-tuning-based methods.