---
license: other
license_name: kohaku-license-1.0
datasets:
- laion/conceptual-captions-12m-webdataset
- CaptionEmporium/coyo-hd-11m-llavanext
- KBlueLeaf/danbooru2023-metadata-database
- graph-based-captions/GBC10M
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# TIPO: Text to Image with text presampling for Prompt Optimization

A 500M-parameter LLaMA-architecture model trained for TIPO.<br>
Tech Report: https://arxiv.org/abs/2411.08127

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630593e2fca1d8d92b81d2a1/fc9ovmARapQmgq9DZ7ApJ.png)

## Introduction

In this project, we introduce "TIPO" (**T**ext to **I**mage with text presampling for **P**rompt **O**ptimization), a framework designed to significantly enhance the quality and usability of Text-to-Image (T2I) generative models. TIPO uses Large Language Models (LLMs) to perform "text presampling" within the inference pipeline of text-to-image generation. By refining and extending user input prompts, TIPO enables generative models to produce superior results with minimal user effort, making T2I systems more accessible and effective for a wider range of users.

## Usage
Use the updated version of the DTG extension (renamed to z-tipo-extension). The current version of z-tipo-extension supports stable-diffusion-webui, stable-diffusion-webui-forge, and ComfyUI. SD-Next has not been tested.
https://github.com/KohakuBlueleaf/z-tipo-extension
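
If you prefer to try the model directly (without a WebUI), it can be loaded with the `transformers` library. The snippet below is only a minimal sketch: the extension applies TIPO's specific prompt template, so the plain tag string and sampling parameters here are illustrative assumptions, not the official pipeline.

```python
# Minimal sketch: querying TIPO-500M directly with transformers.
# NOTE: z-tipo-extension applies TIPO's actual prompt template;
# the raw tag string below is an illustrative assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KBlueLeaf/TIPO-500M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "1girl, scenery"  # short user input to be refined/extended
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,  # room for the expanded prompt
    do_sample=True,      # sampling, since prompts are presampled
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For real use, the extension above is recommended, since it handles prompt formatting and post-processing for you.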

## Model arch and Training

This model uses the LLaMA architecture with 500M parameters; the training data is a combined version of Danbooru2023, GBC10M, and Coyo-HD-11M. <br>
The total number of tokens seen is around 30B. <br>
For more information, please refer to the tech report and the table below.

|                   | TIPO-200M                                                                      | TIPO-200M-ft                       | TIPO-500M                                                                      |
| ----------------- | ------------------------------------------------------------------------------ | ---------------------------------- | ------------------------------------------------------------------------------ |
| Arch              | LLaMA                                                                          | LLaMA                              | LLaMA                                                                          |
| Max ctx length    | 1024                                                                           | 1024                               | 1024                                                                           |
| Batch Size        | 2048                                                                           | 2048                               | 3584                                                                           |
| Training dataset  | Danbooru + GBC10M, 5 epochs<br />Danbooru + GBC10M + Coyo11M, 3 epochs | Danbooru (Pixtral) + Coyo11M, 2 epochs | Danbooru + GBC10M + Coyo11M, 5 epochs |
| Real Token Seen*  | 40B tokens                                                                     | 50B (10B more than TIPO-200M)     | 30B tokens                                                                     |
| Training Hardware | RTX 3090 x 4                                                                   | RTX 3090 x 4                       | H100 x 8                                                                       |
| Training Time     | 420 hours`                                                                     | 120 hours`                         | 100 hours`                                                                     |
| Huggingface       | [KBlueLeaf/TIPO-200M · Hugging Face](https://huggingface.co/KBlueLeaf/TIPO-200M) | [KBlueLeaf/TIPO-200M-ft · Hugging Face](https://huggingface.co/KBlueLeaf/TIPO-200M-ft)  | You Are HERE |

*: We count only non-padding tokens in "tokens seen", since the lengths of the training data vary widely. <br>
`: Since the training data are fairly short, it takes more time to reach the same number of tokens seen than in general LLM pretraining. <br>
For reference, with a max context length of 4096 and nearly all samples reaching that length, a 200M model would need only about 2 days on RTX 3090 x 4 to see 10B tokens.
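
To make the "non-padding token" count concrete, here is a small illustrative sketch (not the project's actual training code), assuming batches carry a standard `attention_mask` as produced by Hugging Face tokenizers:

```python
# Illustrative sketch: count only non-padding tokens in a batch.
# Assumes batch["attention_mask"] is 1 for real tokens, 0 for padding.
import torch

def non_padding_tokens(batch: dict) -> int:
    """Count real (non-padding) tokens via the attention mask."""
    return int(batch["attention_mask"].sum().item())

# Example: a batch of 2 sequences padded to length 5.
batch = {"attention_mask": torch.tensor([[1, 1, 1, 0, 0],
                                         [1, 1, 1, 1, 1]])}
print(non_padding_tokens(batch))  # 8 tokens seen, not 10
```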

### Evaluation
**Evaluations were done on the TIPO-200M model.** <br>
We compared TIPO against other methods on several tests and metrics:

#### Scenery tag test

In this test we use a single "scenery" tag as input (along with certain fixed meta tags) <br>
to test whether each prompt-generation method can reach the desired distribution of outputs while maintaining image quality.

| Scenery Tag Test | Original | GPT4o-mini | Prompt DB | Promptist | TIPO (ours) |
| ---- | ---- | ---- | ---- | ---- | ---- |
|   FDD ↓         |   0.3558   |   0.5414   |   0.3247   |   *0.2350*   |   **0.2282**   |
|   Aesthetic ↑   |   5.0569   |   **6.3676**   |   6.1609   |   5.9468   |   *6.2571*   |
|   AI Corrupt ↑  |   0.4257   |   *0.7490*   |   0.5024   |   0.5669   |   **0.9195**   |

#### Short/Truncated Long test

In this test we use short captions or manually truncated captions from GBC10M and CoyoHD11M. <br>
This test examines each prompt-generation method's ability to handle nearly complete prompts.

| Short | Original | GPT4o-mini | Prompt DB | Promptist | TIPO (ours) |
| ---- | ---- | ---- | ---- | ---- | ---- |
| FDD ↓ | 0.0957 | 0.1668 | *0.0980* | 0.1783 | 0.1168 |
| Aesthetic ↑ | 5.8370 | **6.0589** | 5.8213 | 5.7963 | *5.8531* |
| AI Corrupt ↑ | 0.7113 | 0.6985 | 0.7064 | 0.6314 | **0.7131** |

| Truncated Long | Original | GPT4o-mini | Prompt DB | Promptist | TIPO (ours) |
| ---- | ---- | ---- | ---- | ---- | ---- |
| FDD ↓ | 0.0955 | 0.1683 | *0.1247* | 0.2096 | 0.1210 |
| Aesthetic ↑ | 5.7497 | **6.0168** | 5.8191 | 5.7759 | *5.8364* |
| AI Corrupt ↑ | 0.6868 | 0.6712 | 0.6741 | 0.5925 | **0.7130** |

## LICENSE
This model is released under the [Kohaku License 1.0](https://kblueleaf.net/documents/kohaku-license/?[Your%20Organization/Name]=KohakuBlueLeaf&[Year]=2024).<br>
You can check the URL provided above or the LICENSE file in this repo.

### Citation
```bibtex
@misc{yeh2024tipotextimagetext,
      title={TIPO: Text to Image with Text Presampling for Prompt Optimization}, 
      author={Shih-Ying Yeh and Sang-Hyun Park and Giyeong Oh and Min Song and Youngjae Yu},
      year={2024},
      eprint={2411.08127},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.08127}, 
}
```