PommesPeter committed
Commit c33ccf0 · 2 Parent(s): e548796 5441dab

Merge branch 'main' of https://huggingface.co/Alpha-VLLM/Lumina-Next-SFT-diffusers into main

Files changed (1): README.md (+21 -147)
README.md CHANGED
@@ -27,7 +27,7 @@ Our generative model has `Next-DiT` as the backbone, the text encoder is the `Ge

 ## 📰 News

-- [2024-06-21] 🎉🎉🎉 Diffusers can now load the `Lumina-Next-SFT` model. https://huggingface.co/Alpha-VLLM/Lumina-Next-SFT-diffusers
+- [2024-06-23] 🎉🎉🎉 Diffusers can now load the `Lumina-Next-SFT` model. https://huggingface.co/Alpha-VLLM/Lumina-Next-SFT-diffusers

 - [2024-06-08] 🎉🎉🎉 We have released the `Lumina-Next-SFT` model.

@@ -43,180 +43,54 @@ More checkpoints of our model will be released soon~

 | Resolution | Next-DiT Parameter | Text Encoder | Prediction | Download URL |
 | ---------- | ------------------ | ------------ | ---------- | ------------ |
-| 1024 | 2B | [Gemma-2B](https://huggingface.co/google/gemma-2b) | Rectified Flow | [hugging face](https://huggingface.co/Alpha-VLLM/Lumina-Next-SFT) |
+| 1024 | 2B | [Gemma-2B](https://huggingface.co/google/gemma-2b) | Rectified Flow | [hugging face](https://huggingface.co/Alpha-VLLM/Lumina-Next-SFT-diffusers) |

 ## Installation

-Before installation, ensure that you have a working ``nvcc``
-
-```bash
-# The command should work and show a version number (12.1 in our case).
-nvcc --version
-```
-
-On some outdated distros (e.g., CentOS 7), you may also want to check that a late enough version of
-``gcc`` is available
-
-```bash
-# The command should work and show a version of at least 6.0.
-# If not, consult distro-specific tutorials to obtain a newer version or build manually.
-gcc --version
-```
-
-Download the Lumina-T2X repo from GitHub:
-
-```bash
-git clone https://github.com/Alpha-VLLM/Lumina-T2X
-```
-
 ### 1. Create a conda environment and install PyTorch

 Note: You may want to adjust the CUDA version [according to your driver version](https://docs.nvidia.com/deploy/cuda-compatibility/#default-to-minor-version).

 ```bash
 conda create -n Lumina_T2X -y
 conda activate Lumina_T2X
 conda install python=3.11 pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia -y
 ```

 ### 2. Install dependencies

 ```bash
-pip install diffusers fairscale accelerate tensorboard transformers gradio torchdiffeq click
+pip install diffusers huggingface_hub
 ```
-
-or you can use
-
-```bash
-cd lumina_next_t2i
-pip install -r requirements.txt
-```

 ### 3. Install ``flash-attn``

 ```bash
 pip install flash-attn --no-build-isolation
 ```

-### 4. Install [nvidia apex](https://github.com/nvidia/apex) (optional)
-
->[!Warning]
-> While Apex can improve efficiency, it is *not* a must to make Lumina-T2X work.
->
-> Note that Lumina-T2X works smoothly with either:
-> + Apex not installed at all; OR
-> + Apex successfully installed with CUDA and C++ extensions.
->
-> However, it will fail when:
-> + A Python-only build of Apex is installed.
->
-> If the error `No module named 'fused_layer_norm_cuda'` appears, it typically means you are using a Python-only build of Apex. To resolve this, please run `pip uninstall apex`, and Lumina-T2X should then function correctly.
-
-You can clone the repo and install following the official guidelines (note that we expect a full
-build, i.e., with CUDA and C++ extensions)
-
-```bash
-pip install ninja
-git clone https://github.com/NVIDIA/apex
-cd apex
-# if pip >= 23.1 (ref: https://pip.pypa.io/en/stable/news/#v23-1), which supports multiple `--config-settings` with the same key...
-pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./
-# otherwise
-pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" ./
-```
-
 ## Inference

-To ensure that our generative model is ready to use right out of the box, we provide a user-friendly CLI program and a locally deployable Web Demo site.
-
-### CLI
-
-1. Install Lumina-Next-T2I
-
-```bash
-pip install -e .
-```
-
-2. Prepare the pre-trained model
+1. Prepare the pre-trained model

 ⭐⭐ (Recommended) you can use `huggingface-cli` to download our model:

 ```bash
-huggingface-cli download --resume-download Alpha-VLLM/Lumina-Next-SFT --local-dir /path/to/ckpt
+huggingface-cli download --resume-download Alpha-VLLM/Lumina-Next-SFT-diffusers --local-dir /path/to/ckpt
 ```

-or using git to clone the model you want to use:
-
-```bash
-git clone https://huggingface.co/Alpha-VLLM/Lumina-Next-T2I
-```
-
-1. Setting your personal inference configuration
-
-Update your personal inference settings to generate different styles of images; see `config/infer/config.yaml` for detailed settings. Detailed config structure:
-
-> `/path/to/ckpt` should be a directory containing `consolidated*.pth` and `model_args.pth`
-
-```yaml
-settings:
-
-  model:
-    ckpt: ""
-    ckpt_lm: ""
-    token: ""
-
-  transport:
-    path_type: "Linear" # option: ["Linear", "GVP", "VP"]
-    prediction: "velocity" # option: ["velocity", "score", "noise"]
-    loss_weight: "velocity" # option: [None, "velocity", "likelihood"]
-    sample_eps: 0.1
-    train_eps: 0.2
-
-  ode:
-    atol: 1e-6 # Absolute tolerance
-    rtol: 1e-3 # Relative tolerance
-    reverse: false # option: true or false
-    likelihood: false # option: true or false
-
-  infer:
-    resolution: "1024x1024" # option: ["1024x1024", "512x2048", "2048x512", "(Extrapolation) 1664x1664", "(Extrapolation) 1024x2048", "(Extrapolation) 2048x1024"]
-    num_sampling_steps: 60 # range: 1-1000
-    cfg_scale: 4. # range: 1-20
-    solver: "euler" # option: ["euler", "dopri5", "dopri8"]
-    t_shift: 4 # range: 1-20 (int only)
-    scaling_method: "Time-aware" # option: ["Time-aware", "None"]
-    scale_watershed: 0.3 # range: 0.0-1.0
-    proportional_attn: true # option: true or false
-    seed: 0 # range: any number
-```
-
-1. Run with CLI
-
-Inference command:
-
-```bash
-lumina_next infer -c <config_path> <caption_here> <output_dir>
-```
-
-e.g. Demo command:
-
-```bash
-cd lumina_next_t2i
-lumina_next infer -c "config/infer/settings.yaml" "a snowman of ..." "./outputs"
-```
-
-### Web Demo
-
-To host a local gradio demo for interactive inference, run the following command:
-
-```bash
-# `/path/to/ckpt` should be a directory containing `consolidated*.pth` and `model_args.pth`
-
-# default
-python -u demo.py --ckpt "/path/to/ckpt"
-
-# the demo uses bf16 precision by default. to switch to fp32:
-python -u demo.py --ckpt "/path/to/ckpt" --precision fp32
-
-# use ema model
-python -u demo.py --ckpt "/path/to/ckpt" --ema
-```
+2. Run with demo code:
+
+```python
+from diffusers import LuminaText2ImgPipeline
+import torch
+
+pipeline = LuminaText2ImgPipeline.from_pretrained("/path/to/ckpt/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16).to("cuda")
+
+# or you can download the model using code directly
+# pipeline = LuminaText2ImgPipeline.from_pretrained("Alpha-VLLM/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16).to("cuda")
+
+image = pipeline(prompt="Upper body of a young woman in a Victorian-era outfit with brass goggles and leather straps. "
+                        "Background shows an industrial revolution cityscape with smoky skies and tall, metal structures").images[0]
+```
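The `huggingface-cli` call in step 1 of the new README can also be driven from Python. A minimal sketch, assuming `huggingface_hub`'s `snapshot_download` helper (the local directory below is illustrative):

```python
from huggingface_hub import snapshot_download

# Python counterpart of:
#   huggingface-cli download --resume-download Alpha-VLLM/Lumina-Next-SFT-diffusers --local-dir /path/to/ckpt
ckpt_dir = snapshot_download(
    repo_id="Alpha-VLLM/Lumina-Next-SFT-diffusers",
    local_dir="/path/to/ckpt/Lumina-Next-SFT-diffusers",  # illustrative target directory
)

# The returned path is what you would hand to LuminaText2ImgPipeline.from_pretrained(...)
print(ckpt_dir)
```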
 
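The demo snippet in step 2 returns a PIL image but never persists it. A minimal follow-up sketch, assuming the pipeline accepts the usual Diffusers sampling arguments (`num_inference_steps` and `guidance_scale` below are assumptions, and the output filename is illustrative):

```python
import torch
from diffusers import LuminaText2ImgPipeline

pipeline = LuminaText2ImgPipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16
).to("cuda")

image = pipeline(
    prompt="Upper body of a young woman in a Victorian-era outfit with brass goggles and leather straps. "
           "Background shows an industrial revolution cityscape with smoky skies and tall, metal structures",
    num_inference_steps=30,  # assumed sampler knob, standard across Diffusers pipelines
    guidance_scale=4.0,      # assumed CFG knob; 4.0 mirrors the old config's cfg_scale
).images[0]

image.save("lumina_next_sft_sample.png")  # persist the returned PIL.Image
```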