Zhiminli committed
Commit 3b2d1c3
Parent: 1281794

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -11,12 +11,12 @@ Language: **English**
 
 ## Instructions
 
-The dependencies and installation are basically the same as the [**original model**](https://huggingface.co/Tencent-Hunyuan/HunyuanDiT-v1.2).
+The dependencies and installation are basically the same as the [**base model**](https://huggingface.co/Tencent-Hunyuan/HunyuanDiT-v1.2).
 
 We provide two types of trained LoRA weights for you to test.
 
 Then download the model using the following commands:
-
+
 ```bash
 cd HunyuanDiT
 # Use the huggingface-cli tool to download the model.
@@ -26,6 +26,7 @@ huggingface-cli download Tencent-Hunyuan/HYDiT-LoRA --local-dir ./ckpts/t2i/lora
 python sample_t2i.py --prompt "青花瓷风格,一只猫在追蝴蝶" --no-enhance --load-key ema --lora-ckpt ./ckpts/t2i/lora/porcelain --infer-mode fa
 ```
 
+
 ## Training
 
 We provide three types of weights for fine-tuning LoRA, `ema`, `module` and `distill`, and you can choose according to the actual effect. By default, we use `ema` weights.
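Taken together, the two hunks above show the full download-and-test flow. Assembled below for convenience; the diff elides file lines 23-25 between the hunks, so this is a sketch of the visible commands rather than the exact file content. The Chinese prompt translates roughly to "porcelain style, a cat chasing a butterfly".

```bash
cd HunyuanDiT
# Use the huggingface-cli tool to download the model.
huggingface-cli download Tencent-Hunyuan/HYDiT-LoRA --local-dir ./ckpts/t2i/lora
# Test with the porcelain LoRA; --load-key ema loads the default EMA weights.
python sample_t2i.py --prompt "青花瓷风格,一只猫在追蝴蝶" --no-enhance --load-key ema --lora-ckpt ./ckpts/t2i/lora/porcelain --infer-mode fa
```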
@@ -36,7 +37,6 @@ If multiple resolution are used, you need to add the `--multireso` and `--reso-s
 
 If you want to train LoRA with HunYuanDiT v1.1, you could add `--use-style-cond`, `--size-cond 1024 1024` and `--beta-end 0.03`.
 
-
 ```bash
 model='DiT-g/2' # model type
 task_flag="lora_porcelain_ema_rank64" # task flag
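The hunk above cuts off right after the training block's first two variable assignments. A minimal sketch of how a launch built on them might continue; the script path and every flag not quoted in this diff are assumptions, and the `--reso-step` value in particular is a guess, since the hunk header truncates that flag name:

```bash
model='DiT-g/2'                        # model type (from the diff)
task_flag="lora_porcelain_ema_rank64"  # task flag (from the diff)

# Hypothetical launch line: the real script name and remaining flags are not
# visible in this diff. --multireso enables multi-resolution training per the
# hunk header; the step value is a placeholder.
python hydit/train.py \
  --model ${model} \
  --task-flag ${task_flag} \
  --multireso --reso-step 64
```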
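Per the v1.1 note in the same hunk, the corresponding launch would add the three flags the README names. Again a sketch only; the flag placement and the hypothetical script path carry over from the block above:

```bash
# HunYuanDiT v1.1 variant: append the three flags quoted in the README.
python hydit/train.py \
  --model ${model} \
  --task-flag ${task_flag} \
  --use-style-cond --size-cond 1024 1024 --beta-end 0.03
```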