## VITON-HD — Official PyTorch Implementation

**\*\*\*\*\* New follow-up research by our team is available at https://github.com/rlawjdghek/StableVITON \*\*\*\*\***<br>

![Teaser image](./assets/teaser.png)

> **VITON-HD: High-Resolution Virtual Try-On via Misalignment-Aware Normalization**<br>
> [Seunghwan Choi](https://github.com/shadow2496)\*<sup>1</sup>, [Sunghyun Park](https://psh01087.github.io)\*<sup>1</sup>, [Minsoo Lee](https://github.com/Minsoo2022)\*<sup>1</sup>, [Jaegul Choo](https://sites.google.com/site/jaegulchoo)<sup>1</sup><br>
> <sup>1</sup>KAIST<br>
> In CVPR 2021. (* indicates equal contribution)

> Paper: https://arxiv.org/abs/2103.16874<br>
> Project page: https://psh01087.github.io/VITON-HD

> **Abstract:** *The task of image-based virtual try-on aims to transfer a target clothing item onto the corresponding region of a person, which is commonly tackled by fitting the item to the desired body part and fusing the warped item with the person. While an increasing number of studies have been conducted, the resolution of synthesized images is still limited to low resolutions (e.g., 256x192), which is a critical limitation for satisfying online consumers. We argue that this limitation stems from several challenges: as the resolution increases, artifacts in the misaligned areas between the warped clothes and the desired clothing regions become noticeable in the final results, and the architectures used in existing methods perform poorly at generating high-quality body parts and maintaining the texture sharpness of the clothes. To address these challenges, we propose a novel virtual try-on method called VITON-HD that successfully synthesizes 1024x768 virtual try-on images. Specifically, we first prepare a segmentation map to guide our virtual try-on synthesis, and then roughly fit the target clothing item to a given person's body. Next, we propose ALIgnment-Aware Segment (ALIAS) normalization and the ALIAS generator to handle the misaligned areas and preserve the details of 1024x768 inputs. Through rigorous comparison with existing methods, we demonstrate that VITON-HD substantially surpasses the baselines in synthesized image quality, both qualitatively and quantitatively.*

## Notice

A follow-up ECCV 2022 paper by our team is available at https://github.com/sangyun884/HR-VITON.
Preprocessing code for the person-agnostic representation is also provided there.

## Installation

Clone this repository:

```
git clone https://github.com/shadow2496/VITON-HD.git
cd ./VITON-HD/
```

Install PyTorch and the other dependencies:

```
conda create -y -n [ENV] python=3.8
conda activate [ENV]
conda install -y pytorch=[>=1.6.0] torchvision cudatoolkit=[>=9.2] -c pytorch
pip install opencv-python torchgeometry
```
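
For example, with a hypothetical environment name `viton-hd` and one version combination that satisfies the constraints above (PyTorch 1.6.0 with CUDA 10.2; check the PyTorch site for builds matching your driver), the setup might look like:

```
conda create -y -n viton-hd python=3.8
conda activate viton-hd
conda install -y pytorch=1.6.0 torchvision cudatoolkit=10.2 -c pytorch
pip install opencv-python torchgeometry

# Sanity check: the printed version should be >= 1.6.0 and CUDA should be available.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```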

## Dataset

We collected a 1024 x 768 virtual try-on dataset for **our research purpose only**.
You can download a preprocessed dataset from [VITON-HD DropBox](https://www.dropbox.com/s/10bfat0kg4si1bu/zalando-hd-resized.zip?dl=0).
The frontal-view woman and top clothing image pairs are split into a training set of 11,647 pairs and a test set of 2,032 pairs.
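
As a quick sanity check after unpacking the archive into `./datasets/`, you can count the pairs. This sketch assumes the split is described by one person/clothing pair per line in `train_pairs.txt` (an assumed file name) and `test_pairs.txt` (referenced in the Testing section below):

```
# Expected output: 11647 and 2032 lines, matching the train/test split above.
wc -l ./datasets/train_pairs.txt ./datasets/test_pairs.txt
```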

## Pre-trained networks

We provide pre-trained networks and sample images from the test dataset. Please download the `*.pkl` checkpoints and test images from the [VITON-HD Google Drive folder](https://drive.google.com/drive/folders/0B8kXrnobEVh9fnJHX3lCZzEtd20yUVAtTk5HdWk2OVV0RGl6YXc0NWhMOTlvb1FKX3Z1OUk?resourcekey=0-OIXHrDwCX8ChjypUbJo4fQ&usp=sharing) and unzip the `*.zip` files. `test.py` assumes that the downloaded files are placed in the `./checkpoints/` and `./datasets/` directories.
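
Concretely, after unzipping, the repository is expected to look roughly like this (only the two directory names come from `test.py`'s assumptions above; the files inside depend on the downloaded archives):

```
VITON-HD/
├── checkpoints/   # pre-trained *.pkl networks
└── datasets/      # test images and test_pairs.txt
```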

## Testing

To generate virtual try-on images, run:

```
CUDA_VISIBLE_DEVICES=[GPU_ID] python test.py --name [NAME]
```

The results are saved in the `./results/` directory. You can change the location by specifying the `--save_dir` argument. To synthesize virtual try-on images with different pairs of a person and a clothing item, edit `./datasets/test_pairs.txt` and run the same command, as in the example below.
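
For instance, a full invocation with an explicit GPU and output directory might look like the following; the run name `demo`, GPU id `0`, and the output path are illustrative, and only `--name` and `--save_dir` come from the usage notes above:

```
# Writes the synthesized try-on images to ./my_results/ instead of ./results/.
CUDA_VISIBLE_DEVICES=0 python test.py --name demo --save_dir ./my_results/
```

Each edit to `./datasets/test_pairs.txt` selects which person image is paired with which clothing image; the valid file names are those shipped with the downloaded test set.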

## License

All material is made available under [Creative Commons BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/). You can **use, redistribute, and adapt** the material for **non-commercial purposes**, as long as you give appropriate credit by **citing our paper** and **indicating any changes** you have made.

## Citation

If you find this work useful for your research, please cite our paper:

```
@inproceedings{choi2021viton,
  title={VITON-HD: High-Resolution Virtual Try-On via Misalignment-Aware Normalization},
  author={Choi, Seunghwan and Park, Sunghyun and Lee, Minsoo and Choo, Jaegul},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}
```