Yuliang committed on
Commit
df6cc56
1 Parent(s): f4e5ac5

gradio init

.gitignore CHANGED
@@ -17,4 +17,5 @@ dist
17
  *egg-info
18
  *.so
19
  run.sh
20
- *.log
 
 
17
  *egg-info
18
  *.so
19
  run.sh
20
+ *.log
21
+ gradio_cached_examples/
README.md CHANGED
@@ -1,211 +1,10 @@
1
- <!-- PROJECT LOGO -->
2
-
3
- <p align="center">
4
-
5
- <h1 align="center">ECON: Explicit Clothed humans Optimized via Normal integration</h1>
6
- <p align="center">
7
- <a href="http://xiuyuliang.cn/"><strong>Yuliang Xiu</strong></a>
8
- ·
9
- <a href="https://ps.is.tuebingen.mpg.de/person/jyang"><strong>Jinlong Yang</strong></a>
10
- ·
11
- <a href="https://hoshino042.github.io/homepage/"><strong>Xu Cao</strong></a>
12
- ·
13
- <a href="https://ps.is.mpg.de/~dtzionas"><strong>Dimitrios Tzionas</strong></a>
14
- ·
15
- <a href="https://ps.is.tuebingen.mpg.de/person/black"><strong>Michael J. Black</strong></a>
16
- </p>
17
- <h2 align="center">CVPR 2023 (Highlight)</h2>
18
- <div align="center">
19
- <img src="./assets/teaser.gif" alt="Logo" width="100%">
20
- </div>
21
-
22
- <p align="center">
23
- <br>
24
- <a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>
25
- <a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a>
26
- <a href="https://cupy.dev/"><img alt="cupy" src="https://img.shields.io/badge/-Cupy-46C02B?logo=numpy&logoColor=white"></a>
27
- <a href="https://twitter.com/yuliangxiu"><img alt='Twitter' src="https://img.shields.io/twitter/follow/yuliangxiu?label=%40yuliangxiu"></a>
28
- <a href="https://discord.gg/Vqa7KBGRyk"><img alt="discord invitation link" src="https://dcbadge.vercel.app/api/server/Vqa7KBGRyk?style=flat"></a>
29
- <br></br>
30
- <a href='https://colab.research.google.com/drive/1YRgwoRCZIrSB2e7auEWFyG10Xzjbrbno?usp=sharing'><img src='https://colab.research.google.com/assets/colab-badge.svg' alt='Google Colab'></a>
31
- <a href='https://github.com/YuliangXiu/ECON/blob/master/docs/installation-docker.md'><img src='https://img.shields.io/badge/Docker-9cf.svg?logo=Docker' alt='Docker'></a>
32
- <a href='https://carlosedubarreto.gumroad.com/l/CEB_ECON'><img src='https://img.shields.io/badge/Blender-F6DDCC.svg?logo=Blender' alt='Blender'></a>
33
- <br></br>
34
- <a href="https://arxiv.org/abs/2212.07422">
35
- <img src='https://img.shields.io/badge/Paper-PDF-green?style=for-the-badge&logo=adobeacrobatreader&logoWidth=20&logoColor=white&labelColor=66cc00&color=94DD15' alt='Paper PDF'>
36
- </a>
37
- <a href='https://xiuyuliang.cn/econ/'>
38
- <img src='https://img.shields.io/badge/ECON-Page-orange?style=for-the-badge&logo=Google%20chrome&logoColor=white&labelColor=D35400' alt='Project Page'></a>
39
- <a href="https://youtu.be/j5hw4tsWpoY"><img alt="youtube views" title="Subscribe to my YouTube channel" src="https://img.shields.io/youtube/views/j5hw4tsWpoY?logo=youtube&labelColor=ce4630&style=for-the-badge"/></a>
40
- </p>
41
- </p>
42
-
43
- <br/>
44
-
45
- ECON is designed for "Human digitization from a color image", which combines the best properties of implicit and explicit representations, to infer high-fidelity 3D clothed humans from in-the-wild images, even with **loose clothing** or in **challenging poses**. ECON also supports **multi-person reconstruction** and **SMPL-X based animation**.
46
- <br/>
47
- <br/>
48
-
49
- ## News :triangular_flag_on_post:
50
-
51
- - [2023/02/27] ECON got accepted by CVPR 2023 as Highlight (top 10%)!
52
- - [2023/01/12] [Carlos Barreto](https://twitter.com/carlosedubarret/status/1613252471035494403) creates a Blender Addon ([Download](https://carlosedubarreto.gumroad.com/l/CEB_ECON), [Tutorial](https://youtu.be/sbWZbTf6ZYk)).
53
- - [2023/01/08] [Teddy Huang](https://github.com/Teddy12155555) creates [install-with-docker](docs/installation-docker.md) for ECON .
54
- - [2023/01/06] [Justin John](https://github.com/justinjohn0306) and [Carlos Barreto](https://github.com/carlosedubarreto) creates [install-on-windows](docs/installation-windows.md) for ECON .
55
- - [2022/12/22] <a href='https://colab.research.google.com/drive/1YRgwoRCZIrSB2e7auEWFyG10Xzjbrbno?usp=sharing' style='padding-left: 0.5rem;'><img src='https://colab.research.google.com/assets/colab-badge.svg' alt='Google Colab'></a> is now available, created by [Aron Arzoomand](https://github.com/AroArz).
56
- - [2022/12/15] Both <a href="#demo">demo</a> and <a href="https://arxiv.org/abs/2212.07422">arXiv</a> are available.
57
-
58
- ## TODO
59
-
60
- - [ ] Blender add-on for FBX export
61
- - [ ] Full RGB texture generation
62
-
63
- ## Key idea: d-BiNI
64
-
65
- d-BiNI jointly optimizes front-back 2.5D surfaces such that: (1) high-frequency surface details agree with normal maps, (2) low-frequency surface variations, including discontinuities, align with SMPL-X surfaces, and (3) front-back 2.5D surface silhouettes are coherent with each other.
66
-
67
- |Front-view|Back-view|Side-view|
68
- |:--:|:--:|:---:|
69
- |![](assets/front-45.gif)|![](assets/back-45.gif)|![](assets/double-90.gif)||
70
-
71
- <details><summary>Please consider cite <strong>BiNI</strong> if it also helps on your project</summary>
72
-
73
- ```bibtex
74
- @inproceedings{cao2022bilateral,
75
- title={Bilateral normal integration},
76
- author={Cao, Xu and Santo, Hiroaki and Shi, Boxin and Okura, Fumio and Matsushita, Yasuyuki},
77
- booktitle={Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part I},
78
- pages={552--567},
79
- year={2022},
80
- organization={Springer}
81
- }
82
- ```
83
- </details>
84
-
85
- <br>
86
-
87
- <!-- TABLE OF CONTENTS -->
88
- <details open="open" style='padding: 10px; border-radius:5px 30px 30px 5px; border-style: solid; border-width: 1px;'>
89
- <summary>Table of Contents</summary>
90
- <ol>
91
- <li>
92
- <a href="#instructions">Instructions</a>
93
- </li>
94
- <li>
95
- <a href="#demo">Demo</a>
96
- </li>
97
- <li>
98
- <a href="#applications">Applications</a>
99
- </li>
100
- <li>
101
- <a href="#citation">Citation</a>
102
- </li>
103
- </ol>
104
- </details>
105
-
106
- <br/>
107
-
108
- ## Instructions
109
-
110
- - See [installion doc for Docker](docs/installation-docker.md) to run a docker container with pre-built image for ECON demo
111
- - See [installion doc for Windows](docs/installation-windows.md) to install all the required packages and setup the models on _Windows_
112
- - See [installion doc for Ubuntu](docs/installation-ubuntu.md) to install all the required packages and setup the models on _Ubuntu_
113
- - See [magic tricks](docs/tricks.md) to know a few technical tricks to further improve and accelerate ECON
114
- - See [testing](docs/testing.md) to prepare the testing data and evaluate ECON
115
-
116
- ## Demo
117
-
118
- ```bash
119
- # For single-person image-based reconstruction (w/ l visualization steps, 1.8min)
120
- python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results
121
-
122
- # For multi-person image-based reconstruction (see config/econ.yaml)
123
- python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results -multi
124
-
125
- # To generate the demo video of reconstruction results
126
- python -m apps.multi_render -n <filename>
127
-
128
- # To animate the reconstruction with SMPL-X pose parameters
129
- python -m apps.avatarizer -n <filename>
130
- ```
131
-
132
- <br/>
133
-
134
- ## More Qualitative Results
135
-
136
- | ![OOD Poses](assets/OOD-poses.jpg) |
137
- | :------------------------------------: |
138
- | _Challenging Poses_ |
139
- | ![OOD Clothes](assets/OOD-outfits.jpg) |
140
- | _Loose Clothes_ |
141
-
142
- ## Applications
143
-
144
- | ![SHHQ](assets/SHHQ.gif) | ![crowd](assets/crowd.gif) |
145
- | :----------------------------------------------------------------------------------------------------: | :-----------------------------------------: |
146
- | _ECON could provide pseudo 3D GT for [SHHQ Dataset](https://github.com/stylegan-human/StyleGAN-Human)_ | _ECON supports multi-person reconstruction_ |
147
-
148
- <br/>
149
- <br/>
150
-
151
- ## Citation
152
-
153
- ```bibtex
154
- @inproceedings{xiu2023econ,
155
- title = {{ECON: Explicit Clothed humans Optimized via Normal integration}},
156
- author = {Xiu, Yuliang and Yang, Jinlong and Cao, Xu and Tzionas, Dimitrios and Black, Michael J.},
157
- booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
158
- month = {June},
159
- year = {2023},
160
- }
161
- ```
162
-
163
- <br/>
164
-
165
- ## Acknowledgments
166
-
167
- We thank [Lea Hering](https://is.mpg.de/person/lhering) and [Radek Daněček](https://is.mpg.de/person/rdanecek) for proof reading, [Yao Feng](https://ps.is.mpg.de/person/yfeng), [Haven Feng](https://is.mpg.de/person/hfeng), and [Weiyang Liu](https://wyliu.com/) for their feedback and discussions, [Tsvetelina Alexiadis](https://ps.is.mpg.de/person/talexiadis) for her help with the AMT perceptual study.
168
-
169
- Here are some great resources we benefit from:
170
-
171
- - [ICON](https://github.com/YuliangXiu/ICON) for SMPL-X Body Fitting
172
- - [BiNI](https://github.com/hoshino042/bilateral_normal_integration) for Bilateral Normal Integration
173
- - [MonoPortDataset](https://github.com/Project-Splinter/MonoPortDataset) for Data Processing, [MonoPort](https://github.com/Project-Splinter/MonoPort) for fast implicit surface query
174
- - [rembg](https://github.com/danielgatis/rembg) for Human Segmentation
175
- - [MediaPipe](https://google.github.io/mediapipe/getting_started/python.html) for full-body landmark estimation
176
- - [PyTorch-NICP](https://github.com/wuhaozhe/pytorch-nicp) for non-rigid registration
177
- - [smplx](https://github.com/vchoutas/smplx), [PyMAF-X](https://www.liuyebin.com/pymaf-x/), [PIXIE](https://github.com/YadiraF/PIXIE) for Human Pose & Shape Estimation
178
- - [CAPE](https://github.com/qianlim/CAPE) and [THuman](https://github.com/ZhengZerong/DeepHuman/tree/master/THUmanDataset) for Dataset
179
- - [PyTorch3D](https://github.com/facebookresearch/pytorch3d) for Differential Rendering
180
-
181
- Some images used in the qualitative examples come from [pinterest.com](https://www.pinterest.com/).
182
-
183
- This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.860768 ([CLIPE Project](https://www.clipe-itn.eu)).
184
-
185
- ## Contributors
186
-
187
- Kudos to all of our amazing contributors! ECON thrives through open-source. In that spirit, we welcome all kinds of contributions from the community.
188
-
189
- <a href="https://github.com/yuliangxiu/ECON/graphs/contributors">
190
- <img src="https://contrib.rocks/image?repo=yuliangxiu/ECON" />
191
- </a>
192
-
193
- _Contributor avatars are randomly shuffled._
194
-
195
- ---
196
-
197
- <br>
198
-
199
- ## License
200
-
201
- This code and model are available for non-commercial scientific research purposes as defined in the [LICENSE](LICENSE) file. By downloading and using the code and model you agree to the terms in the [LICENSE](LICENSE).
202
-
203
- ## Disclosure
204
-
205
- MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB is a part-time employee of Meshcapade, his research was performed solely at, and funded solely by, the Max Planck Society.
206
-
207
- ## Contact
208
-
209
- For technical questions, please contact yuliang.xiu@tue.mpg.de
210
-
211
- For commercial licensing, please contact ps-licensing@tue.mpg.de
 
1
+ title: Fully-textured Clothed Human Digitization (ECON + TEXTure)
2
+ metaTitle: Avatarify yourself from a single image, by Yuliang Xiu
3
+ emoji: 🤼
4
+ colorFrom: green
5
+ colorTo: pink
6
+ sdk: gradio
7
+ sdk_version: 3.27.0
8
+ app_file: app.py
9
+ pinned: true
10
+ python_version: 3.8.15
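
The config above pins the Space to the Gradio SDK (3.27.0) with `app.py` as the entry point on Python 3.8.15. A minimal sanity check before running the demo locally might look like this (a sketch based only on the pins shown above):

```python
# Sketch: confirm the local environment roughly matches the Space config above.
# The expected versions are just the pins from the README front matter.
import sys
import gradio

print("python:", sys.version.split()[0])  # config pins 3.8.15
print("gradio:", gradio.__version__)      # config pins 3.27.0
```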
 
 
app.py ADDED
@@ -0,0 +1,352 @@
1
+ # install
2
+
3
+ import glob
4
+ import gradio as gr
5
+ import os
6
+
7
+ import subprocess
8
+
9
+ if os.getenv('SYSTEM') == 'spaces':
10
+ # subprocess.run('pip install pyembree'.split())
11
+ subprocess.run(
12
+ 'pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu116_pyt1130/download.html'
13
+ .split()
14
+ )
15
+
16
+ from apps.infer import generate_model, generate_video
17
+
18
+ # running
19
+
20
+ description = '''
21
+ # Fully-textured Clothed Human Digitization (ECON + ControlNet)
22
+ ### ECON: Explicit Clothed humans Optimized via Normal integration (CVPR 2023, Highlight)
23
+
24
+ <table>
25
+ <th width="20%">
26
+ <ul>
27
+ <li><strong>Homepage</strong> <a href="https://econ.is.tue.mpg.de/">econ.is.tue.mpg.de</a></li>
28
+ <li><strong>Code</strong> <a href="https://github.com/YuliangXiu/ECON">YuliangXiu/ECON</a></li>
29
+ <li><strong>Paper</strong> <a href="https://arxiv.org/abs/2212.07422">arXiv</a>, <a href="https://readpaper.com/paper/4736821012688027649">ReadPaper</a></li>
30
+ <li><strong>Chatroom</strong> <a href="https://discord.gg/Vqa7KBGRyk">Discord</a></li>
31
+ </ul>
32
+ <br>
33
+ <ul>
34
+ <li><strong>Colab Notebook</strong> <a href='https://colab.research.google.com/drive/1YRgwoRCZIrSB2e7auEWFyG10Xzjbrbno?usp=sharing'><img style="display: inline-block;" src='https://colab.research.google.com/assets/colab-badge.svg' alt='Google Colab'></a></li>
35
+ <li><strong>Blender Plugin</strong> <a href='https://carlosedubarreto.gumroad.com/l/CEB_ECON'><img style="display: inline-block;" src='https://img.shields.io/badge/Blender-F6DDCC.svg?logo=Blender' alt='Blender'></a></li>
36
+ <li><strong>Docker Image</strong> <a href='https://github.com/YuliangXiu/ECON/blob/master/docs/installation-docker.md'><img style="display: inline-block;" src='https://img.shields.io/badge/Docker-9cf.svg?logo=Docker' alt='Docker'></a></li>
37
+ <li><strong>Windows Setup</strong> <a href="https://github.com/YuliangXiu/ECON/blob/master/docs/installation-windows.md"><img style="display: inline-block;" src='https://img.shields.io/badge/Windows-00a2ed.svg?logo=Windows' alt='Windows'></a></li>
38
+ </ul>
39
+
40
+ <br>
41
+ <a href="https://twitter.com/yuliangxiu"><img alt="Twitter Follow" src="https://img.shields.io/twitter/follow/yuliangxiu?style=social"></a><br>
42
+ <iframe src="https://ghbtns.com/github-btn.html?user=yuliangxiu&repo=ECON&type=star&count=true&v=2&size=small" frameborder="0" scrolling="0" width="100" height="20"></iframe>
43
+ </th>
44
+ <th width="40%">
45
+ <iframe width="560" height="315" src="https://www.youtube.com/embed/j5hw4tsWpoY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
46
+ </th>
47
+ <th width="40%">
48
+ <iframe width="560" height="315" src="https://www.youtube.com/embed/sbWZbTf6ZYk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
49
+ </th>
50
+ </table>
51
+
52
+
53
+ #### Citation
54
+ ```
55
+ @inproceedings{xiu2023econ,
56
+ title = {{ECON: Explicit Clothed humans Optimized via Normal integration}},
57
+ author = {Xiu, Yuliang and Yang, Jinlong and Cao, Xu and Tzionas, Dimitrios and Black, Michael J.},
58
+ booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
59
+ month = {June},
60
+ year = {2023},
61
+ }
62
+ ```
63
+
64
+
65
+ <details>
66
+
67
+ <summary>More</summary>
68
+
69
+ #### Acknowledgments:
70
+ - [controlnet-openpose](https://huggingface.co/spaces/diffusers/controlnet-openpose)
71
+ - [TEXTure](https://huggingface.co/spaces/TEXTurePaper/TEXTure)
72
+
73
+
74
+ #### Image Credits
75
+
76
+ * [Pinterest](https://www.pinterest.com/search/pins/?q=parkour&rs=sitelinks_searchbox)
77
+
78
+ #### Related works
79
+
80
+ * [ICON @ MPI-IS](https://icon.is.tue.mpg.de/)
81
+ * [MonoPort @ USC](https://xiuyuliang.cn/monoport)
82
+ * [Phorhum @ Google](https://phorhum.github.io/)
83
+ * [PIFuHD @ Meta](https://shunsukesaito.github.io/PIFuHD/)
84
+ * [PaMIR @ Tsinghua](http://www.liuyebin.com/pamir/pamir.html)
85
+
86
+ </details>
87
+
88
+ <center>
89
+ <h2> Generate pose & prompt-guided images / Upload photos / Use examples &rarr; Submit Image (~2min) &rarr; Generate Video (~2min) </h2>
90
+ </center>
91
+ '''
92
+
93
+ from controlnet_aux import OpenposeDetector
94
+ from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
95
+ from diffusers import UniPCMultistepScheduler
96
+ import gradio as gr
97
+ import torch
98
+ import base64
99
+ from io import BytesIO
100
+ from PIL import Image
101
+
102
+ # live conditioning
103
+ canvas_html = "<pose-canvas id='canvas-root' style='display:flex;max-width: 500px;margin: 0 auto;'></pose-canvas>"
104
+ load_js = """
105
+ async () => {
106
+ const url = "https://huggingface.co/datasets/radames/gradio-components/raw/main/pose-gradio.js"
107
+ fetch(url)
108
+ .then(res => res.text())
109
+ .then(text => {
110
+ const script = document.createElement('script');
111
+ script.type = "module"
112
+ script.src = URL.createObjectURL(new Blob([text], { type: 'application/javascript' }));
113
+ document.head.appendChild(script);
114
+ });
115
+ }
116
+ """
117
+ get_js_image = """
118
+ async (image_in_img, prompt, image_file_live_opt, live_conditioning) => {
119
+ const canvasEl = document.getElementById("canvas-root");
120
+ const data = canvasEl? canvasEl._data : null;
121
+ return [image_in_img, prompt, image_file_live_opt, data]
122
+ }
123
+ """
124
+
125
+ # Constants
126
+ low_threshold = 100
127
+ high_threshold = 200
128
+
129
+ # Models
130
+ pose_model = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
131
+ controlnet = ControlNetModel.from_pretrained(
132
+ "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
133
+ )
134
+ pipe = StableDiffusionControlNetPipeline.from_pretrained(
135
+ "runwayml/stable-diffusion-v1-5",
136
+ controlnet=controlnet,
137
+ safety_checker=None,
138
+ torch_dtype=torch.float16
139
+ )
140
+ pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
141
+
142
+ # This command loads the individual model components on GPU on-demand. So, we don't
143
+ # need to explicitly call pipe.to("cuda").
144
+ pipe.enable_model_cpu_offload()
145
+
146
+ # xformers
147
+ pipe.enable_xformers_memory_efficient_attention()
148
+
149
+ # Generator seed,
150
+ generator = torch.manual_seed(0)
151
+
152
+
153
+ hint_prompts = '''
154
+ <strong>Hints</strong>: <br>
155
+ best quality, extremely detailed, solid color background,
156
+ super detail, high detail, edge lighting, soft focus,
157
+ light and dark contrast, 8k, high detail, edge lighting,
158
+ 3d, c4d, blender, oc renderer, ultra high definition, 3d rendering
159
+ '''
160
+
161
+ def get_pose(image):
162
+ return pose_model(image)
163
+
164
+
165
+ # def generate_texture(input_shape, text, seed, guidance_scale):
166
+ # iface = gr.Interface.load("spaces/TEXTurePaper/TEXTure")
167
+ # output_shape = iface(input_shape, text, seed, guidance_scale)
168
+ # return output_shape
169
+
170
+
171
+ def generate_images(image, prompt, image_file_live_opt='file', live_conditioning=None):
172
+ if image is None and 'image' not in live_conditioning:
173
+ raise gr.Error("Please provide an image")
174
+ try:
175
+ if image_file_live_opt == 'file':
176
+ pose = get_pose(image)
177
+ elif image_file_live_opt == 'webcam':
178
+ base64_img = live_conditioning['image']
179
+ image_data = base64.b64decode(base64_img.split(',')[1])
180
+ pose = Image.open(BytesIO(image_data)).convert('RGB').resize((512, 512))
181
+ output = pipe(
182
+ prompt,
183
+ pose,
184
+ generator=generator,
185
+ num_images_per_prompt=3,
186
+ num_inference_steps=20,
187
+ )
188
+ all_outputs = []
189
+ all_outputs.append(pose)
190
+ for image in output.images:
191
+ all_outputs.append(image)
192
+ return all_outputs, all_outputs
193
+ except Exception as e:
194
+ raise gr.Error(str(e))
195
+
196
+
197
+ def toggle(choice):
198
+ if choice == "file":
199
+ return gr.update(visible=True, value=None), gr.update(visible=False, value=None)
200
+ elif choice == "webcam":
201
+ return gr.update(visible=False, value=None), gr.update(visible=True, value=canvas_html)
202
+
203
+
204
+ examples_pose = glob.glob('examples/pose/*')
205
+ examples_cloth = glob.glob('examples/cloth/*')
206
+
207
+ default_step = 50
208
+
209
+ with gr.Blocks() as demo:
210
+ gr.Markdown(description)
211
+
212
+ out_lst = []
213
+ with gr.Row():
214
+ with gr.Column():
215
+ with gr.Row():
216
+
217
+ live_conditioning = gr.JSON(value={}, visible=False)
218
+
219
+ with gr.Column():
220
+ image_file_live_opt = gr.Radio(["file", "webcam"],
221
+ value="file",
222
+ label="How would you like to upload your image?")
223
+
224
+ with gr.Row():
225
+ image_in_img = gr.Image(source="upload", visible=True, type="pil", label="Image for Pose")
226
+ canvas = gr.HTML(None, elem_id="canvas_html", visible=False)
227
+
228
+ image_file_live_opt.change(
229
+ fn=toggle,
230
+ inputs=[image_file_live_opt],
231
+ outputs=[image_in_img, canvas],
232
+ queue=False
233
+ )
234
+ prompt = gr.Textbox(
235
+ label="Enter your prompt to synthesise the image",
236
+ max_lines=10,
237
+ placeholder=
238
+ "best quality, extremely detailed",
239
+ )
240
+
241
+ gr.Markdown(hint_prompts)
242
+
243
+ with gr.Column():
244
+ gallery = gr.Gallery().style(grid=[2], height="auto")
245
+ gallery_cache = gr.State()
246
+ inp = gr.Image(type="filepath", label="Input Image for ECON")
247
+ fitting_step = gr.inputs.Slider(
248
+ 10, 100, step=10, label='Fitting steps', default=default_step
249
+ )
250
+
251
+ with gr.Row():
252
+ btn_sample = gr.Button("Generate Image")
253
+ btn_submit = gr.Button("Submit Image (~2min)")
254
+
255
+ btn_sample.click(
256
+ fn=generate_images,
257
+ inputs=[image_in_img, prompt, image_file_live_opt, live_conditioning],
258
+ outputs=[gallery, gallery_cache],
259
+ _js=get_js_image
260
+ )
261
+
262
+ def get_select_index(cache, evt: gr.SelectData):
263
+ return cache[evt.index]
264
+
265
+ gallery.select(
266
+ fn=get_select_index,
267
+ inputs=[gallery_cache],
268
+ outputs=[inp],
269
+ )
270
+
271
+ with gr.Row():
272
+
273
+ gr.Examples(
274
+ examples=list(examples_pose),
275
+ inputs=[inp],
276
+ cache_examples=False,
277
+ fn=generate_model,
278
+ outputs=out_lst,
279
+ label="Hard Pose Examples"
280
+ )
281
+ gr.Examples(
282
+ examples=list(examples_cloth),
283
+ inputs=[inp],
284
+ cache_examples=False,
285
+ fn=generate_model,
286
+ outputs=out_lst,
287
+ label="Loose Cloth Examples"
288
+ )
289
+
290
+ with gr.Column():
291
+ overlap_inp = gr.Image(type="filepath", label="Image Normal Overlap")
292
+ with gr.Row():
293
+ out_final = gr.Model3D(clear_color=[0.0, 0.0, 0.0, 0.0], label="Clothed human")
294
+ out_smpl = gr.Model3D(clear_color=[0.0, 0.0, 0.0, 0.0], label="SMPL-X body")
295
+
296
+ out_final_obj = gr.State()
297
+ vis_tensor_path = gr.State()
298
+
299
+ with gr.Row():
300
+ btn_video = gr.Button("Generate Video (~2min)")
301
+ with gr.Row():
302
+ out_vid = gr.Video(label="Shared on Twitter with #ECON")
303
+
304
+ # with gr.Row():
305
+ # btn_texture = gr.Button("Generate Full-texture")
306
+
307
+ # with gr.Row():
308
+ # prompt = gr.Textbox(
309
+ # label="Enter your prompt to texture the mesh",
310
+ # max_lines=10,
311
+ # placeholder=
312
+ # "best quality, extremely detailed, solid color background, super detail, high detail, edge lighting, soft focus, light and dark contrast, 8k, high detail, edge lighting, 3d, c4d, blender, oc renderer, ultra high definition, 3d rendering",
313
+ # )
314
+ # seed = gr.Slider(label='Seed', minimum=0, maximum=100000, value=3, step=1)
315
+ # guidance_scale = gr.Slider(
316
+ # label='Guidance scale', minimum=0, maximum=50, value=7.5, step=0.1
317
+ # )
318
+
319
+ # progress_text = gr.Text(label='Progress')
320
+
321
+ # with gr.Tabs():
322
+ # with gr.TabItem(label='Images from each viewpoint'):
323
+ # viewpoint_images = gr.Gallery(show_label=False)
324
+ # with gr.TabItem(label='Result video'):
325
+ # result_video = gr.Video(show_label=False)
326
+ # with gr.TabItem(label='Output mesh file'):
327
+ # output_file = gr.File(show_label=False)
328
+
329
+ out_lst = [out_smpl, out_final, out_final_obj, overlap_inp, vis_tensor_path]
330
+
331
+ btn_video.click(
332
+ fn=generate_video,
333
+ inputs=[vis_tensor_path],
334
+ outputs=[out_vid],
335
+ )
336
+
337
+ btn_submit.click(fn=generate_model, inputs=[inp, fitting_step], outputs=out_lst)
338
+ # btn_texture.click(
339
+ # fn=generate_texture,
340
+ # inputs=[out_final_obj, prompt, seed, guidance_scale],
341
+ # outputs=[viewpoint_images, result_video, output_file, progress_text]
342
+ # )
343
+
344
+ demo.load(None, None, None, _js=load_js)
345
+
346
+ if __name__ == "__main__":
347
+
348
+ # demo.launch(debug=False, enable_queue=False,
349
+ # auth=(os.environ['USER'], os.environ['PASSWORD']),
350
+ # auth_message="Register at icon.is.tue.mpg.de to get HuggingFace username and password.")
351
+
352
+ demo.launch(debug=True, enable_queue=True)
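
The Blocks app above wires `btn_submit` to `generate_model` and `btn_video` to `generate_video`, both imported from `apps.infer` (diffed below). A minimal headless sketch of the same flow, assuming the call signatures and the output order used in the callbacks (`[out_smpl, out_final, out_final_obj, overlap_inp, vis_tensor_path]`); the input filename is only illustrative:

```python
# Sketch: call the same pipeline the Gradio buttons trigger, without the UI.
# Assumes generate_model / generate_video behave as wired in app.py above.
from apps.infer import generate_model, generate_video

image_path = "examples/pose/sample.png"  # hypothetical image under examples/pose/

# Returned in the order used for out_lst in app.py: SMPL-X body mesh, clothed mesh,
# final OBJ, image/normal overlap, and the cached tensor path used for the video.
smpl_mesh, cloth_mesh, cloth_obj, overlap_png, vis_tensor_path = generate_model(
    image_path, fitting_step=50
)

# Renders the self-rotating preview video from the cached tensor.
video_path, _ = generate_video(vis_tensor_path)
print(cloth_obj, video_path)
```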
apps/__init__.py ADDED
File without changes
apps/infer.py CHANGED
@@ -21,7 +21,6 @@ warnings.filterwarnings("ignore")
21
  logging.getLogger("lightning").setLevel(logging.ERROR)
22
  logging.getLogger("trimesh").setLevel(logging.ERROR)
23
 
24
- import argparse
25
  import os
26
 
27
  import numpy as np
@@ -39,7 +38,7 @@ from lib.common.BNI_utils import save_normal_tensor
39
  from lib.common.config import cfg
40
  from lib.common.imutils import blend_rgb_norm
41
  from lib.common.local_affine import register
42
- from lib.common.render import query_color
43
  from lib.common.train_util import Format, init_loss
44
  from lib.common.voxelize import VoxelGrid
45
  from lib.dataset.mesh_util import *
@@ -48,32 +47,39 @@ from lib.net.geometry import rot6d_to_rotmat, rotation_matrix_to_angle_axis
48
 
49
  torch.backends.cudnn.benchmark = True
50
 
51
- if __name__ == "__main__":
52
 
53
- # loading cfg file
54
- parser = argparse.ArgumentParser()
55
 
56
- parser.add_argument("-gpu", "--gpu_device", type=int, default=0)
57
- parser.add_argument("-loop_smpl", "--loop_smpl", type=int, default=50)
58
- parser.add_argument("-patience", "--patience", type=int, default=5)
59
- parser.add_argument("-in_dir", "--in_dir", type=str, default="./examples")
60
- parser.add_argument("-out_dir", "--out_dir", type=str, default="./results")
61
- parser.add_argument("-seg_dir", "--seg_dir", type=str, default=None)
62
- parser.add_argument("-cfg", "--config", type=str, default="./configs/econ.yaml")
63
- parser.add_argument("-multi", action="store_false")
64
- parser.add_argument("-novis", action="store_true")
65
 
66
- args = parser.parse_args()
 
 
67
 
68
  # cfg read and merge
69
- cfg.merge_from_file(args.config)
70
  cfg.merge_from_file("./lib/pymafx/configs/pymafx_config.yaml")
71
- device = torch.device(f"cuda:{args.gpu_device}")
72
 
73
  # setting for testing on in-the-wild images
74
  cfg_show_list = [
75
- "test_gpus", [args.gpu_device], "mcube_res", 512, "clean_mesh", True, "test_mode", True,
76
- "batch_size", 1
77
  ]
78
 
79
  cfg.merge_from_list(cfg_show_list)
@@ -95,12 +101,11 @@ if __name__ == "__main__":
95
  SMPLX_object = SMPLX()
96
 
97
  dataset_param = {
98
- "image_dir": args.in_dir,
99
- "seg_dir": args.seg_dir,
100
  "use_seg": True, # w/ or w/o segmentation
101
  "hps_type": cfg.bni.hps_type, # pymafx/pixie
102
  "vol_res": cfg.vol_res,
103
- "single": args.multi,
104
  }
105
 
106
  if cfg.bni.use_ifnet:
@@ -120,541 +125,534 @@ if __name__ == "__main__":
120
 
121
  print(colored(f"Dataset Size: {len(dataset)}", "green"))
122
 
123
- pbar = tqdm(dataset)
124
 
125
- for data in pbar:
126
 
127
- losses = init_loss()
128
 
129
- pbar.set_description(f"{data['name']}")
 
 
 
 
130
 
131
- # final results rendered as image (PNG)
132
- # 1. Render the final fitted SMPL (xxx_smpl.png)
133
- # 2. Render the final reconstructed clothed human (xxx_cloth.png)
134
- # 3. Blend the original image with predicted cloth normal (xxx_overlap.png)
135
- # 4. Blend the cropped image with predicted cloth normal (xxx_crop.png)
136
 
137
- os.makedirs(osp.join(args.out_dir, cfg.name, "png"), exist_ok=True)
 
 
138
 
139
- # final reconstruction meshes (OBJ)
140
- # 1. SMPL mesh (xxx_smpl_xx.obj)
141
- # 2. SMPL params (xxx_smpl.npy)
142
- # 3. d-BiNI surfaces (xxx_BNI.obj)
143
- # 4. seperate face/hand mesh (xxx_hand/face.obj)
144
- # 5. full shape impainted by IF-Nets+ after remeshing (xxx_IF.obj)
145
- # 6. sideded or occluded parts (xxx_side.obj)
146
- # 7. final reconstructed clothed human (xxx_full.obj)
147
 
148
- os.makedirs(osp.join(args.out_dir, cfg.name, "obj"), exist_ok=True)
 
 
 
149
 
150
- in_tensor = {
151
- "smpl_faces": data["smpl_faces"], "image": data["img_icon"].to(device), "mask":
152
- data["img_mask"].to(device)
153
- }
 
 
154
 
155
- # The optimizer and variables
156
- optimed_pose = data["body_pose"].requires_grad_(True)
157
- optimed_trans = data["trans"].requires_grad_(True)
158
- optimed_betas = data["betas"].requires_grad_(True)
159
- optimed_orient = data["global_orient"].requires_grad_(True)
160
 
161
- optimizer_smpl = torch.optim.Adam([
162
- optimed_pose, optimed_trans, optimed_betas, optimed_orient
163
- ],
164
- lr=1e-2,
165
- amsgrad=True)
166
- scheduler_smpl = torch.optim.lr_scheduler.ReduceLROnPlateau(
167
- optimizer_smpl,
168
- mode="min",
169
- factor=0.5,
170
- verbose=0,
171
- min_lr=1e-5,
172
- patience=args.patience,
 
 
173
  )
174
 
175
- # [result_loop_1, result_loop_2, ...]
176
- per_data_lst = []
 
 
 
177
 
178
- N_body, N_pose = optimed_pose.shape[:2]
 
 
179
 
180
- smpl_path = f"{args.out_dir}/{cfg.name}/obj/{data['name']}_smpl_00.obj"
181
 
182
- # remove this line if you change the loop_smpl and obtain different SMPL-X fits
183
- if osp.exists(smpl_path):
184
 
185
- smpl_verts_lst = []
186
- smpl_faces_lst = []
187
 
188
- for idx in range(N_body):
189
 
190
- smpl_obj = f"{args.out_dir}/{cfg.name}/obj/{data['name']}_smpl_{idx:02d}.obj"
191
- smpl_mesh = trimesh.load(smpl_obj)
192
- smpl_verts = torch.tensor(smpl_mesh.vertices).to(device).float()
193
- smpl_faces = torch.tensor(smpl_mesh.faces).to(device).long()
194
- smpl_verts_lst.append(smpl_verts)
195
- smpl_faces_lst.append(smpl_faces)
196
 
197
- batch_smpl_verts = torch.stack(smpl_verts_lst)
198
- batch_smpl_faces = torch.stack(smpl_faces_lst)
 
 
199
 
200
  # render optimized mesh as normal [-1,1]
201
  in_tensor["T_normal_F"], in_tensor["T_normal_B"] = dataset.render_normal(
202
- batch_smpl_verts, batch_smpl_faces
 
203
  )
204
 
 
 
205
  with torch.no_grad():
206
  in_tensor["normal_F"], in_tensor["normal_B"] = normal_net.netG(in_tensor)
207
 
208
- in_tensor["smpl_verts"] = batch_smpl_verts * torch.tensor([1., -1., 1.]).to(device)
209
- in_tensor["smpl_faces"] = batch_smpl_faces[:, :, [0, 2, 1]]
 
 
 
 
 
 
 
210
 
211
- else:
212
- # smpl optimization
213
- loop_smpl = tqdm(range(args.loop_smpl))
214
 
215
- for i in loop_smpl:
 
216
 
217
- per_loop_lst = []
218
 
219
- optimizer_smpl.zero_grad()
 
 
220
 
221
- N_body, N_pose = optimed_pose.shape[:2]
 
222
 
223
- # 6d_rot to rot_mat
224
- optimed_orient_mat = rot6d_to_rotmat(optimed_orient.view(-1,
225
- 6)).view(N_body, 1, 3, 3)
226
- optimed_pose_mat = rot6d_to_rotmat(optimed_pose.view(-1,
227
- 6)).view(N_body, N_pose, 3, 3)
228
 
229
- smpl_verts, smpl_landmarks, smpl_joints = dataset.smpl_model(
230
- shape_params=optimed_betas,
231
- expression_params=tensor2variable(data["exp"], device),
232
- body_pose=optimed_pose_mat,
233
- global_pose=optimed_orient_mat,
234
- jaw_pose=tensor2variable(data["jaw_pose"], device),
235
- left_hand_pose=tensor2variable(data["left_hand_pose"], device),
236
- right_hand_pose=tensor2variable(data["right_hand_pose"], device),
237
- )
238
 
239
- smpl_verts = (smpl_verts + optimed_trans) * data["scale"]
240
- smpl_joints = (smpl_joints + optimed_trans) * data["scale"] * torch.tensor([
241
- 1.0, 1.0, -1.0
242
- ]).to(device)
243
-
244
- # landmark errors
245
- smpl_joints_3d = (
246
- smpl_joints[:, dataset.smpl_data.smpl_joint_ids_45_pixie, :] + 1.0
247
- ) * 0.5
248
- in_tensor["smpl_joint"] = smpl_joints[:,
249
- dataset.smpl_data.smpl_joint_ids_24_pixie, :]
250
-
251
- ghum_lmks = data["landmark"][:, SMPLX_object.ghum_smpl_pairs[:, 0], :2].to(device)
252
- ghum_conf = data["landmark"][:, SMPLX_object.ghum_smpl_pairs[:, 0], -1].to(device)
253
- smpl_lmks = smpl_joints_3d[:, SMPLX_object.ghum_smpl_pairs[:, 1], :2]
254
-
255
- # render optimized mesh as normal [-1,1]
256
- in_tensor["T_normal_F"], in_tensor["T_normal_B"] = dataset.render_normal(
257
- smpl_verts * torch.tensor([1.0, -1.0, -1.0]).to(device),
258
- in_tensor["smpl_faces"],
259
- )
260
 
261
- T_mask_F, T_mask_B = dataset.render.get_image(type="mask")
262
-
263
- with torch.no_grad():
264
- in_tensor["normal_F"], in_tensor["normal_B"] = normal_net.netG(in_tensor)
265
-
266
- diff_F_smpl = torch.abs(in_tensor["T_normal_F"] - in_tensor["normal_F"])
267
- diff_B_smpl = torch.abs(in_tensor["T_normal_B"] - in_tensor["normal_B"])
268
-
269
- # silhouette loss
270
- smpl_arr = torch.cat([T_mask_F, T_mask_B], dim=-1)
271
- gt_arr = in_tensor["mask"].repeat(1, 1, 2)
272
- diff_S = torch.abs(smpl_arr - gt_arr)
273
- losses["silhouette"]["value"] = diff_S.mean()
274
-
275
- # large cloth_overlap --> big difference between body and cloth mask
276
- # for loose clothing, reply more on landmarks instead of silhouette+normal loss
277
- cloth_overlap = diff_S.sum(dim=[1, 2]) / gt_arr.sum(dim=[1, 2])
278
- cloth_overlap_flag = cloth_overlap > cfg.cloth_overlap_thres
279
- losses["joint"]["weight"] = [50.0 if flag else 5.0 for flag in cloth_overlap_flag]
280
-
281
- # small body_overlap --> large occlusion or out-of-frame
282
- # for highly occluded body, reply only on high-confidence landmarks, no silhouette+normal loss
283
-
284
- # BUG: PyTorch3D silhouette renderer generates dilated mask
285
- bg_value = in_tensor["T_normal_F"][0, 0, 0, 0]
286
- smpl_arr_fake = torch.cat([
287
- in_tensor["T_normal_F"][:, 0].ne(bg_value).float(),
288
- in_tensor["T_normal_B"][:, 0].ne(bg_value).float()
289
- ],
290
- dim=-1)
291
-
292
- body_overlap = (gt_arr * smpl_arr_fake.gt(0.0)
293
- ).sum(dim=[1, 2]) / smpl_arr_fake.gt(0.0).sum(dim=[1, 2])
294
- body_overlap_mask = (gt_arr * smpl_arr_fake).unsqueeze(1)
295
- body_overlap_flag = body_overlap < cfg.body_overlap_thres
296
-
297
- losses["normal"]["value"] = (
298
- diff_F_smpl * body_overlap_mask[..., :512] +
299
- diff_B_smpl * body_overlap_mask[..., 512:]
300
- ).mean() / 2.0
301
-
302
- losses["silhouette"]["weight"] = [0 if flag else 1.0 for flag in body_overlap_flag]
303
- occluded_idx = torch.where(body_overlap_flag)[0]
304
- ghum_conf[occluded_idx] *= ghum_conf[occluded_idx] > 0.95
305
- losses["joint"]["value"] = (torch.norm(ghum_lmks - smpl_lmks, dim=2) *
306
- ghum_conf).mean(dim=1)
307
-
308
- # Weighted sum of the losses
309
- smpl_loss = 0.0
310
- pbar_desc = "Body Fitting -- "
311
- for k in ["normal", "silhouette", "joint"]:
312
- per_loop_loss = (
313
- losses[k]["value"] * torch.tensor(losses[k]["weight"]).to(device)
314
- ).mean()
315
- pbar_desc += f"{k}: {per_loop_loss:.3f} | "
316
- smpl_loss += per_loop_loss
317
- pbar_desc += f"Total: {smpl_loss:.3f}"
318
- loose_str = ''.join([str(j) for j in cloth_overlap_flag.int().tolist()])
319
- occlude_str = ''.join([str(j) for j in body_overlap_flag.int().tolist()])
320
- pbar_desc += colored(f"| loose:{loose_str}, occluded:{occlude_str}", "yellow")
321
- loop_smpl.set_description(pbar_desc)
322
-
323
- # save intermediate results
324
- if (i == args.loop_smpl - 1) and (not args.novis):
325
-
326
- per_loop_lst.extend([
327
- in_tensor["image"],
328
- in_tensor["T_normal_F"],
329
- in_tensor["normal_F"],
330
- diff_S[:, :, :512].unsqueeze(1).repeat(1, 3, 1, 1),
331
- ])
332
- per_loop_lst.extend([
333
- in_tensor["image"],
334
- in_tensor["T_normal_B"],
335
- in_tensor["normal_B"],
336
- diff_S[:, :, 512:].unsqueeze(1).repeat(1, 3, 1, 1),
337
- ])
338
- per_data_lst.append(
339
- get_optim_grid_image(per_loop_lst, None, nrow=N_body * 2, type="smpl")
340
- )
341
-
342
- smpl_loss.backward()
343
- optimizer_smpl.step()
344
- scheduler_smpl.step(smpl_loss)
345
-
346
- in_tensor["smpl_verts"] = smpl_verts * torch.tensor([1.0, 1.0, -1.0]).to(device)
347
- in_tensor["smpl_faces"] = in_tensor["smpl_faces"][:, :, [0, 2, 1]]
348
-
349
- if not args.novis:
350
- per_data_lst[-1].save(
351
- osp.join(args.out_dir, cfg.name, f"png/{data['name']}_smpl.png")
352
- )
353
 
354
- if not args.novis:
355
- img_crop_path = osp.join(args.out_dir, cfg.name, "png", f"{data['name']}_crop.png")
356
- torchvision.utils.save_image(
357
- torch.cat([
358
- data["img_crop"][:, :3], (in_tensor['normal_F'].detach().cpu() + 1.0) * 0.5,
359
- (in_tensor['normal_B'].detach().cpu() + 1.0) * 0.5
360
- ],
361
- dim=3), img_crop_path
 
 
 
362
  )
 
363
 
364
- rgb_norm_F = blend_rgb_norm(in_tensor["normal_F"], data)
365
- rgb_norm_B = blend_rgb_norm(in_tensor["normal_B"], data)
 
 
 
366
 
367
- img_overlap_path = osp.join(args.out_dir, cfg.name, f"png/{data['name']}_overlap.png")
368
- torchvision.utils.save_image(
369
- torch.cat([data["img_raw"], rgb_norm_F, rgb_norm_B], dim=-1) / 255.,
370
- img_overlap_path
371
- )
372
 
373
- smpl_obj_lst = []
 
374
 
375
- for idx in range(N_body):
376
 
377
- smpl_obj = trimesh.Trimesh(
378
- in_tensor["smpl_verts"].detach().cpu()[idx] * torch.tensor([1.0, -1.0, 1.0]),
379
- in_tensor["smpl_faces"].detach().cpu()[0][:, [0, 2, 1]],
380
- process=False,
381
- maintains_order=True,
382
- )
383
 
384
- smpl_obj_path = f"{args.out_dir}/{cfg.name}/obj/{data['name']}_smpl_{idx:02d}.obj"
385
-
386
- if not osp.exists(smpl_obj_path):
387
- smpl_obj.export(smpl_obj_path)
388
- smpl_info = {
389
- "betas":
390
- optimed_betas[idx].detach().cpu().unsqueeze(0),
391
- "body_pose":
392
- rotation_matrix_to_angle_axis(optimed_pose_mat[idx].detach()
393
- ).cpu().unsqueeze(0),
394
- "global_orient":
395
- rotation_matrix_to_angle_axis(optimed_orient_mat[idx].detach()
396
- ).cpu().unsqueeze(0),
397
- "transl":
398
- optimed_trans[idx].detach().cpu(),
399
- "expression":
400
- data["exp"][idx].cpu().unsqueeze(0),
401
- "jaw_pose":
402
- rotation_matrix_to_angle_axis(data["jaw_pose"][idx]).cpu().unsqueeze(0),
403
- "left_hand_pose":
404
- rotation_matrix_to_angle_axis(data["left_hand_pose"][idx]).cpu().unsqueeze(0),
405
- "right_hand_pose":
406
- rotation_matrix_to_angle_axis(data["right_hand_pose"][idx]).cpu().unsqueeze(0),
407
- "scale":
408
- data["scale"][idx].cpu(),
409
- }
410
- np.save(
411
- smpl_obj_path.replace(".obj", ".npy"),
412
- smpl_info,
413
- allow_pickle=True,
414
- )
415
- smpl_obj_lst.append(smpl_obj)
416
 
417
- del optimizer_smpl
418
- del optimed_betas
419
- del optimed_orient
420
- del optimed_pose
421
- del optimed_trans
422
 
423
- torch.cuda.empty_cache()
 
 
 
424
 
425
- # ------------------------------------------------------------------------------------------------------------------
426
- # clothing refinement
427
 
428
- per_data_lst = []
429
 
430
- batch_smpl_verts = in_tensor["smpl_verts"].detach() * torch.tensor([1.0, -1.0, 1.0],
431
- device=device)
432
- batch_smpl_faces = in_tensor["smpl_faces"].detach()[:, :, [0, 2, 1]]
 
433
 
434
- in_tensor["depth_F"], in_tensor["depth_B"] = dataset.render_depth(
435
- batch_smpl_verts, batch_smpl_faces
 
 
 
 
436
  )
437
 
438
- per_loop_lst = []
 
 
439
 
440
- in_tensor["BNI_verts"] = []
441
- in_tensor["BNI_faces"] = []
442
- in_tensor["body_verts"] = []
443
- in_tensor["body_faces"] = []
444
 
445
- for idx in range(N_body):
 
446
 
447
- final_path = f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_full.obj"
 
448
 
449
- side_mesh = smpl_obj_lst[idx].copy()
450
- face_mesh = smpl_obj_lst[idx].copy()
451
- hand_mesh = smpl_obj_lst[idx].copy()
452
- smplx_mesh = smpl_obj_lst[idx].copy()
453
 
454
- # save normals, depths and masks
455
- BNI_dict = save_normal_tensor(
456
- in_tensor,
457
- idx,
458
- osp.join(args.out_dir, cfg.name, f"BNI/{data['name']}_{idx}"),
459
- cfg.bni.thickness,
460
- )
461
 
462
- # BNI process
463
- BNI_object = BNI(
464
- dir_path=osp.join(args.out_dir, cfg.name, "BNI"),
465
- name=data["name"],
466
- BNI_dict=BNI_dict,
467
- cfg=cfg.bni,
468
- device=device
 
469
  )
470
 
471
- BNI_object.extract_surface(False)
 
 
472
 
473
- in_tensor["body_verts"].append(torch.tensor(smpl_obj_lst[idx].vertices).float())
474
- in_tensor["body_faces"].append(torch.tensor(smpl_obj_lst[idx].faces).long())
475
 
476
- # requires shape completion when low overlap
477
- # replace SMPL by completed mesh as side_mesh
478
 
479
- if cfg.bni.use_ifnet:
 
 
480
 
481
- side_mesh_path = f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_IF.obj"
 
 
 
482
 
483
- side_mesh = apply_face_mask(side_mesh, ~SMPLX_object.smplx_eyeball_fid_mask)
484
 
485
- # mesh completion via IF-net
486
- in_tensor.update(
487
- dataset.depth_to_voxel({
488
- "depth_F": BNI_object.F_depth.unsqueeze(0), "depth_B":
489
- BNI_object.B_depth.unsqueeze(0)
490
- })
 
 
491
  )
492
 
493
- occupancies = VoxelGrid.from_mesh(side_mesh, cfg.vol_res, loc=[
494
- 0,
495
- ] * 3, scale=2.0).data.transpose(2, 1, 0)
496
- occupancies = np.flip(occupancies, axis=1)
 
 
 
497
 
498
- in_tensor["body_voxels"] = torch.tensor(occupancies.copy()
499
- ).float().unsqueeze(0).to(device)
500
 
501
- with torch.no_grad():
502
- sdf = ifnet.reconEngine(netG=ifnet.netG, batch=in_tensor)
503
- verts_IF, faces_IF = ifnet.reconEngine.export_mesh(sdf)
 
504
 
505
- if ifnet.clean_mesh_flag:
506
- verts_IF, faces_IF = clean_mesh(verts_IF, faces_IF)
507
 
508
- side_mesh = trimesh.Trimesh(verts_IF, faces_IF)
509
- side_mesh = remesh_laplacian(side_mesh, side_mesh_path)
 
510
 
511
- else:
512
- side_mesh = apply_vertex_mask(
513
- side_mesh,
514
- (
515
- SMPLX_object.front_flame_vertex_mask + SMPLX_object.smplx_mano_vertex_mask +
516
- SMPLX_object.eyeball_vertex_mask
517
- ).eq(0).float(),
518
- )
519
 
520
- #register side_mesh to BNI surfaces
521
- side_mesh = Meshes(
522
- verts=[torch.tensor(side_mesh.vertices).float()],
523
- faces=[torch.tensor(side_mesh.faces).long()],
524
- ).to(device)
525
- sm = SubdivideMeshes(side_mesh)
526
- side_mesh = register(BNI_object.F_B_trimesh, sm(side_mesh), device)
527
-
528
- side_verts = torch.tensor(side_mesh.vertices).float().to(device)
529
- side_faces = torch.tensor(side_mesh.faces).long().to(device)
530
-
531
- # Possion Fusion between SMPLX and BNI
532
- # 1. keep the faces invisible to front+back cameras
533
- # 2. keep the front-FLAME+MANO faces
534
- # 3. remove eyeball faces
535
-
536
- # export intermediate meshes
537
- BNI_object.F_B_trimesh.export(
538
- f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_BNI.obj"
539
- )
540
- full_lst = []
541
-
542
- if "face" in cfg.bni.use_smpl:
543
-
544
- # only face
545
- face_mesh = apply_vertex_mask(face_mesh, SMPLX_object.front_flame_vertex_mask)
546
- face_mesh.vertices = face_mesh.vertices - np.array([0, 0, cfg.bni.thickness])
547
-
548
- # remove face neighbor triangles
549
- BNI_object.F_B_trimesh = part_removal(
550
- BNI_object.F_B_trimesh,
551
- face_mesh,
552
- cfg.bni.face_thres,
553
- device,
554
- smplx_mesh,
555
- region="face"
556
- )
557
- side_mesh = part_removal(
558
- side_mesh, face_mesh, cfg.bni.face_thres, device, smplx_mesh, region="face"
559
- )
560
- face_mesh.export(f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_face.obj")
561
- full_lst += [face_mesh]
562
-
563
- if "hand" in cfg.bni.use_smpl and (True in data['hands_visibility'][idx]):
564
-
565
- hand_mask = torch.zeros(SMPLX_object.smplx_verts.shape[0], )
566
- if data['hands_visibility'][idx][0]:
567
- hand_mask.index_fill_(
568
- 0, torch.tensor(SMPLX_object.smplx_mano_vid_dict["left_hand"]), 1.0
569
- )
570
- if data['hands_visibility'][idx][1]:
571
- hand_mask.index_fill_(
572
- 0, torch.tensor(SMPLX_object.smplx_mano_vid_dict["right_hand"]), 1.0
573
- )
574
-
575
- # only hands
576
- hand_mesh = apply_vertex_mask(hand_mesh, hand_mask)
577
-
578
- # remove hand neighbor triangles
579
- BNI_object.F_B_trimesh = part_removal(
580
- BNI_object.F_B_trimesh,
581
- hand_mesh,
582
- cfg.bni.hand_thres,
583
- device,
584
- smplx_mesh,
585
- region="hand"
586
- )
587
- side_mesh = part_removal(
588
- side_mesh, hand_mesh, cfg.bni.hand_thres, device, smplx_mesh, region="hand"
589
- )
590
- hand_mesh.export(f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_hand.obj")
591
- full_lst += [hand_mesh]
592
 
593
- full_lst += [BNI_object.F_B_trimesh]
594
 
595
- # initial side_mesh could be SMPLX or IF-net
596
- side_mesh = part_removal(
597
- side_mesh, sum(full_lst), 2e-2, device, smplx_mesh, region="", clean=False
 
 
 
598
  )
 
 
 
599
 
600
- full_lst += [side_mesh]
601
 
602
- # # export intermediate meshes
603
- BNI_object.F_B_trimesh.export(
604
- f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_BNI.obj"
605
- )
606
- side_mesh.export(f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_side.obj")
607
 
608
- if cfg.bni.use_poisson:
609
- final_mesh = poisson(
610
- sum(full_lst),
611
- final_path,
612
- cfg.bni.poisson_depth,
613
- )
614
- print(
615
- colored(
616
- f"\n Poisson completion to {Format.start} {final_path} {Format.end}",
617
- "yellow"
618
- )
619
- )
620
- else:
621
- final_mesh = sum(full_lst)
622
- final_mesh.export(final_path)
623
-
624
- if not args.novis:
625
- dataset.render.load_meshes(final_mesh.vertices, final_mesh.faces)
626
- rotate_recon_lst = dataset.render.get_image(cam_type="four")
627
- per_loop_lst.extend([in_tensor['image'][idx:idx + 1]] + rotate_recon_lst)
628
-
629
- if cfg.bni.texture_src == 'image':
630
-
631
- # coloring the final mesh (front: RGB pixels, back: normal colors)
632
- final_colors = query_color(
633
- torch.tensor(final_mesh.vertices).float(),
634
- torch.tensor(final_mesh.faces).long(),
635
- in_tensor["image"][idx:idx + 1],
636
- device=device,
637
- )
638
- final_mesh.visual.vertex_colors = final_colors
639
- final_mesh.export(final_path)
640
 
641
- elif cfg.bni.texture_src == 'SD':
 
642
 
643
- # !TODO: add texture from Stable Diffusion
644
- pass
 
645
 
646
- if len(per_loop_lst) > 0 and (not args.novis):
 
 
 
647
 
648
- per_data_lst.append(get_optim_grid_image(per_loop_lst, None, nrow=5, type="cloth"))
649
- per_data_lst[-1].save(osp.join(args.out_dir, cfg.name, f"png/{data['name']}_cloth.png"))
 
 
 
 
650
 
651
- # for video rendering
652
- in_tensor["BNI_verts"].append(torch.tensor(final_mesh.vertices).float())
653
- in_tensor["BNI_faces"].append(torch.tensor(final_mesh.faces).long())
 
654
 
655
- os.makedirs(osp.join(args.out_dir, cfg.name, "vid"), exist_ok=True)
656
- in_tensor["uncrop_param"] = data["uncrop_param"]
657
- in_tensor["img_raw"] = data["img_raw"]
658
- torch.save(
659
- in_tensor, osp.join(args.out_dir, cfg.name, f"vid/{data['name']}_in_tensor.pt")
660
- )
 
 
21
  logging.getLogger("lightning").setLevel(logging.ERROR)
22
  logging.getLogger("trimesh").setLevel(logging.ERROR)
23
 
 
24
  import os
25
 
26
  import numpy as np
 
38
  from lib.common.config import cfg
39
  from lib.common.imutils import blend_rgb_norm
40
  from lib.common.local_affine import register
41
+ from lib.common.render import query_color, Render
42
  from lib.common.train_util import Format, init_loss
43
  from lib.common.voxelize import VoxelGrid
44
  from lib.dataset.mesh_util import *
 
47
 
48
  torch.backends.cudnn.benchmark = True
49
 
50
+ def generate_video(vis_tensor_path):
51
 
52
+ in_tensor = torch.load(vis_tensor_path)
 
53
 
54
+ render = Render(size=512, device=torch.device("cuda:0"))
 
 
 
55
 
56
+ # visualize the final results in self-rotation mode
57
+ verts_lst = in_tensor["body_verts"] + in_tensor["BNI_verts"]
58
+ faces_lst = in_tensor["body_faces"] + in_tensor["BNI_faces"]
59
+
60
+ # self-rotated video
61
+ tmp_path = vis_tensor_path.replace("_in_tensor.pt", "_tmp.mp4")
62
+ out_path = vis_tensor_path.replace("_in_tensor.pt", ".mp4")
63
+
64
+ render.load_meshes(verts_lst, faces_lst)
65
+ render.get_rendered_video_multi(in_tensor, tmp_path)
66
+
67
+ os.system(f'ffmpeg -y -loglevel quiet -stats -i {tmp_path} -c:v libx264 {out_path}')
68
+
69
+ return out_path, out_path
70
+
71
+ def generate_model(in_path, fitting_step=50):
72
+
73
+ out_dir = "./results"
74
 
75
  # cfg read and merge
76
+ cfg.merge_from_file("./configs/econ.yaml")
77
  cfg.merge_from_file("./lib/pymafx/configs/pymafx_config.yaml")
78
+ device = torch.device(f"cuda:0")
79
 
80
  # setting for testing on in-the-wild images
81
  cfg_show_list = [
82
+ "test_gpus", [0], "mcube_res", 512, "clean_mesh", True, "test_mode", True, "batch_size", 1
 
83
  ]
84
 
85
  cfg.merge_from_list(cfg_show_list)
 
101
  SMPLX_object = SMPLX()
102
 
103
  dataset_param = {
104
+ "image_path": in_path,
 
105
  "use_seg": True, # w/ or w/o segmentation
106
  "hps_type": cfg.bni.hps_type, # pymafx/pixie
107
  "vol_res": cfg.vol_res,
108
+ "single": True,
109
  }
110
 
111
  if cfg.bni.use_ifnet:
 
125
 
126
  print(colored(f"Dataset Size: {len(dataset)}", "green"))
127
 
128
+ data = dataset[0]
129
 
130
+ losses = init_loss()
131
 
132
+ print(f"{data['name']}")
133
 
134
+ # final results rendered as image (PNG)
135
+ # 1. Render the final fitted SMPL (xxx_smpl.png)
136
+ # 2. Render the final reconstructed clothed human (xxx_cloth.png)
137
+ # 3. Blend the original image with predicted cloth normal (xxx_overlap.png)
138
+ # 4. Blend the cropped image with predicted cloth normal (xxx_crop.png)
139
 
140
+ os.makedirs(osp.join(out_dir, cfg.name, "png"), exist_ok=True)
 
 
 
 
141
 
142
+ # final reconstruction meshes (OBJ)
143
+ # 1. SMPL mesh (xxx_smpl_xx.obj)
144
+ # 2. SMPL params (xxx_smpl.npy)
145
+ # 3. d-BiNI surfaces (xxx_BNI.obj)
146
+ # 4. separate face/hand mesh (xxx_hand/face.obj)
147
+ # 5. full shape inpainted by IF-Nets+ after remeshing (xxx_IF.obj)
148
+ # 6. side or occluded parts (xxx_side.obj)
149
+ # 7. final reconstructed clothed human (xxx_full.obj)
150
 
151
+ os.makedirs(osp.join(out_dir, cfg.name, "obj"), exist_ok=True)
 
 
 
 
 
 
 
152
 
153
+ in_tensor = {
154
+ "smpl_faces": data["smpl_faces"], "image": data["img_icon"].to(device), "mask":
155
+ data["img_mask"].to(device)
156
+ }
157
 
158
+ # The optimizer and variables
159
+ optimed_pose = data["body_pose"].requires_grad_(True)
160
+ optimed_trans = data["trans"].requires_grad_(True)
161
+ optimed_betas = data["betas"].requires_grad_(True)
162
+ optimed_orient = data["global_orient"].requires_grad_(True)
163
+
164
+ optimizer_smpl = torch.optim.Adam([optimed_pose, optimed_trans, optimed_betas, optimed_orient],
165
+ lr=1e-2,
166
+ amsgrad=True)
167
+ scheduler_smpl = torch.optim.lr_scheduler.ReduceLROnPlateau(
168
+ optimizer_smpl,
169
+ mode="min",
170
+ factor=0.5,
171
+ verbose=0,
172
+ min_lr=1e-5,
173
+ patience=5,
174
+ )
175
 
176
+ # [result_loop_1, result_loop_2, ...]
177
+ per_data_lst = []
 
 
 
178
 
179
+ N_body, N_pose = optimed_pose.shape[:2]
180
+
181
+ smpl_path = f"{out_dir}/{cfg.name}/obj/{data['name']}_smpl_00.obj"
182
+
183
+ # remove this line if you change the loop_smpl and obtain different SMPL-X fits
184
+ if osp.exists(smpl_path):
185
+
186
+ smpl_verts_lst = []
187
+ smpl_faces_lst = []
188
+
189
+ for idx in range(N_body):
190
+
191
+ smpl_obj = f"{out_dir}/{cfg.name}/obj/{data['name']}_smpl_{idx:02d}.obj"
192
+ smpl_mesh = trimesh.load(smpl_obj)
193
+ smpl_verts = torch.tensor(smpl_mesh.vertices).to(device).float()
194
+ smpl_faces = torch.tensor(smpl_mesh.faces).to(device).long()
195
+ smpl_verts_lst.append(smpl_verts)
196
+ smpl_faces_lst.append(smpl_faces)
197
+
198
+ batch_smpl_verts = torch.stack(smpl_verts_lst)
199
+ batch_smpl_faces = torch.stack(smpl_faces_lst)
200
+
201
+ # render optimized mesh as normal [-1,1]
202
+ in_tensor["T_normal_F"], in_tensor["T_normal_B"] = dataset.render_normal(
203
+ batch_smpl_verts, batch_smpl_faces
204
  )
205
 
206
+ with torch.no_grad():
207
+ in_tensor["normal_F"], in_tensor["normal_B"] = normal_net.netG(in_tensor)
208
+
209
+ in_tensor["smpl_verts"] = batch_smpl_verts * torch.tensor([1., -1., 1.]).to(device)
210
+ in_tensor["smpl_faces"] = batch_smpl_faces[:, :, [0, 2, 1]]
211
 
212
+ else:
213
+ # smpl optimization
214
+ loop_smpl = tqdm(range(fitting_step))
215
 
216
+ for i in loop_smpl:
217
 
218
+ per_loop_lst = []
 
219
 
220
+ optimizer_smpl.zero_grad()
 
221
 
222
+ N_body, N_pose = optimed_pose.shape[:2]
223
 
224
+ # 6d_rot to rot_mat
225
+ optimed_orient_mat = rot6d_to_rotmat(optimed_orient.view(-1, 6)).view(N_body, 1, 3, 3)
226
+ optimed_pose_mat = rot6d_to_rotmat(optimed_pose.view(-1, 6)).view(N_body, N_pose, 3, 3)
 
 
 
227
 
228
+ smpl_verts, smpl_landmarks, smpl_joints = dataset.smpl_model(
229
+ shape_params=optimed_betas,
230
+ expression_params=tensor2variable(data["exp"], device),
231
+ body_pose=optimed_pose_mat,
232
+ global_pose=optimed_orient_mat,
233
+ jaw_pose=tensor2variable(data["jaw_pose"], device),
234
+ left_hand_pose=tensor2variable(data["left_hand_pose"], device),
235
+ right_hand_pose=tensor2variable(data["right_hand_pose"], device),
236
+ )
237
+
238
+ smpl_verts = (smpl_verts + optimed_trans) * data["scale"]
239
+ smpl_joints = (smpl_joints + optimed_trans) * data["scale"] * torch.tensor([
240
+ 1.0, 1.0, -1.0
241
+ ]).to(device)
242
+
243
+ # landmark errors
244
+ smpl_joints_3d = (
245
+ smpl_joints[:, dataset.smpl_data.smpl_joint_ids_45_pixie, :] + 1.0
246
+ ) * 0.5
247
+ in_tensor["smpl_joint"] = smpl_joints[:, dataset.smpl_data.smpl_joint_ids_24_pixie, :]
248
+
249
+ ghum_lmks = data["landmark"][:, SMPLX_object.ghum_smpl_pairs[:, 0], :2].to(device)
250
+ ghum_conf = data["landmark"][:, SMPLX_object.ghum_smpl_pairs[:, 0], -1].to(device)
251
+ smpl_lmks = smpl_joints_3d[:, SMPLX_object.ghum_smpl_pairs[:, 1], :2]
252
 
253
  # render optimized mesh as normal [-1,1]
254
  in_tensor["T_normal_F"], in_tensor["T_normal_B"] = dataset.render_normal(
255
+ smpl_verts * torch.tensor([1.0, -1.0, -1.0]).to(device),
256
+ in_tensor["smpl_faces"],
257
  )
258
 
259
+ T_mask_F, T_mask_B = dataset.render.get_image(type="mask")
260
+
261
  with torch.no_grad():
262
  in_tensor["normal_F"], in_tensor["normal_B"] = normal_net.netG(in_tensor)
263
 
264
+ diff_F_smpl = torch.abs(in_tensor["T_normal_F"] - in_tensor["normal_F"])
265
+ diff_B_smpl = torch.abs(in_tensor["T_normal_B"] - in_tensor["normal_B"])
266
+
267
+ # silhouette loss
268
+ smpl_arr = torch.cat([T_mask_F, T_mask_B], dim=-1)
269
+ gt_arr = in_tensor["mask"].repeat(1, 1, 2)
270
+ diff_S = torch.abs(smpl_arr - gt_arr)
271
+ losses["silhouette"]["value"] = diff_S.mean()
272
+
273
+ # large cloth_overlap --> big difference between body and cloth mask
274
+ # for loose clothing, reply more on landmarks instead of silhouette+normal loss
275
+ cloth_overlap = diff_S.sum(dim=[1, 2]) / gt_arr.sum(dim=[1, 2])
276
+ cloth_overlap_flag = cloth_overlap > cfg.cloth_overlap_thres
277
+ losses["joint"]["weight"] = [50.0 if flag else 5.0 for flag in cloth_overlap_flag]
278
+
279
+ # small body_overlap --> large occlusion or out-of-frame
280
+ # for a highly occluded body, rely only on high-confidence landmarks, without the silhouette+normal losses
281
+
282
+ # BUG: PyTorch3D silhouette renderer generates dilated mask
283
+ bg_value = in_tensor["T_normal_F"][0, 0, 0, 0]
284
+ smpl_arr_fake = torch.cat([
285
+ in_tensor["T_normal_F"][:, 0].ne(bg_value).float(),
286
+ in_tensor["T_normal_B"][:, 0].ne(bg_value).float()
287
+ ],
288
+ dim=-1)
289
+
290
+ body_overlap = (gt_arr * smpl_arr_fake.gt(0.0)
291
+ ).sum(dim=[1, 2]) / smpl_arr_fake.gt(0.0).sum(dim=[1, 2])
292
+ body_overlap_mask = (gt_arr * smpl_arr_fake).unsqueeze(1)
293
+ body_overlap_flag = body_overlap < cfg.body_overlap_thres
294
+
295
+ losses["normal"]["value"] = (
296
+ diff_F_smpl * body_overlap_mask[..., :512] +
297
+ diff_B_smpl * body_overlap_mask[..., 512:]
298
+ ).mean() / 2.0
299
+
300
+ losses["silhouette"]["weight"] = [0 if flag else 1.0 for flag in body_overlap_flag]
301
+ occluded_idx = torch.where(body_overlap_flag)[0]
302
+ ghum_conf[occluded_idx] *= ghum_conf[occluded_idx] > 0.95
303
+ losses["joint"]["value"] = (torch.norm(ghum_lmks - smpl_lmks, dim=2) *
304
+ ghum_conf).mean(dim=1)
305
+
306
+ # Weighted sum of the losses
307
+ smpl_loss = 0.0
308
+ pbar_desc = "Body Fitting -- "
309
+ for k in ["normal", "silhouette", "joint"]:
310
+ per_loop_loss = (losses[k]["value"] *
311
+ torch.tensor(losses[k]["weight"]).to(device)).mean()
312
+ pbar_desc += f"{k}: {per_loop_loss:.3f} | "
313
+ smpl_loss += per_loop_loss
314
+ pbar_desc += f"Total: {smpl_loss:.3f}"
315
+ loose_str = ''.join([str(j) for j in cloth_overlap_flag.int().tolist()])
316
+ occlude_str = ''.join([str(j) for j in body_overlap_flag.int().tolist()])
317
+ pbar_desc += colored(f"| loose:{loose_str}, occluded:{occlude_str}", "yellow")
318
+ loop_smpl.set_description(pbar_desc)
319
+
320
+ # save intermediate results
321
+ if (i == fitting_step - 1):
322
+
323
+ per_loop_lst.extend([
324
+ in_tensor["image"],
325
+ in_tensor["T_normal_F"],
326
+ in_tensor["normal_F"],
327
+ diff_S[:, :, :512].unsqueeze(1).repeat(1, 3, 1, 1),
328
+ ])
329
+ per_loop_lst.extend([
330
+ in_tensor["image"],
331
+ in_tensor["T_normal_B"],
332
+ in_tensor["normal_B"],
333
+ diff_S[:, :, 512:].unsqueeze(1).repeat(1, 3, 1, 1),
334
+ ])
335
+ per_data_lst.append(
336
+ get_optim_grid_image(per_loop_lst, None, nrow=N_body * 2, type="smpl")
337
+ )
338
 
339
+ smpl_loss.backward()
340
+ optimizer_smpl.step()
341
+ scheduler_smpl.step(smpl_loss)
342
 
343
+ in_tensor["smpl_verts"] = smpl_verts * torch.tensor([1.0, 1.0, -1.0]).to(device)
344
+ in_tensor["smpl_faces"] = in_tensor["smpl_faces"][:, :, [0, 2, 1]]
345
 
346
+ per_data_lst[-1].save(osp.join(out_dir, cfg.name, f"png/{data['name']}_smpl.png"))
347
 
348
+ img_crop_path = osp.join(out_dir, cfg.name, "png", f"{data['name']}_crop.png")
349
+ torchvision.utils.save_image(
350
+ torch.cat([
351
+ data["img_crop"][:, :3], (in_tensor['normal_F'].detach().cpu() + 1.0) * 0.5,
352
+ (in_tensor['normal_B'].detach().cpu() + 1.0) * 0.5
353
+ ],
354
+ dim=3), img_crop_path
355
+ )
356
 
357
+ rgb_norm_F = blend_rgb_norm(in_tensor["normal_F"], data)
358
+ rgb_norm_B = blend_rgb_norm(in_tensor["normal_B"], data)
359
 
360
+ img_overlap_path = osp.join(out_dir, cfg.name, f"png/{data['name']}_overlap.png")
361
+ torchvision.utils.save_image(
362
+ torch.cat([data["img_raw"], rgb_norm_F, rgb_norm_B], dim=-1) / 255., img_overlap_path
363
+ )
 
364
 
365
+ smpl_obj_lst = []
 
366
 
367
+ for idx in range(N_body):
368
 
369
+ smpl_obj = trimesh.Trimesh(
370
+ in_tensor["smpl_verts"].detach().cpu()[idx] * torch.tensor([1.0, -1.0, 1.0]),
371
+ in_tensor["smpl_faces"].detach().cpu()[0][:, [0, 2, 1]],
372
+ process=False,
373
+ maintain_order=True,
374
+ )
375
 
376
+ smpl_obj_path = f"{out_dir}/{cfg.name}/obj/{data['name']}_smpl_{idx:02d}.obj"
377
+
378
+ if not osp.exists(smpl_obj_path):
379
+ smpl_obj.export(smpl_obj_path)
380
+ smpl_obj.export(smpl_obj_path.replace(".obj", ".glb"))
381
+ smpl_info = {
382
+ "betas":
383
+ optimed_betas[idx].detach().cpu().unsqueeze(0),
384
+ "body_pose":
385
+ rotation_matrix_to_angle_axis(optimed_pose_mat[idx].detach()).cpu().unsqueeze(0),
386
+ "global_orient":
387
+ rotation_matrix_to_angle_axis(optimed_orient_mat[idx].detach()).cpu().unsqueeze(0),
388
+ "transl":
389
+ optimed_trans[idx].detach().cpu(),
390
+ "expression":
391
+ data["exp"][idx].cpu().unsqueeze(0),
392
+ "jaw_pose":
393
+ rotation_matrix_to_angle_axis(data["jaw_pose"][idx]).cpu().unsqueeze(0),
394
+ "left_hand_pose":
395
+ rotation_matrix_to_angle_axis(data["left_hand_pose"][idx]).cpu().unsqueeze(0),
396
+ "right_hand_pose":
397
+ rotation_matrix_to_angle_axis(data["right_hand_pose"][idx]).cpu().unsqueeze(0),
398
+ "scale":
399
+ data["scale"][idx].cpu(),
400
+ }
401
+ np.save(
402
+ smpl_obj_path.replace(".obj", ".npy"),
403
+ smpl_info,
404
+ allow_pickle=True,
405
  )
406
+ smpl_obj_lst.append(smpl_obj)
407
 
408
+ del optimizer_smpl
409
+ del optimed_betas
410
+ del optimed_orient
411
+ del optimed_pose
412
+ del optimed_trans
413
 
414
+ torch.cuda.empty_cache()
 
 
 
 
415
 
416
+ # ------------------------------------------------------------------------------------------------------------------
417
+ # clothing refinement
418
 
419
+ per_data_lst = []
420
 
421
+ batch_smpl_verts = in_tensor["smpl_verts"].detach() * torch.tensor([1.0, -1.0, 1.0],
422
+ device=device)
423
+ batch_smpl_faces = in_tensor["smpl_faces"].detach()[:, :, [0, 2, 1]]
 
 
 
424
 
425
+ in_tensor["depth_F"], in_tensor["depth_B"] = dataset.render_depth(
426
+ batch_smpl_verts, batch_smpl_faces
427
+ )
428
 
429
+ per_loop_lst = []
 
 
 
 
430
 
431
+ in_tensor["BNI_verts"] = []
432
+ in_tensor["BNI_faces"] = []
433
+ in_tensor["body_verts"] = []
434
+ in_tensor["body_faces"] = []
435
 
436
+ for idx in range(N_body):
 
437
 
438
+ final_path = f"{out_dir}/{cfg.name}/obj/{data['name']}_{idx}_full.obj"
439
 
440
+ side_mesh = smpl_obj_lst[idx].copy()
441
+ face_mesh = smpl_obj_lst[idx].copy()
442
+ hand_mesh = smpl_obj_lst[idx].copy()
443
+ smplx_mesh = smpl_obj_lst[idx].copy()
444
 
445
+ # save normals, depths and masks
446
+ BNI_dict = save_normal_tensor(
447
+ in_tensor,
448
+ idx,
449
+ osp.join(out_dir, cfg.name, f"BNI/{data['name']}_{idx}"),
450
+ cfg.bni.thickness,
451
  )
452
 
453
+ # BNI process
454
+ BNI_object = BNI(
455
+ dir_path=osp.join(out_dir, cfg.name, "BNI"),
456
+ name=data["name"],
457
+ BNI_dict=BNI_dict,
458
+ cfg=cfg.bni,
459
+ device=device
460
+ )
461
 
462
+ BNI_object.extract_surface(False)
 
 
 
463
 
464
+ in_tensor["body_verts"].append(torch.tensor(smpl_obj_lst[idx].vertices).float())
465
+ in_tensor["body_faces"].append(torch.tensor(smpl_obj_lst[idx].faces).long())
466
 
467
+ # shape completion is required when the overlap is low
468
+ # replace SMPL with the completed mesh as side_mesh
469
 
470
+ if cfg.bni.use_ifnet:
 
 
 
471
 
472
+ side_mesh_path = f"{out_dir}/{cfg.name}/obj/{data['name']}_{idx}_IF.obj"
 
 
 
 
 
 
473
 
474
+ side_mesh = apply_face_mask(side_mesh, ~SMPLX_object.smplx_eyeball_fid_mask)
475
+
476
+ # mesh completion via IF-net
477
+ in_tensor.update(
478
+ dataset.depth_to_voxel({
479
+ "depth_F": BNI_object.F_depth.unsqueeze(0), "depth_B":
480
+ BNI_object.B_depth.unsqueeze(0)
481
+ })
482
  )
483
 
484
+ occupancies = VoxelGrid.from_mesh(side_mesh, cfg.vol_res, loc=[
485
+ 0,
486
+ ] * 3, scale=2.0).data.transpose(2, 1, 0)
487
+ occupancies = np.flip(occupancies, axis=1)
488
+
489
+ in_tensor["body_voxels"] = torch.tensor(occupancies.copy()
490
+ ).float().unsqueeze(0).to(device)
491
+
492
+ with torch.no_grad():
493
+ sdf = ifnet.reconEngine(netG=ifnet.netG, batch=in_tensor)
494
+ verts_IF, faces_IF = ifnet.reconEngine.export_mesh(sdf)
495
 
496
+ if ifnet.clean_mesh_flag:
497
+ verts_IF, faces_IF = clean_mesh(verts_IF, faces_IF)
498
 
499
+ side_mesh = trimesh.Trimesh(verts_IF, faces_IF)
500
+ side_mesh = remesh_laplacian(side_mesh, side_mesh_path)
501
 
502
+ else:
503
+ side_mesh = apply_vertex_mask(
504
+ side_mesh,
505
+ (
506
+ SMPLX_object.front_flame_vertex_mask + SMPLX_object.smplx_mano_vertex_mask +
507
+ SMPLX_object.eyeball_vertex_mask
508
+ ).eq(0).float(),
509
+ )
510
 
511
+ # register side_mesh to the BNI surfaces
512
+ side_mesh = Meshes(
513
+ verts=[torch.tensor(side_mesh.vertices).float()],
514
+ faces=[torch.tensor(side_mesh.faces).long()],
515
+ ).to(device)
516
+ sm = SubdivideMeshes(side_mesh)
517
+ side_mesh = register(BNI_object.F_B_trimesh, sm(side_mesh), device)
518
+
519
+ side_verts = torch.tensor(side_mesh.vertices).float().to(device)
520
+ side_faces = torch.tensor(side_mesh.faces).long().to(device)
521
+
522
+ # Poisson Fusion between SMPLX and BNI
523
+ # 1. keep the faces invisible to front+back cameras
524
+ # 2. keep the front-FLAME+MANO faces
525
+ # 3. remove eyeball faces
526
+
527
+ # export intermediate meshes
528
+ BNI_object.F_B_trimesh.export(f"{out_dir}/{cfg.name}/obj/{data['name']}_{idx}_BNI.obj")
529
+ full_lst = []
530
+
531
+ if "face" in cfg.bni.use_smpl:
532
+
533
+ # only face
534
+ face_mesh = apply_vertex_mask(face_mesh, SMPLX_object.front_flame_vertex_mask)
535
+ face_mesh.vertices = face_mesh.vertices - np.array([0, 0, cfg.bni.thickness])
536
+
537
+ # remove face neighbor triangles
538
+ BNI_object.F_B_trimesh = part_removal(
539
+ BNI_object.F_B_trimesh,
540
+ face_mesh,
541
+ cfg.bni.face_thres,
542
+ device,
543
+ smplx_mesh,
544
+ region="face"
545
+ )
546
+ side_mesh = part_removal(
547
+ side_mesh, face_mesh, cfg.bni.face_thres, device, smplx_mesh, region="face"
548
+ )
549
+ face_mesh.export(f"{out_dir}/{cfg.name}/obj/{data['name']}_{idx}_face.obj")
550
+ full_lst += [face_mesh]
551
 
552
+ if "hand" in cfg.bni.use_smpl and (True in data['hands_visibility'][idx]):
553
 
554
+ hand_mask = torch.zeros(SMPLX_object.smplx_verts.shape[0], )
555
+ if data['hands_visibility'][idx][0]:
556
+ hand_mask.index_fill_(
557
+ 0, torch.tensor(SMPLX_object.smplx_mano_vid_dict["left_hand"]), 1.0
558
+ )
559
+ if data['hands_visibility'][idx][1]:
560
+ hand_mask.index_fill_(
561
+ 0, torch.tensor(SMPLX_object.smplx_mano_vid_dict["right_hand"]), 1.0
562
  )
563
 
564
+ # only hands
565
+ hand_mesh = apply_vertex_mask(hand_mesh, hand_mask)
566
+
567
+ # remove hand neighbor triangles
568
+ BNI_object.F_B_trimesh = part_removal(
569
+ BNI_object.F_B_trimesh,
570
+ hand_mesh,
571
+ cfg.bni.hand_thres,
572
+ device,
573
+ smplx_mesh,
574
+ region="hand"
575
+ )
576
+ side_mesh = part_removal(
577
+ side_mesh, hand_mesh, cfg.bni.hand_thres, device, smplx_mesh, region="hand"
578
+ )
579
+ hand_mesh.export(f"{out_dir}/{cfg.name}/obj/{data['name']}_{idx}_hand.obj")
580
+ full_lst += [hand_mesh]
581
 
582
+ full_lst += [BNI_object.F_B_trimesh]
 
583
 
584
+ # initial side_mesh could be SMPLX or IF-net
585
+ side_mesh = part_removal(
586
+ side_mesh, sum(full_lst), 2e-2, device, smplx_mesh, region="", clean=False
587
+ )
588
 
589
+ full_lst += [side_mesh]
 
590
 
591
+ # export intermediate meshes
592
+ BNI_object.F_B_trimesh.export(f"{out_dir}/{cfg.name}/obj/{data['name']}_{idx}_BNI.obj")
593
+ side_mesh.export(f"{out_dir}/{cfg.name}/obj/{data['name']}_{idx}_side.obj")
594
 
595
+ final_mesh = poisson(
596
+ sum(full_lst),
597
+ final_path,
598
+ cfg.bni.poisson_depth,
599
+ )
600
+ print(
601
+ colored(f"\n Poisson completion to {Format.start} {final_path} {Format.end}", "yellow")
602
+ )
603
 
604
+ dataset.render.load_meshes(final_mesh.vertices, final_mesh.faces)
605
+ rotate_recon_lst = dataset.render.get_image(cam_type="four")
606
+ per_loop_lst.extend([in_tensor['image'][idx:idx + 1]] + rotate_recon_lst)
 
607
 
608
+ if cfg.bni.texture_src == 'image':
609
 
610
+ # coloring the final mesh (front: RGB pixels, back: normal colors)
611
+ final_colors = query_color(
612
+ torch.tensor(final_mesh.vertices).float(),
613
+ torch.tensor(final_mesh.faces).long(),
614
+ in_tensor["image"][idx:idx + 1],
615
+ device=device,
616
  )
617
+ final_mesh.visual.vertex_colors = final_colors
618
+ final_mesh.export(final_path)
619
+ final_mesh.export(final_path.replace(".obj", ".glb"))
620
 
621
+ elif cfg.bni.texture_src == 'SD':
622
 
623
+ # !TODO: add texture from Stable Diffusion
624
+ pass
 
 
 
625
 
626
+ if len(per_loop_lst) > 0:
 
627
 
628
+ per_data_lst.append(get_optim_grid_image(per_loop_lst, None, nrow=5, type="cloth"))
629
+ per_data_lst[-1].save(osp.join(out_dir, cfg.name, f"png/{data['name']}_cloth.png"))
630
 
631
+ # for video rendering
632
+ in_tensor["BNI_verts"].append(torch.tensor(final_mesh.vertices).float())
633
+ in_tensor["BNI_faces"].append(torch.tensor(final_mesh.faces).long())
634
 
635
+ os.makedirs(osp.join(out_dir, cfg.name, "vid"), exist_ok=True)
636
+ in_tensor["uncrop_param"] = data["uncrop_param"]
637
+ in_tensor["img_raw"] = data["img_raw"]
638
+ torch.save(in_tensor, osp.join(out_dir, cfg.name, f"vid/{data['name']}_in_tensor.pt"))
639
 
640
+ smpl_glb_path = smpl_obj_path.replace(".obj", ".glb")
641
+ # smpl_npy_path = smpl_obj_path.replace(".obj", ".npy")
642
+ refine_obj_path = final_path
643
+ refine_glb_path = final_path.replace(".obj", ".glb")
644
+ overlap_path = img_overlap_path
645
+ vis_tensor_path = osp.join(out_dir, cfg.name, f"vid/{data['name']}_in_tensor.pt")
646
 
647
+ # clean all the variables
648
+ for element in dir():
649
+ if 'path' not in element:
650
+ del locals()[element]
651
 
652
+ import gc
653
+ gc.collect()
654
+ torch.cuda.empty_cache()
655
+
656
+ return [
657
+ smpl_glb_path, refine_glb_path, refine_obj_path, overlap_path, vis_tensor_path
658
+ ]
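
Since this commit turns the inference script into a Gradio backend, the five paths returned above map directly onto Gradio output components. Below is a minimal, hedged sketch of how they could be exposed in a demo; the `run_econ` wrapper name and the component choices are illustrative assumptions, not part of this commit.

```python
import gradio as gr

def generate_model(image_path: str):
    # run_econ is a hypothetical wrapper around the pipeline above; it is
    # assumed to return the same five paths as the return list in this diff:
    # [smpl_glb, refine_glb, refine_obj, overlap_png, vis_tensor]
    return run_econ(image_path)

demo = gr.Interface(
    fn=generate_model,
    inputs=gr.Image(type="filepath", label="Input image"),
    outputs=[
        gr.Model3D(label="SMPL-X body (.glb)"),
        gr.Model3D(label="Refined clothed mesh (.glb)"),
        gr.File(label="Refined mesh (.obj)"),
        gr.Image(label="Normal overlap"),
        gr.File(label="in_tensor for video rendering (.pt)"),
    ],
)

if __name__ == "__main__":
    demo.launch()
```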
apps/multi_render.py DELETED
@@ -1,25 +0,0 @@
1
- import argparse
2
-
3
- import torch
4
-
5
- from lib.common.render import Render
6
-
7
- root = "./results/econ/vid"
8
-
9
- # loading cfg file
10
- parser = argparse.ArgumentParser()
11
- parser.add_argument("-n", "--name", type=str, default="")
12
- parser.add_argument("-g", "--gpu", type=int, default=0)
13
- args = parser.parse_args()
14
-
15
- in_tensor = torch.load(f"{root}/{args.name}_in_tensor.pt")
16
-
17
- render = Render(size=512, device=torch.device(f"cuda:{args.gpu}"))
18
-
19
- # visualize the final results in self-rotation mode
20
- verts_lst = in_tensor["body_verts"] + in_tensor["BNI_verts"]
21
- faces_lst = in_tensor["body_faces"] + in_tensor["BNI_faces"]
22
-
23
- # self-rotated video
24
- render.load_meshes(verts_lst, faces_lst)
25
- render.get_rendered_video_multi(in_tensor, f"{root}/{args.name}_cloth.mp4")
 
docker-compose.yaml DELETED
@@ -1,19 +0,0 @@
1
- # build Image from Docker Hub
2
- version: "2.4"
3
- services:
4
- econ:
5
- container_name: econ-container
6
- image: teddy12155555/econ:v1
7
- runtime: nvidia
8
- environment:
9
- - NVIDIA_VISIBLE_DEVICES=all
10
- - DISPLAY=${DISPLAY}
11
- stdin_open: true
12
- tty: true
13
- volumes:
14
- - .:/root/code
15
- - /tmp/.X11-unix:/tmp/.X11-unix
16
- ports:
17
- - "8000:8000"
18
- privileged: true
19
- command: "bash"
 
docs/installation-docker.md DELETED
@@ -1,80 +0,0 @@
1
- ## Getting started
2
-
3
- Start by cloning the repo:
4
-
5
- ```bash
6
- git clone --depth 1 git@github.com:YuliangXiu/ECON.git
7
- cd ECON
8
- ```
9
- ## Environment
10
- - **GPU Memory > 12GB**
11
-
12
- start with [docker compose](https://docs.docker.com/compose/)
13
- ```bash
14
- # you can change your container name by passing --name "parameter"
15
- docker compose run [--name myecon] econ
16
- ```
17
-
18
- ## Docker container's shell
19
- ```bash
20
- # activate the pre-build env
21
- cd code
22
- conda activate econ
23
-
24
- # install libmesh & libvoxelize
25
- cd lib/common/libmesh
26
- python setup.py build_ext --inplace
27
- cd ../libvoxelize
28
- python setup.py build_ext --inplace
29
- ```
30
-
31
- ## Register at [ICON's website](https://icon.is.tue.mpg.de/)
32
-
33
- ![Register](../assets/register.png)
34
- Required:
35
-
36
- - [SMPL](http://smpl.is.tue.mpg.de/): SMPL Model (Male, Female)
37
- - [SMPL-X](http://smpl-x.is.tue.mpg.de/): SMPL-X Model, used for training
38
- - [SMPLIFY](http://smplify.is.tue.mpg.de/): SMPL Model (Neutral)
39
- - [PIXIE](https://icon.is.tue.mpg.de/user.php): PIXIE SMPL-X estimator
40
-
41
- :warning: Click **Register now** on all dependencies, then you can download them all with **ONE** account.
42
-
43
- ## Downloading required models and extra data
44
-
45
- ```bash
46
- cd ~/code
47
- bash fetch_data.sh # requires username and password
48
- ```
49
- ## :whale2: **todo**
50
- - **Image Environment Infos**
51
- - Ubuntu 18
52
- - CUDA = 11.3
53
- - Python = 3.8
54
- - [X] pre-built image with docker compose
55
- - [ ] docker run command, Dockerfile
56
- - [ ] verify on WSL (Windows)
57
-
58
- ## Citation
59
-
60
- :+1: Please consider citing these awesome HPS approaches: PyMAF-X, PIXIE
61
-
62
-
63
- ```
64
- @article{pymafx2022,
65
- title={PyMAF-X: Towards Well-aligned Full-body Model Regression from Monocular Images},
66
- author={Zhang, Hongwen and Tian, Yating and Zhang, Yuxiang and Li, Mengcheng and An, Liang and Sun, Zhenan and Liu, Yebin},
67
- journal={arXiv preprint arXiv:2207.06400},
68
- year={2022}
69
- }
70
-
71
-
72
- @inproceedings{PIXIE:2021,
73
- title={Collaborative Regression of Expressive Bodies using Moderation},
74
- author={Yao Feng and Vasileios Choutas and Timo Bolkart and Dimitrios Tzionas and Michael J. Black},
75
- booktitle={International Conference on 3D Vision (3DV)},
76
- year={2021}
77
- }
78
-
79
-
80
- ```
 
docs/installation-ubuntu.md DELETED
@@ -1,80 +0,0 @@
1
- ## Getting started
2
-
3
- Start by cloning the repo:
4
-
5
- ```bash
6
- git clone --depth 1 git@github.com:YuliangXiu/ECON.git
7
- cd ECON
8
- ```
9
-
10
- ## Environment
11
-
12
- - Ubuntu 20 / 18, (Windows as well, see [issue#7](https://github.com/YuliangXiu/ECON/issues/7))
13
- - **CUDA=11.6, GPU Memory > 12GB**
14
- - Python = 3.8
15
- - PyTorch >= 1.13.0 (official [Get Started](https://pytorch.org/get-started/locally/))
16
- - Cupy >= 11.3.0 (official [Installation](https://docs.cupy.dev/en/stable/install.html#installing-cupy-from-pypi))
17
- - PyTorch3D = 0.7.1 (official [INSTALL.md](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md), recommend [install-from-local-clone](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md#2-install-from-a-local-clone))
18
-
19
- ```bash
20
-
21
- sudo apt-get install libeigen3-dev ffmpeg
22
-
23
- # install required packages
24
- cd ECON
25
- conda env create -f environment.yaml
26
- conda activate econ
27
- pip install -r requirements.txt
28
-
29
- # the installation(incl. compilation) of PyTorch3D will take ~20min
30
- pip install git+https://github.com/facebookresearch/pytorch3d.git@v0.7.1
31
-
32
- # install libmesh & libvoxelize
33
- cd lib/common/libmesh
34
- python setup.py build_ext --inplace
35
- cd ../libvoxelize
36
- python setup.py build_ext --inplace
37
- ```
38
-
39
- ## Register at [ICON's website](https://icon.is.tue.mpg.de/)
40
-
41
- ![Register](../assets/register.png)
42
- Required:
43
-
44
- - [SMPL](http://smpl.is.tue.mpg.de/): SMPL Model (Male, Female)
45
- - [SMPL-X](http://smpl-x.is.tue.mpg.de/): SMPL-X Model, used for training
46
- - [SMPLIFY](http://smplify.is.tue.mpg.de/): SMPL Model (Neutral)
47
- - [PIXIE](https://icon.is.tue.mpg.de/user.php): PIXIE SMPL-X estimator
48
-
49
- :warning: Click **Register now** on all dependencies, then you can download them all with **ONE** account.
50
-
51
- ## Downloading required models and extra data
52
-
53
- ```bash
54
- cd ECON
55
- bash fetch_data.sh # requires username and password
56
- ```
57
-
58
- ## Citation
59
-
60
- :+1: Please consider citing these awesome HPS approaches: PyMAF-X, PIXIE
61
-
62
-
63
- ```
64
- @article{pymafx2022,
65
- title={PyMAF-X: Towards Well-aligned Full-body Model Regression from Monocular Images},
66
- author={Zhang, Hongwen and Tian, Yating and Zhang, Yuxiang and Li, Mengcheng and An, Liang and Sun, Zhenan and Liu, Yebin},
67
- journal={arXiv preprint arXiv:2207.06400},
68
- year={2022}
69
- }
70
-
71
-
72
- @inproceedings{PIXIE:2021,
73
- title={Collaborative Regression of Expressive Bodies using Moderation},
74
- author={Yao Feng and Vasileios Choutas and Timo Bolkart and Dimitrios Tzionas and Michael J. Black},
75
- booktitle={International Conference on 3D Vision (3DV)},
76
- year={2021}
77
- }
78
-
79
-
80
- ```
 
docs/installation-windows.md DELETED
@@ -1,100 +0,0 @@
1
- # Windows installation tutorial
2
-
3
- Another [issue#16](https://github.com/YuliangXiu/ECON/issues/16) shows the whole process to deploy ECON on *Windows*
4
-
5
- ## Dependencies and Installation
6
-
7
- - Use [Anaconda](https://www.anaconda.com/products/distribution)
8
- - NVIDIA GPU + [CUDA](https://developer.nvidia.com/cuda-downloads)
9
- - [Wget for Windows](https://eternallybored.org/misc/wget/1.21.3/64/wget.exe)
10
- - Create a new folder on your C drive, name it "wget", and move the downloaded "wget.exe" into it.
11
- - Add the path to your wget folder to your system environment variables at `Environment Variables > System Variables Path > Edit environment variable`
12
-
13
- ![image](https://user-images.githubusercontent.com/34035011/210986038-39dbb7a1-12ef-4be9-9af4-5f658c6beb65.png)
14
-
15
- - Install [Git for Windows 64-bit](https://git-scm.com/download/win)
16
- - [Visual Studio Community 2022](https://visualstudio.microsoft.com/) (Make sure to check all the boxes as shown in the image below)
17
-
18
- ![image](https://user-images.githubusercontent.com/34035011/210983023-4e5a0024-68f0-4adb-8089-6ff598aec220.PNG)
19
-
20
-
21
-
22
- ## Getting started
23
-
24
- Start by cloning the repo:
25
-
26
- ```bash
27
- git clone https://github.com/yuliangxiu/ECON.git
28
- cd ECON
29
- ```
30
-
31
- ## Environment
32
-
33
- - Windows 10 / 11
34
- - **CUDA=11.3**
35
- - Python = 3.8
36
- - PyTorch >= 1.12.1 (official [Get Started](https://pytorch.org/get-started/locally/))
37
- - Cupy >= 11.3.0 (offcial [Installation](https://docs.cupy.dev/en/stable/install.html#installing-cupy-from-pypi))
38
- - PyTorch3D = 0.7.1 (official [INSTALL.md](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md), recommend [install-from-local-clone](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md#2-install-from-a-local-clone))
39
-
40
- ```bash
41
- # install required packages
42
- cd ECON
43
- conda env create -f environment-windows.yaml
44
- conda activate econ
45
-
46
- # install pytorch and cupy
47
- pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
48
- pip install -r requirements.txt
49
- pip install cupy-cuda11x
50
- pip install git+https://github.com/facebookresearch/pytorch3d.git@v0.7.1
51
-
52
- # install libmesh & libvoxelize
53
- cd lib/common/libmesh
54
- python setup.py build_ext --inplace
55
- cd ../libvoxelize
56
- python setup.py build_ext --inplace
57
- ```
58
-
59
- ## Register at [ICON's website](https://icon.is.tue.mpg.de/)
60
-
61
- ![Register](../assets/register.png)
62
- Required:
63
-
64
- - [SMPL](http://smpl.is.tue.mpg.de/): SMPL Model (Male, Female)
65
- - [SMPL-X](http://smpl-x.is.tue.mpg.de/): SMPL-X Model, used for training
66
- - [SMPLIFY](http://smplify.is.tue.mpg.de/): SMPL Model (Neutral)
67
- - [PIXIE](https://icon.is.tue.mpg.de/user.php): PIXIE SMPL-X estimator
68
-
69
- :warning: Click **Register now** on all dependencies, then you can download them all with **ONE** account.
70
-
71
- ## Downloading required models and extra data (make sure to install git and wget for windows for this to work)
72
-
73
- ```bash
74
- cd ECON
75
- bash fetch_data.sh # requires username and password
76
- ```
77
-
78
- ## Citation
79
-
80
- :+1: Please consider citing these awesome HPS approaches: PyMAF-X, PIXIE
81
-
82
-
83
- ```
84
- @article{pymafx2022,
85
- title={PyMAF-X: Towards Well-aligned Full-body Model Regression from Monocular Images},
86
- author={Zhang, Hongwen and Tian, Yating and Zhang, Yuxiang and Li, Mengcheng and An, Liang and Sun, Zhenan and Liu, Yebin},
87
- journal={arXiv preprint arXiv:2207.06400},
88
- year={2022}
89
- }
90
-
91
-
92
- @inproceedings{PIXIE:2021,
93
- title={Collaborative Regression of Expressive Bodies using Moderation},
94
- author={Yao Feng and Vasileios Choutas and Timo Bolkart and Dimitrios Tzionas and Michael J. Black},
95
- booktitle={International Conference on 3D Vision (3DV)},
96
- year={2021}
97
- }
98
-
99
-
100
- ```
 
docs/testing.md DELETED
@@ -1,71 +0,0 @@
1
- # Evaluation
2
-
3
- ## Testing Data
4
-
5
- ![dataset](../assets/dataset.png)
6
-
7
- - OOD pose (CAPE, [download](https://github.com/YuliangXiu/ICON/blob/master/docs/evaluation.md#cape-testset)): [`pose.txt`](../pose.txt)
8
- - OOD outfits (RenderPeople, [link](https://renderpeople.com/)): [`loose.txt`](../loose.txt)
9
-
10
- ## Run the evaluation
11
-
12
- ```bash
13
- # Benchmark of ECON_{IF}, which uses IF-Net+ for completion
14
- export CUDA_VISIBLE_DEVICES=0; python -m apps.benchmark -ifnet
15
-
16
- # Benchmark of ECON_{EX}, which uses registered SMPL for completion
17
- export CUDA_VISIBLE_DEVICES=1; python -m apps.benchmark
18
-
19
- ```
20
-
21
- ## Benchmark
22
-
23
- | Method | $\text{ECON}_\text{IF}$ | $\text{ECON}_\text{EX}$ |
24
- | :---------: | :-----------------------: | :---------------------: |
25
- | | OOD poses (CAPE) | |
26
- | Chamfer(cm) | 0.996 | **0.926** |
27
- | P2S(cm) | 0.967 | **0.917** |
28
- | Normal(L2) | 0.0413 | **0.0367** |
29
- | | OOD outfits (RenderPeople) | |
30
- | Chamfer(cm) | 1.401 | **1.342** |
31
- | P2S(cm) | **1.422** | 1.458 |
32
- | Normal(L2) | 0.0516 | **0.0478** |
33
-
34
- **\*OOD: Out-of-Distribution**
35
-
36
- ## Citation
37
-
38
- :+1: Please cite these CAPE-related papers
39
-
40
- ```
41
-
42
- @inproceedings{xiu2022icon,
43
- title = {{ICON}: {I}mplicit {C}lothed humans {O}btained from {N}ormals},
44
- author = {Xiu, Yuliang and Yang, Jinlong and Tzionas, Dimitrios and Black, Michael J.},
45
- booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
46
- month = {June},
47
- year = {2022},
48
- pages = {13296-13306}
49
- }
50
-
51
- @inproceedings{CAPE:CVPR:20,
52
- title = {{Learning to Dress 3D People in Generative Clothing}},
53
- author = {Ma, Qianli and Yang, Jinlong and Ranjan, Anurag and Pujades, Sergi and Pons-Moll, Gerard and Tang, Siyu and Black, Michael J.},
54
- booktitle = {Computer Vision and Pattern Recognition (CVPR)},
55
- month = June,
56
- year = {2020},
57
- month_numeric = {6}
58
- }
59
-
60
- @article{Pons-Moll:Siggraph2017,
61
- title = {ClothCap: Seamless 4D Clothing Capture and Retargeting},
62
- author = {Pons-Moll, Gerard and Pujades, Sergi and Hu, Sonny and Black, Michael},
63
- journal = {ACM Transactions on Graphics, (Proc. SIGGRAPH)},
64
- volume = {36},
65
- number = {4},
66
- year = {2017},
67
- note = {Two first authors contributed equally},
68
- crossref = {},
69
- url = {http://dx.doi.org/10.1145/3072959.3073711}
70
- }
71
- ```
 
docs/tricks.md DELETED
@@ -1,29 +0,0 @@
1
- ## Technical tricks to improve or accelerate ECON
2
-
3
- ### If the reconstructed geometry is not satisfying, play with the adjustable parameters in _config/econ.yaml_
4
-
5
- - `use_smpl: ["hand"]`
6
- - [ ]: don't use either hands or face parts from SMPL-X
7
- - ["hand"]: only use the **visible** hands from SMPL-X
8
- - ["hand", "face"]: use both **visible** hands and face from SMPL-X
9
- - `thickness: 2cm`
10
- - could be increased accordingly in case final reconstruction **xx_full.obj** looks flat
11
- - `k: 4`
12
- - could be reduced accordingly in case the surface of **xx_full.obj** has discontinuous artifacts
13
- - `hps_type: PIXIE`
14
- - "pixie": more accurate for face and hands
15
- - "pymafx": more robust for challenging poses
16
- - `texture_src: image`
17
- - "image": directly map the aligned pixels onto the final mesh
18
- - "SD": use Stable Diffusion to generate full texture (TODO)
19
-
20
- ### To accelerate the inference, you could
21
-
22
- - `use_ifnet: False`
23
- - True: use IF-Nets+ for mesh completion ( $\text{ECON}_\text{IF}$ - Better quality, **~2min / img**)
24
- - False: use SMPL-X for mesh completion ( $\text{ECON}_\text{EX}$ - Faster speed, **~1.8min / img**)
25
-
26
- ```bash
27
- # For single-person image-based reconstruction (w/o all visualization steps, 1.5min)
28
- python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results -novis
29
- ```
 
environment-windows.yaml DELETED
@@ -1,18 +0,0 @@
1
- name: econ
2
- channels:
3
- - nvidia
4
- - pytorch
5
- - conda-forge
6
- - fvcore
7
- - iopath
8
- - bottler
9
- - defaults
10
- dependencies:
11
- - python=3.8
12
- - pytorch-cuda=11.3
13
- - fvcore
14
- - iopath
15
- - cupy
16
- - cython
17
- - pip
18
-
 
environment.yaml DELETED
@@ -1,21 +0,0 @@
1
- name: econ
2
- channels:
3
- - pytorch
4
- - nvidia
5
- - conda-forge
6
- - fvcore
7
- - iopath
8
- - bottler
9
- - defaults
10
- dependencies:
11
- - python=3.8
12
- - pytorch-cuda=11.6
13
- - pytorch=1.13.0
14
- - nvidiacub
15
- - torchvision
16
- - fvcore
17
- - iopath
18
- - pyembree
19
- - cupy
20
- - cython
21
- - pip
 
fetch_data.sh DELETED
@@ -1,60 +0,0 @@
1
- #!/bin/bash
2
- urle () { [[ "${1}" ]] || return 1; local LANG=C i x; for (( i = 0; i < ${#1}; i++ )); do x="${1:i:1}"; [[ "${x}" == [a-zA-Z0-9.~-] ]] && echo -n "${x}" || printf '%%%02X' "'${x}"; done; echo; }
3
-
4
- mkdir -p data/smpl_related/models
5
-
6
- # username and password input
7
- echo -e "\nYou need to register at https://icon.is.tue.mpg.de/, according to Installation Instruction."
8
- read -p "Username (ICON):" username
9
- read -p "Password (ICON):" password
10
- username=$(urle $username)
11
- password=$(urle $password)
12
-
13
- # SMPL (Male, Female)
14
- echo -e "\nDownloading SMPL..."
15
- wget --post-data "username=$username&password=$password" 'https://download.is.tue.mpg.de/download.php?domain=smpl&sfile=SMPL_python_v.1.0.0.zip&resume=1' -O './data/smpl_related/models/SMPL_python_v.1.0.0.zip' --no-check-certificate --continue
16
- unzip data/smpl_related/models/SMPL_python_v.1.0.0.zip -d data/smpl_related/models
17
- mv data/smpl_related/models/smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl data/smpl_related/models/smpl/SMPL_FEMALE.pkl
18
- mv data/smpl_related/models/smpl/models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl data/smpl_related/models/smpl/SMPL_MALE.pkl
19
- cd data/smpl_related/models
20
- rm -rf *.zip __MACOSX smpl/models smpl/smpl_webuser
21
- cd ../../..
22
-
23
- # SMPL (Neutral, from SMPLIFY)
24
- echo -e "\nDownloading SMPLify..."
25
- wget --post-data "username=$username&password=$password" 'https://download.is.tue.mpg.de/download.php?domain=smplify&sfile=mpips_smplify_public_v2.zip&resume=1' -O './data/smpl_related/models/mpips_smplify_public_v2.zip' --no-check-certificate --continue
26
- unzip data/smpl_related/models/mpips_smplify_public_v2.zip -d data/smpl_related/models
27
- mv data/smpl_related/models/smplify_public/code/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl data/smpl_related/models/smpl/SMPL_NEUTRAL.pkl
28
- cd data/smpl_related/models
29
- rm -rf *.zip smplify_public
30
- cd ../../..
31
-
32
- # SMPL-X
33
- echo -e "\nDownloading SMPL-X..."
34
- wget --post-data "username=$username&password=$password" 'https://download.is.tue.mpg.de/download.php?domain=smplx&sfile=models_smplx_v1_1.zip&resume=1' -O './data/smpl_related/models/models_smplx_v1_1.zip' --no-check-certificate --continue
35
- unzip data/smpl_related/models/models_smplx_v1_1.zip -d data/smpl_related
36
- rm -f data/smpl_related/models/models_smplx_v1_1.zip
37
-
38
- # ECON
39
- echo -e "\nDownloading ECON..."
40
- wget --post-data "username=$username&password=$password" 'https://download.is.tue.mpg.de/download.php?domain=icon&sfile=econ_data.zip&resume=1' -O './data/econ_data.zip' --no-check-certificate --continue
41
- cd data && unzip econ_data.zip
42
- mv smpl_data smpl_related/
43
- rm -f econ_data.zip
44
- cd ..
45
-
46
- mkdir -p data/HPS
47
-
48
- # PIXIE
49
- echo -e "\nDownloading PIXIE..."
50
- wget --post-data "username=$username&password=$password" 'https://download.is.tue.mpg.de/download.php?domain=icon&sfile=HPS/pixie_data.zip&resume=1' -O './data/HPS/pixie_data.zip' --no-check-certificate --continue
51
- cd data/HPS && unzip pixie_data.zip
52
- rm -f pixie_data.zip
53
- cd ../..
54
-
55
- # PyMAF-X
56
- echo -e "\nDownloading PyMAF-X..."
57
- wget --post-data "username=$username&password=$password" 'https://download.is.tue.mpg.de/download.php?domain=icon&sfile=HPS/pymafx_data.zip&resume=1' -O './data/HPS/pymafx_data.zip' --no-check-certificate --continue
58
- cd data/HPS && unzip pymafx_data.zip
59
- rm -f pymafx_data.zip
60
- cd ../..
 
lib/dataset/TestDataset.py CHANGED
@@ -49,8 +49,7 @@ ImageFile.LOAD_TRUNCATED_IMAGES = True
49
  class TestDataset:
50
  def __init__(self, cfg, device):
51
 
52
- self.image_dir = cfg["image_dir"]
53
- self.seg_dir = cfg["seg_dir"]
54
  self.use_seg = cfg["use_seg"]
55
  self.hps_type = cfg["hps_type"]
56
  self.smpl_type = "smplx"
@@ -60,11 +59,7 @@ class TestDataset:
60
 
61
  self.device = device
62
 
63
- keep_lst = sorted(glob.glob(f"{self.image_dir}/*"))
64
- img_fmts = ["jpg", "png", "jpeg", "JPG", "bmp", "exr"]
65
-
66
- self.subject_list = sorted([item for item in keep_lst if item.split(".")[-1] in img_fmts],
67
- reverse=False)
68
 
69
  # smpl related
70
  self.smpl_data = SMPLX()
 
49
  class TestDataset:
50
  def __init__(self, cfg, device):
51
 
52
+ self.image_path = cfg["image_path"]
 
53
  self.use_seg = cfg["use_seg"]
54
  self.hps_type = cfg["hps_type"]
55
  self.smpl_type = "smplx"
 
59
 
60
  self.device = device
61
 
62
+ self.subject_list = [self.image_path]
 
 
 
 
63
 
64
  # smpl related
65
  self.smpl_data = SMPLX()
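
The change above drops the directory globbing (`image_dir`, `seg_dir`, image-format filtering) in favour of a single `image_path` entry, so each `TestDataset` instance now wraps exactly one input image. A minimal sketch of a constructor call under the new interface follows; only `image_path`, `use_seg` and `hps_type` are visible in this diff, and any further keys the class reads are project-specific and omitted here.

```python
import torch
from lib.dataset.TestDataset import TestDataset

# Hedged example config: only the keys shown in this diff are grounded.
cfg = {
    "image_path": "./examples/sample.png",   # single image instead of a directory
    "use_seg": True,
    "hps_type": "pixie",
}

dataset = TestDataset(cfg, device=torch.device("cuda:0"))
print(dataset.subject_list)   # -> ["./examples/sample.png"]
```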
lib/dataset/mesh_util.py CHANGED
@@ -30,6 +30,7 @@ from pytorch3d.loss import mesh_laplacian_smoothing, mesh_normal_consistency
30
  from pytorch3d.renderer.mesh import rasterize_meshes
31
  from pytorch3d.structures import Meshes
32
  from scipy.spatial import cKDTree
 
33
 
34
  import lib.smplx as smplx
35
  from lib.common.render_utils import Pytorch3dRasterizer, face_vertices
@@ -43,28 +44,70 @@ class Format:
43
  class SMPLX:
44
  def __init__(self):
45
 
46
- self.current_dir = osp.join(osp.dirname(__file__), "../../data/smpl_related")
47
-
48
- self.smpl_verts_path = osp.join(self.current_dir, "smpl_data/smpl_verts.npy")
49
- self.smpl_faces_path = osp.join(self.current_dir, "smpl_data/smpl_faces.npy")
50
- self.smplx_verts_path = osp.join(self.current_dir, "smpl_data/smplx_verts.npy")
51
- self.smplx_faces_path = osp.join(self.current_dir, "smpl_data/smplx_faces.npy")
52
- self.cmap_vert_path = osp.join(self.current_dir, "smpl_data/smplx_cmap.npy")
53
 
54
- self.smplx_to_smplx_path = osp.join(self.current_dir, "smpl_data/smplx_to_smpl.pkl")
 
 
 
 
55
 
56
- self.smplx_eyeball_fid_path = osp.join(self.current_dir, "smpl_data/eyeball_fid.npy")
57
- self.smplx_fill_mouth_fid_path = osp.join(self.current_dir, "smpl_data/fill_mouth_fid.npy")
58
- self.smplx_flame_vid_path = osp.join(
59
- self.current_dir, "smpl_data/FLAME_SMPLX_vertex_ids.npy"
60
  )
61
- self.smplx_mano_vid_path = osp.join(self.current_dir, "smpl_data/MANO_SMPLX_vertex_ids.pkl")
62
  self.smpl_vert_seg_path = osp.join(
63
  osp.dirname(__file__), "../../lib/common/smpl_vert_segmentation.json"
64
  )
65
- self.front_flame_path = osp.join(self.current_dir, "smpl_data/FLAME_face_mask_ids.npy")
66
- self.smplx_vertex_lmkid_path = osp.join(
67
- self.current_dir, "smpl_data/smplx_vertex_lmkid.npy"
 
 
 
 
 
 
68
  )
69
 
70
  self.smplx_faces = np.load(self.smplx_faces_path)
@@ -106,8 +149,6 @@ class SMPLX:
106
 
107
  self.smplx_to_smpl = cPickle.load(open(self.smplx_to_smplx_path, "rb"))
108
 
109
- self.model_dir = osp.join(self.current_dir, "models")
110
-
111
  self.ghum_smpl_pairs = torch.tensor([(0, 24), (2, 26), (5, 25), (7, 28), (8, 27), (11, 16),
112
  (12, 17), (13, 18), (14, 19), (15, 20), (16, 21),
113
  (17, 39), (18, 44), (19, 36), (20, 41), (21, 35),
@@ -151,7 +192,7 @@ class SMPLX:
151
  model_init_params = dict(
152
  gender="male",
153
  model_type="smplx",
154
- model_path=SMPLX().model_dir,
155
  create_global_orient=False,
156
  create_body_pose=False,
157
  create_betas=False,
 
30
  from pytorch3d.renderer.mesh import rasterize_meshes
31
  from pytorch3d.structures import Meshes
32
  from scipy.spatial import cKDTree
33
+ from huggingface_hub import hf_hub_download
34
 
35
  import lib.smplx as smplx
36
  from lib.common.render_utils import Pytorch3dRasterizer, face_vertices
 
44
  class SMPLX:
45
  def __init__(self):
46
 
47
+ self.smpl_verts_path = hf_hub_download(
48
+ repo_id="Yuliang/SMPLX",
49
+ use_auth_token=os.environ["ICON"],
50
+ filename="smpl_data/smpl_verts.npy"
51
+ )
52
+ self.smpl_faces_path = hf_hub_download(
53
+ repo_id="Yuliang/SMPLX",
54
+ use_auth_token=os.environ["ICON"],
55
+ filename="smpl_data/smpl_faces.npy"
56
+ )
57
+ self.smplx_verts_path = hf_hub_download(
58
+ repo_id="Yuliang/SMPLX",
59
+ use_auth_token=os.environ["ICON"],
60
+ filename="smpl_data/smplx_verts.npy"
61
+ )
62
+ self.smplx_faces_path = hf_hub_download(
63
+ repo_id="Yuliang/SMPLX",
64
+ use_auth_token=os.environ["ICON"],
65
+ filename="smpl_data/smplx_faces.npy"
66
+ )
67
+ self.cmap_vert_path = hf_hub_download(
68
+ repo_id="Yuliang/SMPLX",
69
+ use_auth_token=os.environ["ICON"],
70
+ filename="smpl_data/smplx_cmap.npy"
71
+ )
72
 
73
+ self.smplx_to_smplx_path = hf_hub_download(
74
+ repo_id="Yuliang/SMPLX",
75
+ use_auth_token=os.environ["ICON"],
76
+ filename="smpl_data/smplx_to_smpl.pkl"
77
+ )
78
 
79
+ self.smplx_eyeball_fid_path = hf_hub_download(
80
+ repo_id="Yuliang/SMPLX",
81
+ use_auth_token=os.environ["ICON"],
82
+ filename="smpl_data/eyeball_fid.npy"
83
+ )
84
+ self.smplx_fill_mouth_fid_path = hf_hub_download(
85
+ repo_id="Yuliang/SMPLX",
86
+ use_auth_token=os.environ["ICON"],
87
+ filename="smpl_data/fill_mouth_fid.npy"
88
+ )
89
+ self.smplx_flame_vid_path = hf_hub_download(
90
+ repo_id="Yuliang/SMPLX",
91
+ use_auth_token=os.environ["ICON"],
92
+ filename="smpl_data/FLAME_SMPLX_vertex_ids.npy"
93
+ )
94
+ self.smplx_mano_vid_path = hf_hub_download(
95
+ repo_id="Yuliang/SMPLX",
96
+ use_auth_token=os.environ["ICON"],
97
+ filename="smpl_data/MANO_SMPLX_vertex_ids.pkl"
98
  )
 
99
  self.smpl_vert_seg_path = osp.join(
100
  osp.dirname(__file__), "../../lib/common/smpl_vert_segmentation.json"
101
  )
102
+ self.front_flame_path = hf_hub_download(
103
+ repo_id="Yuliang/SMPLX",
104
+ use_auth_token=os.environ["ICON"],
105
+ filename="smpl_data/FLAME_face_mask_ids.npy"
106
+ )
107
+ self.smplx_vertex_lmkid_path = hf_hub_download(
108
+ repo_id="Yuliang/SMPLX",
109
+ use_auth_token=os.environ["ICON"],
110
+ filename="smpl_data/smplx_vertex_lmkid.npy"
111
  )
112
 
113
  self.smplx_faces = np.load(self.smplx_faces_path)
 
149
 
150
  self.smplx_to_smpl = cPickle.load(open(self.smplx_to_smplx_path, "rb"))
151
 
 
 
152
  self.ghum_smpl_pairs = torch.tensor([(0, 24), (2, 26), (5, 25), (7, 28), (8, 27), (11, 16),
153
  (12, 17), (13, 18), (14, 19), (15, 20), (16, 21),
154
  (17, 39), (18, 44), (19, 36), (20, 41), (21, 35),
 
192
  model_init_params = dict(
193
  gender="male",
194
  model_type="smplx",
195
+ model_path="Yuliang/SMPLX",
196
  create_global_orient=False,
197
  create_body_pose=False,
198
  create_betas=False,
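
All of the hard-coded paths under `data/smpl_related` are replaced above by `hf_hub_download` calls against the gated `Yuliang/SMPLX` repository, authenticated with a Hugging Face token read from the `ICON` environment variable. A standalone sketch of that download pattern (the token value is a placeholder you must supply yourself):

```python
import os
from huggingface_hub import hf_hub_download

# "ICON" is expected to hold a Hugging Face access token that can read
# the gated Yuliang/SMPLX repo; "hf_xxx" below is only a placeholder.
os.environ.setdefault("ICON", "hf_xxx")

smpl_verts_path = hf_hub_download(
    repo_id="Yuliang/SMPLX",
    filename="smpl_data/smpl_verts.npy",
    use_auth_token=os.environ["ICON"],
)
print(smpl_verts_path)  # local cache path under ~/.cache/huggingface/hub
```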
lib/pixielib/utils/config.py CHANGED
@@ -6,6 +6,7 @@ import os
6
 
7
  import yaml
8
  from yacs.config import CfgNode as CN
 
9
 
10
  cfg = CN()
11
 
@@ -13,7 +14,9 @@ abs_pixie_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", ".
13
  cfg.pixie_dir = abs_pixie_dir
14
  cfg.device = "cuda"
15
  cfg.device_id = "0"
16
- cfg.pretrained_modelpath = os.path.join(cfg.pixie_dir, "data/HPS/pixie_data", "pixie_model.tar")
 
 
17
  # smplx parameter settings
18
  cfg.params = CN()
19
  cfg.params.body_list = ["body_cam", "global_pose", "partbody_pose", "neck_pose"]
@@ -29,38 +32,47 @@ cfg.params.hand_share_list = [
29
  # Options for Body model
30
  # ---------------------------------------------------------------------------- #
31
  cfg.model = CN()
32
- cfg.model.topology_path = os.path.join(
33
- cfg.pixie_dir, "data/HPS/pixie_data", "SMPL_X_template_FLAME_uv.obj"
 
 
 
 
 
 
 
 
34
  )
35
- cfg.model.topology_smplxtex_path = os.path.join(
36
- cfg.pixie_dir, "data/HPS/pixie_data", "smplx_tex.obj"
37
  )
38
- cfg.model.topology_smplx_hand_path = os.path.join(
39
- cfg.pixie_dir, "data/HPS/pixie_data", "smplx_hand.obj"
40
  )
41
- cfg.model.smplx_model_path = os.path.join(
42
- cfg.pixie_dir, "data/HPS/pixie_data", "SMPLX_NEUTRAL_2020.npz"
43
  )
44
- cfg.model.face_mask_path = os.path.join(cfg.pixie_dir, "data/HPS/pixie_data", "uv_face_mask.png")
45
- cfg.model.face_eye_mask_path = os.path.join(
46
- cfg.pixie_dir, "data/HPS/pixie_data", "uv_face_eye_mask.png"
47
  )
48
- cfg.model.tex_path = os.path.join(cfg.pixie_dir, "data/HPS/pixie_data", "FLAME_albedo_from_BFM.npz")
49
- cfg.model.extra_joint_path = os.path.join(
50
- cfg.pixie_dir, "data/HPS/pixie_data", "smplx_extra_joints.yaml"
51
  )
52
- cfg.model.j14_regressor_path = os.path.join(
53
- cfg.pixie_dir, "data/HPS/pixie_data", "SMPLX_to_J14.pkl"
54
  )
55
- cfg.model.flame2smplx_cached_path = os.path.join(
56
- cfg.pixie_dir, "data/HPS/pixie_data", "flame2smplx_tex_1024.npy"
57
  )
58
- cfg.model.smplx_tex_path = os.path.join(cfg.pixie_dir, "data/HPS/pixie_data", "smplx_tex.png")
59
- cfg.model.mano_ids_path = os.path.join(
60
- cfg.pixie_dir, "data/HPS/pixie_data", "MANO_SMPLX_vertex_ids.pkl"
 
61
  )
62
- cfg.model.flame_ids_path = os.path.join(
63
- cfg.pixie_dir, "data/HPS/pixie_data", "SMPL-X__FLAME_vertex_ids.npy"
 
 
64
  )
65
  cfg.model.uv_size = 256
66
  cfg.model.n_shape = 200
 
6
 
7
  import yaml
8
  from yacs.config import CfgNode as CN
9
+ from huggingface_hub import hf_hub_download
10
 
11
  cfg = CN()
12
 
 
14
  cfg.pixie_dir = abs_pixie_dir
15
  cfg.device = "cuda"
16
  cfg.device_id = "0"
17
+ cfg.pretrained_modelpath = hf_hub_download(
18
+ repo_id="Yuliang/PIXIE", filename="pixie_model.tar", use_auth_token=os.environ["ICON"]
19
+ )
20
  # smplx parameter settings
21
  cfg.params = CN()
22
  cfg.params.body_list = ["body_cam", "global_pose", "partbody_pose", "neck_pose"]
 
32
  # Options for Body model
33
  # ---------------------------------------------------------------------------- #
34
  cfg.model = CN()
35
+ cfg.model.topology_path = hf_hub_download(
36
+ repo_id="Yuliang/PIXIE",
37
+ use_auth_token=os.environ["ICON"],
38
+ filename="SMPL_X_template_FLAME_uv.obj"
39
+ )
40
+ cfg.model.topology_smplxtex_path = hf_hub_download(
41
+ repo_id="Yuliang/PIXIE", use_auth_token=os.environ["ICON"], filename="smplx_tex.obj"
42
+ )
43
+ cfg.model.topology_smplx_hand_path = hf_hub_download(
44
+ repo_id="Yuliang/PIXIE", use_auth_token=os.environ["ICON"], filename="smplx_hand.obj"
45
  )
46
+ cfg.model.smplx_model_path = hf_hub_download(
47
+ repo_id="Yuliang/PIXIE", use_auth_token=os.environ["ICON"], filename="SMPLX_NEUTRAL_2020.npz"
48
  )
49
+ cfg.model.face_mask_path = hf_hub_download(
50
+ repo_id="Yuliang/PIXIE", use_auth_token=os.environ["ICON"], filename="uv_face_mask.png"
51
  )
52
+ cfg.model.face_eye_mask_path = hf_hub_download(
53
+ repo_id="Yuliang/PIXIE", use_auth_token=os.environ["ICON"], filename="uv_face_eye_mask.png"
54
  )
55
+ cfg.model.extra_joint_path = hf_hub_download(
56
+ repo_id="Yuliang/PIXIE", use_auth_token=os.environ["ICON"], filename="smplx_extra_joints.yaml"
 
57
  )
58
+ cfg.model.j14_regressor_path = hf_hub_download(
59
+ repo_id="Yuliang/PIXIE", use_auth_token=os.environ["ICON"], filename="SMPLX_to_J14.pkl"
 
60
  )
61
+ cfg.model.flame2smplx_cached_path = hf_hub_download(
62
+ repo_id="Yuliang/PIXIE", use_auth_token=os.environ["ICON"], filename="flame2smplx_tex_1024.npy"
63
  )
64
+ cfg.model.smplx_tex_path = hf_hub_download(
65
+ repo_id="Yuliang/PIXIE", use_auth_token=os.environ["ICON"], filename="smplx_tex.png"
66
  )
67
+ cfg.model.mano_ids_path = hf_hub_download(
68
+ repo_id="Yuliang/PIXIE",
69
+ use_auth_token=os.environ["ICON"],
70
+ filename="MANO_SMPLX_vertex_ids.pkl"
71
  )
72
+ cfg.model.flame_ids_path = hf_hub_download(
73
+ repo_id="Yuliang/PIXIE",
74
+ use_auth_token=os.environ["ICON"],
75
+ filename="SMPL-X__FLAME_vertex_ids.npy"
76
  )
77
  cfg.model.uv_size = 256
78
  cfg.model.n_shape = 200
lib/smplx/body_models.py CHANGED
@@ -1015,12 +1015,12 @@ class SMPLX(SMPLH):
1015
  """
1016
 
1017
  # Load the model
1018
- if osp.isdir(model_path):
1019
- model_fn = "SMPLX_{}.{ext}".format(gender.upper(), ext=ext)
1020
- smplx_path = os.path.join(model_path, model_fn)
1021
- else:
1022
- smplx_path = model_path
1023
- assert osp.exists(smplx_path), "Path {} does not exist!".format(smplx_path)
1024
 
1025
  if ext == "pkl":
1026
  with open(smplx_path, "rb") as smplx_file:
 
1015
  """
1016
 
1017
  # Load the model
1018
+ from huggingface_hub import hf_hub_download
1019
+
1020
+ model_fn = "SMPLX_{}.{ext}".format(gender.upper(), ext=ext)
1021
+ smplx_path = hf_hub_download(
1022
+ repo_id=model_path, use_auth_token=os.environ["ICON"], filename=f"models/{model_fn}"
1023
+ )
1024
 
1025
  if ext == "pkl":
1026
  with open(smplx_path, "rb") as smplx_file:
loose.txt DELETED
@@ -1,100 +0,0 @@
1
- renderpeople/rp_yasmin_posed_007
2
- renderpeople/rp_victoria_posed_006
3
- renderpeople/rp_tilda_posed_005
4
- renderpeople/rp_tiffany_posed_015
5
- renderpeople/rp_tanja_posed_018
6
- renderpeople/rp_stephanie_posed_010
7
- renderpeople/rp_stacy_posed_002
8
- renderpeople/rp_serena_posed_027
9
- renderpeople/rp_serena_posed_024
10
- renderpeople/rp_seiko_posed_031
11
- renderpeople/rp_seiko_posed_015
12
- renderpeople/rp_saki_posed_033
13
- renderpeople/rp_rosy_posed_014
14
- renderpeople/rp_rosy_posed_001
15
- renderpeople/rp_roberta_posed_022
16
- renderpeople/rp_rick_posed_016
17
- renderpeople/rp_ray_posed_007
18
- renderpeople/rp_ramon_posed_002
19
- renderpeople/rp_ralph_posed_013
20
- renderpeople/rp_philip_posed_030
21
- renderpeople/rp_petra_posed_008
22
- renderpeople/rp_olivia_posed_014
23
- renderpeople/rp_olivia_posed_007
24
- renderpeople/rp_naomi_posed_034
25
- renderpeople/rp_naomi_posed_030
26
- renderpeople/rp_martha_posed_002
27
- renderpeople/rp_martha_posed_001
28
- renderpeople/rp_marleen_posed_002
29
- renderpeople/rp_lina_posed_004
30
- renderpeople/rp_kylie_posed_017
31
- renderpeople/rp_kylie_posed_006
32
- renderpeople/rp_kylie_posed_003
33
- renderpeople/rp_kent_posed_005
34
- renderpeople/rp_kent_posed_002
35
- renderpeople/rp_julia_posed_022
36
- renderpeople/rp_julia_posed_014
37
- renderpeople/rp_judy_posed_002
38
- renderpeople/rp_jessica_posed_058
39
- renderpeople/rp_jessica_posed_022
40
- renderpeople/rp_jennifer_posed_003
41
- renderpeople/rp_janna_posed_046
42
- renderpeople/rp_janna_posed_043
43
- renderpeople/rp_janna_posed_034
44
- renderpeople/rp_janna_posed_019
45
- renderpeople/rp_janett_posed_016
46
- renderpeople/rp_jamal_posed_012
47
- renderpeople/rp_helen_posed_037
48
- renderpeople/rp_fiona_posed_002
49
- renderpeople/rp_felice_posed_005
50
- renderpeople/rp_felice_posed_004
51
- renderpeople/rp_eve_posed_003
52
- renderpeople/rp_eve_posed_002
53
- renderpeople/rp_eve_posed_001
54
- renderpeople/rp_eric_posed_048
55
- renderpeople/rp_emma_posed_029
56
- renderpeople/rp_ellie_posed_015
57
- renderpeople/rp_ellie_posed_014
58
- renderpeople/rp_debra_posed_016
59
- renderpeople/rp_debra_posed_014
60
- renderpeople/rp_debra_posed_004
61
- renderpeople/rp_corey_posed_020
62
- renderpeople/rp_corey_posed_009
63
- renderpeople/rp_corey_posed_004
64
- renderpeople/rp_cody_posed_016
65
- renderpeople/rp_claudia_posed_034
66
- renderpeople/rp_claudia_posed_033
67
- renderpeople/rp_claudia_posed_024
68
- renderpeople/rp_claudia_posed_025
69
- renderpeople/rp_cindy_posed_020
70
- renderpeople/rp_christine_posed_023
71
- renderpeople/rp_christine_posed_022
72
- renderpeople/rp_christine_posed_020
73
- renderpeople/rp_christine_posed_010
74
- renderpeople/rp_carla_posed_016
75
- renderpeople/rp_caren_posed_009
76
- renderpeople/rp_caren_posed_008
77
- renderpeople/rp_brandon_posed_006
78
- renderpeople/rp_belle_posed_001
79
- renderpeople/rp_beatrice_posed_025
80
- renderpeople/rp_beatrice_posed_024
81
- renderpeople/rp_beatrice_posed_023
82
- renderpeople/rp_beatrice_posed_021
83
- renderpeople/rp_beatrice_posed_019
84
- renderpeople/rp_beatrice_posed_017
85
- renderpeople/rp_anna_posed_008
86
- renderpeople/rp_anna_posed_007
87
- renderpeople/rp_anna_posed_006
88
- renderpeople/rp_anna_posed_003
89
- renderpeople/rp_anna_posed_001
90
- renderpeople/rp_alvin_posed_016
91
- renderpeople/rp_alison_posed_028
92
- renderpeople/rp_alison_posed_024
93
- renderpeople/rp_alison_posed_017
94
- renderpeople/rp_alexandra_posed_022
95
- renderpeople/rp_alexandra_posed_023
96
- renderpeople/rp_alexandra_posed_019
97
- renderpeople/rp_alexandra_posed_018
98
- renderpeople/rp_alexandra_posed_013
99
- renderpeople/rp_alexandra_posed_012
100
- renderpeople/rp_alexandra_posed_011
 
packages.txt ADDED
@@ -0,0 +1,13 @@
1
+ libgl1
2
+ freeglut3-dev
3
+ unzip
4
+ ffmpeg
5
+ libsm6
6
+ libxext6
7
+ libgl1-mesa-dri
8
+ libegl1-mesa
9
+ libgbm1
10
+ build-essential
11
+ python-wheel
12
+ libturbojpeg
13
+ libeigen3-dev
pose.txt DELETED
@@ -1,100 +0,0 @@
1
- cape/00215-jerseyshort-pose_model-000200
2
- cape/00134-longlong-ballet4_trial2-000250
3
- cape/00134-longlong-badminton_trial1-000230
4
- cape/00134-longlong-frisbee_trial1-000190
5
- cape/03375-shortlong-ballet1_trial1-000210
6
- cape/03375-longlong-babysit_trial2-000110
7
- cape/00134-shortlong-stretch_trial1-000310
8
- cape/03375-shortshort-lean_trial1-000060
9
- cape/03375-shortshort-swim_trial2-000110
10
- cape/03375-longlong-box_trial1-000190
11
- cape/03375-longlong-row_trial2-000150
12
- cape/00134-shortlong-hockey_trial1-000140
13
- cape/00134-shortlong-hockey_trial1-000090
14
- cape/00134-longlong-ski_trial2-000200
15
- cape/00134-longlong-stretch_trial1-000450
16
- cape/00096-shirtshort-soccer-000160
17
- cape/03375-shortshort-hands_up_trial2-000270
18
- cape/03375-shortshort-ballet1_trial1-000110
19
- cape/03375-longlong-babysit_trial2-000150
20
- cape/03375-shortshort-fashion_trial1-000140
21
- cape/00134-shortlong-ballet2_trial1-000110
22
- cape/00134-longlong-ballet2_trial1-000120
23
- cape/00134-shortlong-ballet2_trial1-000120
24
- cape/00134-shortlong-ballet2_trial1-000090
25
- cape/00134-longlong-ballet2_trial2-000110
26
- cape/00134-longlong-volleyball_trial2-000050
27
- cape/00134-longlong-stretch_trial1-000500
28
- cape/00134-longlong-housework_trial1-000380
29
- cape/00134-shortlong-dig_trial1-000150
30
- cape/03375-longlong-catchpick_trial1-000110
31
- cape/03375-shortlong-ballet1_trial1-000250
32
- cape/03375-shortlong-shoulders_trial1-000360
33
- cape/03375-shortlong-slack_trial2-000070
34
- cape/03375-shortlong-shoulders_trial1-000220
35
- cape/03375-shortlong-stretch_trial1-000330
36
- cape/00127-shortlong-ballerina_spin-000080
37
- cape/00127-shortlong-ballerina_spin-000200
38
- cape/00096-shortshort-basketball-000100
39
- cape/00096-shortshort-ballerina_spin-000160
40
- cape/00134-longlong-stretch_trial2-000440
41
- cape/02474-longlong-ATUsquat-000100
42
- cape/03375-longlong-ATUsquat_trial1-000120
43
- cape/02474-longlong-ATUsquat-000110
44
- cape/00134-longlong-ballet1_trial1-000180
45
- cape/00096-shirtlong-ATUsquat-000130
46
- cape/00032-shortshort-pose_model-000030
47
- cape/00134-shortlong-athletics_trial2-000070
48
- cape/00032-longshort-pose_model-000060
49
- cape/00032-shortshort-shoulders_mill-000060
50
- cape/00127-shortlong-pose_model-000430
51
- cape/00122-shortshort-ATUsquat-000120
52
- cape/00032-shortshort-bend_back_and_front-000220
53
- cape/00096-shortshort-squats-000180
54
- cape/00032-shortlong-squats-000090
55
- cape/03375-shortlong-ATUsquat_trial2-000080
56
- cape/03375-shortshort-lean_trial1-000130
57
- cape/03375-blazerlong-music_trial1-000150
58
- cape/03284-longlong-hips-000170
59
- cape/03375-shortlong-shoulders_trial1-000370
60
- cape/03375-shortlong-ballet1_trial1-000290
61
- cape/00215-jerseyshort-shoulders_mill-000320
62
- cape/00215-poloshort-soccer-000110
63
- cape/00122-shortshort-punching-000170
64
- cape/00096-jerseyshort-shoulders_mill-000140
65
- cape/00032-longshort-flying_eagle-000240
66
- cape/00134-shortlong-swim_trial1-000160
67
- cape/03375-shortshort-music_trial1-000120
68
- cape/03375-shortshort-handball_trial1-000120
69
- cape/00215-longshort-punching-000060
70
- cape/00134-shortlong-swim_trial2-000120
71
- cape/03375-shortshort-hands_up_trial1-000140
72
- cape/03375-shortshort-hands_up_trial1-000270
73
- cape/03375-shortshort-volleyball_trial1-000110
74
- cape/03375-shortshort-swim_trial1-000270
75
- cape/03375-longlong-row_trial2-000190
76
- cape/00215-poloshort-flying_eagle-000120
77
- cape/03223-shortshort-flying_eagle-000280
78
- cape/00096-shirtlong-shoulders_mill-000110
79
- cape/00096-shirtshort-pose_model-000190
80
- cape/03375-shortshort-swim_trial1-000190
81
- cape/03375-shortlong-music_trial2-000040
82
- cape/03375-shortlong-babysit_trial2-000070
83
- cape/00215-jerseyshort-flying_eagle-000110
84
- cape/03375-blazerlong-music_trial1-000030
85
- cape/03375-longlong-volleyball_trial2-000230
86
- cape/03375-blazerlong-lean_trial2-000110
87
- cape/03375-longlong-box_trial2-000110
88
- cape/03375-longlong-drinkeat_trial2-000050
89
- cape/00134-shortlong-slack_trial1-000150
90
- cape/03375-shortshort-climb_trial1-000170
91
- cape/00032-longshort-tilt_twist_left-000060
92
- cape/00215-longshort-chicken_wings-000060
93
- cape/00215-poloshort-bend_back_and_front-000130
94
- cape/03223-longshort-flying_eagle-000480
95
- cape/00215-longshort-bend_back_and_front-000100
96
- cape/00215-longshort-tilt_twist_left-000130
97
- cape/00096-longshort-tilt_twist_left-000150
98
- cape/03284-longshort-twist_tilt_left-000080
99
- cape/03223-shortshort-flying_eagle-000270
100
- cape/02474-longshort-improvise-000080
 
requirements.txt CHANGED
@@ -1,3 +1,11 @@
 
 
 
 
 
 
 
 
1
  matplotlib
2
  scikit-image
3
  trimesh
@@ -15,4 +23,10 @@ einops
15
  boto3
16
  open3d
17
  xatlas
 
 
 
 
18
  git+https://github.com/YuliangXiu/rembg.git
 
 
 
1
+ --extra-index-url https://download.pytorch.org/whl/cu116
2
+ torch==1.13.1+cu116
3
+ torchvision==0.14.1+cu116
4
+ fvcore
5
+ iopath
6
+ pyembree
7
+ cupy
8
+ cython
9
  matplotlib
10
  scikit-image
11
  trimesh
 
23
  boto3
24
  open3d
25
  xatlas
26
+ transformers
27
+ controlnet_aux
28
+ xformers==0.0.16
29
+ triton
30
  git+https://github.com/YuliangXiu/rembg.git
31
+ git+https://github.com/huggingface/diffusers.git
32
+ git+https://github.com/huggingface/accelerate.git