## Run the code on the custom dataset
Please let me know if you have any problems running the code on your own data.
If your data already have SMPL parameters, just export the SMPL parameters and SMPL vertices to two directories, `params` and `vertices`. If you do not have SMPL parameters, you can obtain them in the following ways:

- For a multi-view video, you can estimate SMPL parameters using https://github.com/zju3dv/EasyMocap. The output parameter files can be processed using the script.
- For a monocular video, you can estimate SMPL parameters using https://github.com/thmoa/videoavatars. The output `reconstructed_poses.hdf5` file can be processed following the instruction.
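Whichever estimator you use, the per-frame outputs end up as `<frame_idx>.npy` files in `params` and `vertices`. The sketch below shows one way to write them out with NumPy; the parameter keys (`poses`, `shapes`, `Rh`, `Th`) follow the ZJU-MoCap convention and are an assumption here, so adapt them to whatever your estimator produces.

```python
import os
import numpy as np

def export_frame(root, frame_idx, params, vertices):
    """Write one frame's SMPL parameters and posed vertices as <frame_idx>.npy.

    `params` is assumed to be a dict of numpy arrays (keys following the
    ZJU-MoCap convention); `vertices` is an (N, 3) array of posed vertices.
    """
    os.makedirs(os.path.join(root, "params"), exist_ok=True)
    os.makedirs(os.path.join(root, "vertices"), exist_ok=True)
    np.save(os.path.join(root, "params", f"{frame_idx}.npy"), params)
    np.save(os.path.join(root, "vertices", f"{frame_idx}.npy"),
            vertices.astype(np.float32))

# hypothetical usage with dummy data (SMPL has 6890 vertices)
params = {"poses": np.zeros((1, 72)), "shapes": np.zeros((1, 10)),
          "Rh": np.zeros((1, 3)), "Th": np.zeros((1, 3))}
vertices = np.zeros((6890, 3))
export_frame("/tmp/custom_dataset", 0, params, vertices)
```

Note that saving a dict with `np.save` pickles it, so it must be read back with `np.load(..., allow_pickle=True).item()`.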
Organize the dataset as the following structure. Please refer to `CoreView_392` of the ZJU-MoCap dataset as an example.

The `annots.npy` is generated by `get_annots.py`. This code is used here to show the format of `annots.npy`. Please revise it according to your input camera parameters and image paths.
Example camera files can be found in `camera_params`.

```
├── /path/to/dataset
│   ├── annots.npy      // Store the camera parameters and image paths.
│   ├── params
│   │   ├── 0.npy
│   │   ├── ...
│   │   └── 1234.npy
│   ├── vertices
│   │   ├── 0.npy
│   │   ├── ...
│   │   └── 1234.npy
│   ├── Camera_B1       // Store the images. No restrictions on the directory name.
│   │   ├── 00000.jpg
│   │   └── ...
│   ├── Camera_B2
│   │   ├── 00000.jpg
│   │   └── ...
│   ├── ...
│   ├── Camera_B23
│   │   ├── 00000.jpg
│   │   └── ...
│   └── mask_cihp       // Store the foreground segmentation. The directory name must be "mask_cihp".
│       ├── Camera_B1
│       │   ├── 00000.png
│       │   └── ...
│       ├── Camera_B2
│       │   ├── 00000.png
│       │   └── ...
│       ├── ...
│       └── Camera_B23
│           ├── 00000.png
│           └── ...
```
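As a rough sketch of what `annots.npy` contains: the layout below mirrors what `get_annots.py` produces for ZJU-MoCap (per-camera intrinsics and extrinsics under `cams`, per-frame image paths under `ims`), but treat the exact keys as an assumption and verify them against `get_annots.py` for your setup.

```python
import numpy as np

# Assumed annots.npy layout, mirroring get_annots.py for ZJU-MoCap.
num_cams, num_frames = 2, 3
annots = {
    "cams": {
        "K": [np.eye(3) for _ in range(num_cams)],         # 3x3 intrinsics
        "R": [np.eye(3) for _ in range(num_cams)],         # 3x3 rotation
        "T": [np.zeros((3, 1)) for _ in range(num_cams)],  # 3x1 translation
        "D": [np.zeros((5, 1)) for _ in range(num_cams)],  # distortion coeffs
    },
    "ims": [
        # one entry per frame, listing the image path seen by each camera
        {"ims": [f"Camera_B{c + 1}/{f:05d}.jpg" for c in range(num_cams)]}
        for f in range(num_frames)
    ],
}
np.save("annots.npy", annots)

loaded = np.load("annots.npy", allow_pickle=True).item()
print(loaded["ims"][0]["ims"][0])  # Camera_B1/00000.jpg
```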
Use `configs/multi_view_custom.yaml` or `configs/monocular_custom.yaml` for training. Note that you need to revise the `train_dataset` and `test_dataset` in the yaml file.

```shell
# train from scratch
python train_net.py --cfg_file configs/multi_view_custom.yaml exp_name <exp_name> resume False
# resume training
python train_net.py --cfg_file configs/multi_view_custom.yaml exp_name <exp_name> resume True
```
Revise the `num_train_frame` and `training_view` in `custom.yaml` according to your data. Or you can specify them in the command line:

```shell
python train_net.py --cfg_file configs/custom.yaml exp_name <exp_name> num_train_frame 1000 training_view "0, 1, 2, 3" resume False
```
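A quick way to catch misconfigured values before training is to compare them against the dataset on disk. The helper below is hypothetical (not part of the repo) and assumes the directory layout shown above; it checks that `num_train_frame` does not exceed the number of exported SMPL parameter files and that each view index maps to a camera directory.

```python
import os

def check_custom_dataset(root, num_train_frame, training_view):
    # Hypothetical sanity check: verify that the values intended for
    # custom.yaml are consistent with the dataset on disk.
    n_frames = len([f for f in os.listdir(os.path.join(root, "params"))
                    if f.endswith(".npy")])
    if num_train_frame > n_frames:
        raise ValueError(f"num_train_frame={num_train_frame}, "
                         f"but only {n_frames} frames have SMPL params")
    views = [int(v) for v in training_view.split(",")]
    cam_dirs = sorted(d for d in os.listdir(root)
                      if d not in ("params", "vertices", "mask_cihp")
                      and os.path.isdir(os.path.join(root, d)))
    for v in views:
        if v >= len(cam_dirs):
            raise ValueError(f"training_view {v} exceeds {len(cam_dirs)} cameras")
    return n_frames, len(cam_dirs)

# build a tiny dummy dataset to exercise the check
root = "/tmp/check_demo"
for d in ("params", "vertices", "Camera_B1", "Camera_B2"):
    os.makedirs(os.path.join(root, d), exist_ok=True)
for i in range(5):
    open(os.path.join(root, "params", f"{i}.npy"), "w").close()
print(check_custom_dataset(root, 5, "0, 1"))  # (5, 2)
```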
Visualization. Please refer to "Visualization on ZJU-MoCap" as an example.
- Visualize novel views of a single frame:

  ```shell
  python run.py --type visualize --cfg_file configs/multi_view_custom.yaml exp_name <exp_name> vis_novel_view True num_render_views 100
  ```
- Visualize dynamic humans with a fixed camera:

  ```shell
  python run.py --type visualize --cfg_file configs/multi_view_custom.yaml exp_name <exp_name> vis_novel_pose True num_render_frame 1000 num_render_views 1
  ```
- Visualize dynamic humans with a rotated camera:

  ```shell
  python run.py --type visualize --cfg_file configs/multi_view_custom.yaml exp_name <exp_name> vis_novel_pose True num_render_frame 1000
  ```
- Visualize mesh. `mesh_th` is the iso-surface threshold of the Marching Cubes algorithm.

  ```shell
  # generate meshes
  python run.py --type visualize --cfg_file configs/multi_view_custom.yaml exp_name <exp_name> vis_mesh True mesh_th 10
  # visualize a specific mesh
  python tools/render_mesh.py --exp_name <exp_name> --dataset zju_mocap --mesh_ind 0
  ```
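To build intuition for `mesh_th`: Marching Cubes extracts the surface where the predicted density equals the threshold, so a lower threshold generally yields a larger ("fatter") mesh and a higher one a tighter mesh. The toy example below (not the repo's code) illustrates this on a synthetic blob-shaped density volume by counting the voxels enclosed at two thresholds.

```python
import numpy as np

# Synthetic density volume: a smooth blob peaking at 50 in the center.
coords = np.linspace(-1, 1, 64)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
density = 50.0 * np.exp(-4.0 * (x**2 + y**2 + z**2))

# The iso-surface at mesh_th encloses all voxels with density > mesh_th,
# so a lower threshold encloses more voxels (a larger mesh).
inside_lo = int((density > 5).sum())    # voxels enclosed at mesh_th = 5
inside_hi = int((density > 10).sum())   # voxels enclosed at mesh_th = 10
print(inside_lo > inside_hi)  # True
```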