
Run the code on the custom dataset

Please let me know if you run into any problems running the code on your own data.

  1. If your data already has SMPL parameters, export the SMPL parameters and SMPL vertices to two directories, params and vertices. If you do not have SMPL parameters, you will need to estimate them first.
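    A minimal sketch of the export step, assuming each frame i is saved as i.npy in both directories (matching the tree in step 2). The keys inside each params file ('poses', 'shapes', 'Rh', 'Th') and the 6890-vertex SMPL mesh follow ZJU-MoCap conventions and are assumptions here; adapt them to your own fitting pipeline.

    ```python
    import os
    import tempfile
    import numpy as np

    out_root = tempfile.mkdtemp()  # stand-in for /path/to/dataset
    os.makedirs(os.path.join(out_root, "params"), exist_ok=True)
    os.makedirs(os.path.join(out_root, "vertices"), exist_ok=True)

    num_frames = 3  # tiny toy example
    for i in range(num_frames):
        params = {
            "poses": np.zeros((1, 72)),   # SMPL pose parameters (axis-angle)
            "shapes": np.zeros((1, 10)),  # SMPL shape parameters
            "Rh": np.zeros((1, 3)),       # global rotation
            "Th": np.zeros((1, 3)),       # global translation
        }
        vertices = np.zeros((6890, 3))    # posed SMPL mesh vertices
        np.save(os.path.join(out_root, "params", f"{i}.npy"), params)
        np.save(os.path.join(out_root, "vertices", f"{i}.npy"), vertices)
    ```

    Saving a dict with np.save pickles it, so it must be read back with np.load(..., allow_pickle=True).item().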

  2. Organize the dataset in the following structure. Please refer to CoreView_392 of the ZJU-MoCap dataset as an example.
    The annots.npy file is generated by get_annots.py, which illustrates the format of annots.npy; revise it according to your own camera parameters and image paths.
    Example camera files can be found in camera_params.

    β”œβ”€β”€ /path/to/dataset
    β”‚   β”œβ”€β”€ annots.npy  // Store the camera parameters and image paths.
    β”‚   β”œβ”€β”€ params
    β”‚   β”‚   β”œβ”€β”€ 0.npy
    β”‚   β”‚   β”œβ”€β”€ ...
    β”‚   β”‚   β”œβ”€β”€ 1234.npy
    β”‚   β”œβ”€β”€ vertices
    β”‚   β”‚   β”œβ”€β”€ 0.npy
    β”‚   β”‚   β”œβ”€β”€ ...
    β”‚   β”‚   β”œβ”€β”€ 1234.npy
    β”‚   β”œβ”€β”€ Camera_B1  // Store the images. No restrictions on the directory name.
    β”‚   β”‚   β”œβ”€β”€ 00000.jpg
    β”‚   β”‚   β”œβ”€β”€ ...
    β”‚   β”œβ”€β”€ Camera_B2
    β”‚   β”‚   β”œβ”€β”€ 00000.jpg
    β”‚   β”‚   β”œβ”€β”€ ...
    β”‚   β”œβ”€β”€ ...
    β”‚   β”œβ”€β”€ Camera_B23
    β”‚   β”‚   β”œβ”€β”€ 00000.jpg
    β”‚   β”‚   β”œβ”€β”€ ...
    β”‚   β”œβ”€β”€ mask_cihp  // Store the foreground segmentation. The directory name must be "mask_cihp".
    β”‚   β”‚   β”œβ”€β”€ Camera_B1
    β”‚   β”‚   β”‚   β”œβ”€β”€ 00000.png
    β”‚   β”‚   β”‚   β”œβ”€β”€ ...
    β”‚   β”‚   β”œβ”€β”€ Camera_B2
    β”‚   β”‚   β”‚   β”œβ”€β”€ 00000.png
    β”‚   β”‚   β”‚   β”œβ”€β”€ ...
    β”‚   β”‚   β”œβ”€β”€ ...
    β”‚   β”‚   β”œβ”€β”€ Camera_B23
    β”‚   β”‚   β”‚   β”œβ”€β”€ 00000.png
    β”‚   β”‚   β”‚   β”œβ”€β”€ ...
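    For reference, a toy sketch of the annots.npy layout as produced by get_annots.py. The key names ('cams' with per-camera K/D/R/T lists, and 'ims' with per-frame image paths) are inferred from ZJU-MoCap conventions; verify them against your copy of get_annots.py.

    ```python
    import os
    import tempfile
    import numpy as np

    num_cams, num_frames = 2, 3  # toy sizes
    cams = {
        "K": [np.eye(3) for _ in range(num_cams)],         # 3x3 intrinsics
        "D": [np.zeros((5, 1)) for _ in range(num_cams)],  # distortion coefficients
        "R": [np.eye(3) for _ in range(num_cams)],         # world-to-camera rotation
        "T": [np.zeros((3, 1)) for _ in range(num_cams)],  # translation
    }
    # one entry per frame, listing that frame's image path in every camera
    ims = [
        {"ims": [f"Camera_B{c + 1}/{i:05d}.jpg" for c in range(num_cams)]}
        for i in range(num_frames)
    ]
    annots = {"cams": cams, "ims": ims}
    path = os.path.join(tempfile.mkdtemp(), "annots.npy")
    np.save(path, annots)
    ```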
    
  3. Use configs/multi_view_custom.yaml or configs/monocular_custom.yaml for training. Note that you need to revise the train_dataset and test_dataset in the yaml file.

    # train from scratch
    python train_net.py --cfg_file configs/multi_view_custom.yaml exp_name <exp_name> resume False
    # resume training
    python train_net.py --cfg_file configs/multi_view_custom.yaml exp_name <exp_name> resume True
    

    Revise num_train_frame and training_view in the yaml file according to your data, or specify them on the command line:

    python train_net.py --cfg_file configs/multi_view_custom.yaml exp_name <exp_name> num_train_frame 1000 training_view "0, 1, 2, 3" resume False
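    A hypothetical sketch of the train_dataset and test_dataset entries to revise; the key names (data_root, human, ann_file, split) follow the ZJU-MoCap configs and should be checked against the yaml files shipped with the code.

    ```yaml
    # sketch only -- verify field names against configs/multi_view_custom.yaml
    train_dataset:
        data_root: '/path/to/dataset'
        human: 'my_human'
        ann_file: '/path/to/dataset/annots.npy'
        split: 'train'

    test_dataset:
        data_root: '/path/to/dataset'
        human: 'my_human'
        ann_file: '/path/to/dataset/annots.npy'
        split: 'test'
    ```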
    
  4. Visualization. Please refer to Visualization on ZJU-MoCap as an example.

    • Visualize novel views of a single frame.
    python run.py --type visualize --cfg_file configs/multi_view_custom.yaml exp_name <exp_name> vis_novel_view True num_render_views 100
    
    • Visualize views of a dynamic human with a fixed camera.
    python run.py --type visualize --cfg_file configs/multi_view_custom.yaml exp_name <exp_name> vis_novel_pose True num_render_frame 1000 num_render_views 1
    
    • Visualize views of a dynamic human with a rotating camera.
    python run.py --type visualize --cfg_file configs/multi_view_custom.yaml exp_name <exp_name> vis_novel_pose True num_render_frame 1000
    
    • Visualize meshes. mesh_th is the iso-surface threshold of the Marching Cubes algorithm.
    # generate meshes
    python run.py --type visualize --cfg_file configs/multi_view_custom.yaml exp_name <exp_name> vis_mesh True mesh_th 10
    # visualize a specific mesh
    python tools/render_mesh.py --exp_name <exp_name> --dataset zju_mocap --mesh_ind 0
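    To build intuition for mesh_th: marching cubes extracts the level set where the predicted density equals the threshold, so raising mesh_th shrinks the extracted surface. A toy NumPy sketch (not the repository's code) on a synthetic density volume:

    ```python
    import numpy as np

    # Toy density volume: a radially decaying field peaking at the center.
    grid = np.linspace(-1.0, 1.0, 32)
    x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
    density = 50.0 * np.exp(-5.0 * (x**2 + y**2 + z**2))

    # Marching cubes extracts the surface where density == mesh_th;
    # voxels above the threshold lie inside the resulting mesh.
    inside_lo = density > 5.0
    inside_hi = density > 10.0
    print(inside_lo.sum(), inside_hi.sum())  # the higher threshold keeps fewer voxels
    ```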