SAT-HMR

Official PyTorch implementation of our paper:

SAT-HMR: Real-Time Multi-Person 3D Mesh Estimation via Scale-Adaptive Tokens

Chi Su, Xiaoxuan Ma, Jiajun Su, Yizhou Wang

Paper | Project Page | Video | GitHub

Installation

We tested with Python 3.11, PyTorch 2.4.1, and CUDA 12.1.

  1. Create a conda environment.
conda create -n sathmr python=3.11 -y
conda activate sathmr
  2. Install PyTorch and xFormers.
# Install PyTorch. It is recommended to follow the [official instructions](https://pytorch.org/) and adapt the CUDA version to yours.
conda install pytorch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 pytorch-cuda=12.1 -c pytorch -c nvidia

# Install xFormers. It is recommended to follow the [official instructions](https://github.com/facebookresearch/xformers) and adapt the CUDA version to yours.
pip install -U xformers==0.0.28.post1 --index-url https://download.pytorch.org/whl/cu121
  3. Install other dependencies.
pip install -r requirements.txt
  4. You may need to modify the chumpy package to avoid errors. For detailed instructions, please check this guidance.
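For context: on recent NumPy versions, importing chumpy can fail because its `__init__.py` imports aliases (`bool`, `int`, `float`, …) that NumPy has removed. The linked guidance has the authoritative steps; as a rough, hypothetical sketch, the usual patch rewrites that import line (demonstrated here on a stand-in file, since the real path depends on your environment, e.g. `$CONDA_PREFIX/lib/python3.11/site-packages/chumpy/__init__.py`):

```shell
# Hypothetical sketch of the common chumpy patch, shown on a stand-in copy
# of chumpy's __init__.py rather than the real installed file.
CH_INIT=$(mktemp)
echo 'from numpy import bool, int, float, complex, object, unicode, str, nan, inf' > "$CH_INIT"

# Drop the aliases removed from NumPy, keeping only the names that still exist.
sed -i 's/^from numpy import .*/from numpy import nan, inf/' "$CH_INIT"
cat "$CH_INIT"
```

Apply the same substitution to the actual installed file if `import chumpy` raises an `ImportError` mentioning these names.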

Download Models & Weights

  1. Download SMPL-related weights.

    • Download basicModel_f_lbs_10_207_0_v1.0.0.pkl, basicModel_m_lbs_10_207_0_v1.0.0.pkl, and basicModel_neutral_lbs_10_207_0_v1.0.0.pkl from here (female & male) and here (neutral) to ${Project}/weights/smpl_data/smpl. Please rename them as SMPL_FEMALE.pkl, SMPL_MALE.pkl, and SMPL_NEUTRAL.pkl, respectively.
    • Download the others from Google Drive and put them in ${Project}/weights/smpl_data/smpl.
  2. Download DINOv2 pretrained weights from their official repository. We use ViT-B/14 distilled (without registers). Please put dinov2_vitb14_pretrain.pth in ${Project}/weights/dinov2. These weights are used to initialize our encoder. You can skip this step if you are not going to train SAT-HMR.

  3. Download pretrained weights for inference and evaluation from Google Drive or 🤗 HuggingFace. Please put them in ${Project}/weights/sat_hmr.

The weights directory structure should now look like this:

${Project}
|-- weights
    |-- dinov2
    |   `-- dinov2_vitb14_pretrain.pth
    |-- sat_hmr
    |   `-- sat_644.pth
    `-- smpl_data
        `-- smpl
            |-- body_verts_smpl.npy
            |-- J_regressor_h36m_correct.npy
            |-- SMPL_FEMALE.pkl
            |-- SMPL_MALE.pkl
            |-- smpl_mean_params.npz
            `-- SMPL_NEUTRAL.pkl

Inference on Images

Inference with 1 GPU

We provide some demo images in ${Project}/demo. You can run SAT-HMR on all images on a single GPU via:

python main.py --mode infer --cfg demo

Results with overlaid meshes will be saved in ${Project}/demo_results.

You can specify your own inference configuration by modifying ${Project}/configs/run/demo.yaml:

  • input_dir specifies the input image folder.
  • output_dir specifies the output folder.
  • conf_thresh specifies a list of confidence thresholds used for detection. SAT-HMR will run inference once for each threshold in the list.
  • infer_batch_size specifies the batch size used for inference (on a single GPU).
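For reference, a minimal demo.yaml with these options could look like the following (hypothetical values; check the shipped configs/run/demo.yaml for the exact keys and defaults):

```yaml
# Hypothetical example values; see configs/run/demo.yaml for the shipped defaults.
input_dir: demo            # folder containing input images
output_dir: demo_results   # folder where overlaid meshes are saved
conf_thresh: [0.3, 0.5]    # inference runs once per threshold in this list
infer_batch_size: 4        # per-GPU batch size
```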

Inference with Multiple GPUs

You can also try distributed inference on multiple GPUs if your input folder contains a large number of images. Since we use 🤗 Accelerate to launch our distributed runs, you first need to configure 🤗 Accelerate for your system's distributed setup. To do so, run the following command and answer the prompts:

accelerate config

Then run:

accelerate launch main.py --mode infer --cfg demo
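Alternatively, you can skip the interactive configuration and pass the distributed options directly on the command line. As an example (assuming a single machine with 4 GPUs; `--multi_gpu` and `--num_processes` are standard Accelerate launcher flags):

```shell
# Launch distributed inference on 4 GPUs of one machine without a saved config.
accelerate launch --multi_gpu --num_processes 4 main.py --mode infer --cfg demo
```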

Citing

If you find this code useful for your research, please consider citing our paper:

@article{su2024sathmr,
  title={SAT-HMR: Real-Time Multi-Person 3D Mesh Estimation via Scale-Adaptive Tokens},
  author={Su, Chi and Ma, Xiaoxuan and Su, Jiajun and Wang, Yizhou},
  journal={arXiv preprint arXiv:2411.19824},
  year={2024}
}

Acknowledgement

This repo builds on the excellent work of DINOv2, DAB-DETR, DINO, and 🤗 Accelerate. Thanks to these great projects.
