
# Data Preparation

## Preprocessing

The directory should be organized as follows:

Video-3D-LLM # project root
├── data
│   ├── scannet
│   │   ├── scans
│   │   ├── posed_images
│   │   ├── pcd_with_object_aabbs
│   │   └── mask
│   ├── embodiedscan
│   │   ├── embodiedscan_infos_train.pkl
│   │   ├── embodiedscan_infos_val.pkl
│   │   └── embodiedscan_infos_test.pkl
│   ├── metadata
│   │   ├── scannet_select_frames.json
│   │   ├── pcd_discrete_0.1.pkl
│   │   ├── scannet_train_gt_box.json
│   │   └── scannet_val_pred_box.json
│   ├── processed
│   │   ├── multi3drefer_train_llava_style.json
│   │   ├── multi3drefer_val_llava_style.json
│   │   ├── ...
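
Before running the preprocessing scripts, you can sanity-check that everything landed in the right place with a short script like the one below. This is a hypothetical helper, not part of the repo; adjust `ROOT` if your data lives elsewhere.

```python
# check_layout.py -- hypothetical helper, not part of the repo.
# Verifies that the expected data layout exists before preprocessing.
from pathlib import Path

ROOT = Path("data")
EXPECTED = [
    "scannet/scans",
    "scannet/posed_images",
    "scannet/pcd_with_object_aabbs",
    "scannet/mask",
    "embodiedscan/embodiedscan_infos_train.pkl",
    "embodiedscan/embodiedscan_infos_val.pkl",
    "embodiedscan/embodiedscan_infos_test.pkl",
    "metadata/scannet_select_frames.json",
    "metadata/pcd_discrete_0.1.pkl",
]

for rel in EXPECTED:
    path = ROOT / rel
    print(f"[{'ok' if path.exists() else 'MISSING'}] {path}")
```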

### ScanNet v2

1. Download the ScanNet v2 dataset here. The ScanNet folder should look like:
Video-3D-LLM # project root
├── data
│   ├── scannet
│   │   ├── scans
│   │   │   ├── [scene_id]
│   │   │   │   ├── [scene_id]_vh_clean_2.ply
│   │   │   │   ├── [scene_id]_vh_clean_2.0.010000.segs.json
│   │   │   │   ├── [scene_id].aggregation.json
│   │   │   │   ├── [scene_id].txt
│   │   │   │   └── [scene_id].sens
2. Extract the color images, depth images, and camera parameters using the following script, which is modified from EmbodiedScan (a sketch of reading the output follows the command below).
python scripts/3d/preprocessing/generate_image_scannet.py --fast
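
If you want to verify the extraction, one frame can be loaded roughly as below. The exact file names and units (JPG color, 16-bit PNG depth in millimeters, a 4x4 camera-to-world pose in a `.txt`) are assumptions about the script's output; inspect `data/scannet/posed_images` to confirm.

```python
# Hypothetical check of one extracted frame. File names and units are
# assumptions about the output of generate_image_scannet.py.
import numpy as np
from PIL import Image

scene_dir = "data/scannet/posed_images/scene0000_00"
color = np.asarray(Image.open(f"{scene_dir}/00000.jpg"))           # H x W x 3 uint8
depth = np.asarray(Image.open(f"{scene_dir}/00000.png")) / 1000.0  # assumed mm -> m
pose = np.loadtxt(f"{scene_dir}/00000.txt")                        # assumed 4x4 camera-to-world

print(color.shape, depth.shape, pose.shape)
```
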
3. Extract the point clouds for each scene.
python scripts/3d/preprocessing/extract_scannet_pcd.py

This will generate the point clouds and object bounding boxes for each scan.
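
To spot-check the result, you can unpickle one scan. The per-scan file name and the keys inside are assumptions, so print them rather than relying on this sketch.

```python
# Hypothetical peek at one preprocessed scan; the file name and internal
# structure are assumptions -- print the keys to see what is actually there.
import pickle

with open("data/scannet/pcd_with_object_aabbs/scene0000_00.pkl", "rb") as f:
    scan = pickle.load(f)

print(type(scan))
if isinstance(scan, dict):
    print(list(scan.keys()))  # expect point cloud + object AABBs, per the step above
```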

### EmbodiedScan

Download the EmbodiedScan data at this link. You need to fill out the official form to get access to the dataset. Decompress the archive, and the directory should be organized as follows:

├── data
│   ├── metadata
│   │   ├── embodiedscan
│   │   │   ├── embodiedscan_infos_train.pkl
│   │   │   ├── embodiedscan_infos_val.pkl
│   │   │   └── embodiedscan_infos_test.pkl
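
The `.pkl` files are ordinary pickles, so you can peek at one to confirm the download (numpy may be needed to unpickle arrays). Their internal structure is not documented here, so treat the snippet below as a hedged inspection and adjust the path to wherever you decompressed the archive.

```python
# Inspect one EmbodiedScan annotation file. The path follows the tree
# above; the pickle's internal structure is an assumption, so just print it.
import pickle

with open("data/metadata/embodiedscan/embodiedscan_infos_val.pkl", "rb") as f:
    infos = pickle.load(f)

print(type(infos))
if isinstance(infos, dict):
    print(list(infos.keys()))
```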

### Meta Information

1. Prepare the object proposals. For the training set, we directly use the ground-truth boxes via the following command.
python scripts/3d/preprocessing/extract_gt_box.py

For the validation set, we use the object proposals detected by Mask3D. LEO provides the corresponding annotation results here. We place them at data/scannet/mask and process them using the following script (a peek at the resulting files follows the command below).

python scripts/3d/preprocessing/extract_pred_box.py
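
Both commands write box files under data/metadata. Their JSON schema is not documented here; a hedged way to inspect them:

```python
# Hypothetical peek at the object-proposal files; the schema (scene id ->
# boxes) is an assumption -- print a sample entry to confirm.
import json

with open("data/metadata/scannet_train_gt_box.json") as f:
    gt = json.load(f)
with open("data/metadata/scannet_val_pred_box.json") as f:
    pred = json.load(f)

print(len(gt), len(pred))
key = next(iter(gt))
print(key, type(gt[key]))
```
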
2. Prepare the maximum coverage sampling. First, voxelize each scan's point cloud; the results will be saved at data/metadata/pcd_discrete_0.1.pkl (a toy sketch of the idea follows the command).
python scripts/3d/preprocessing/convert_pcd_to_voxel.py
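
The filename suggests a 0.1 m voxel grid. A toy sketch of the idea (not a reproduction of the project's script): snap each point to its voxel index and keep the unique occupied cells.

```python
# Sketch of discretizing a point cloud into 0.1 m voxels -- the idea
# behind pcd_discrete_0.1.pkl, not a reproduction of the project's script.
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.1) -> np.ndarray:
    """Map an N x 3 point cloud to its unique occupied voxel indices."""
    voxels = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(voxels, axis=0)

points = np.random.rand(10000, 3) * 5.0  # stand-in for a real scan
print(voxelize(points).shape)            # (num_occupied_voxels, 3)
```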

Then we perform the maximum coverage sampling offline, and the results will be saved at data/metadata/scannet_select_frames.json. A sketch of the greedy strategy follows the command.

python scripts/3d/preprocessing/max_coverage_sampling.py
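
Maximum coverage sampling greedily selects the frame that adds the most not-yet-covered voxels, repeating until the budget is spent. The sketch below shows the greedy strategy only; the project's script may differ in scoring and tie-breaking.

```python
# Greedy maximum-coverage frame selection: repeatedly pick the frame whose
# visible voxels add the most uncovered voxels. A sketch of the strategy;
# the project's implementation may differ.
def max_coverage_sampling(frame_voxels: dict, k: int) -> list:
    covered, selected = set(), []
    remaining = dict(frame_voxels)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda f: len(remaining[f] - covered))
        if not remaining[best] - covered:  # no frame adds anything new
            break
        selected.append(best)
        covered |= remaining.pop(best)
    return selected

frames = {"f0": {(0, 0, 0), (0, 1, 0)}, "f1": {(0, 1, 0)}, "f2": {(2, 2, 2)}}
print(max_coverage_sampling(frames, k=2))  # ['f0', 'f2']
```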

## Downstream Benchmarks

1. SQA3D: Download SQA3D and convert the annotations to the LLaVA format using the following script (a sketch of the target format follows this list).
python scripts/3d/preprocessing/process_sqa3d.py
2. ScanQA: Download ScanQA and convert the annotations using the following script.
python scripts/3d/preprocessing/process_scanqa.py
3. ScanRefer: Download ScanRefer, and then run the following command.
python scripts/3d/preprocessing/process_scanrefer.py
4. Scan2Cap: Convert the ScanRefer annotations to the Scan2Cap format.
python scripts/3d/preprocessing/process_scan2cap.py
5. Multi3DRefer: Download Multi3DRefer and process it with the following script.
python scripts/3d/preprocessing/process_multi3drefer.py
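
All of the process_*.py scripts above emit *_llava_style.json files. The exact schema belongs to those scripts; the sketch below only illustrates the general shape of a LLaVA-style conversation record, with hypothetical field names, so check the generated files for the real schema.

```python
# Hypothetical sketch of a LLaVA-style record; field names are assumptions,
# so check the generated *_llava_style.json files for the real schema.
import json

def to_llava_record(scene_id: str, idx: int, question: str, answer: str) -> dict:
    return {
        "id": f"{scene_id}_{idx}",
        "video": scene_id,  # frames are drawn from this scan
        "conversations": [
            {"from": "human", "value": f"<image>\n{question}"},
            {"from": "gpt", "value": answer},
        ],
    }

record = to_llava_record("scene0000_00", 0, "What color is the sofa?", "brown")
print(json.dumps(record, indent=2))
```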