Data Preparation
Preprocessing
The directory should be organized as follows:
Video-3D-LLM # project root
├── data
│ ├── scannet
│ │ ├── scans
│ │ ├── posed_images
│ │ ├── pcd_with_object_aabbs
│ │ └── mask
│ ├── embodiedscan
│ │ ├── embodiedscan_infos_train.pkl
│ │ ├── embodiedscan_infos_val.pkl
│ │ └── embodiedscan_infos_test.pkl
│ ├── metadata
│ │ ├── scannet_select_frames.json
│ │ ├── pcd_discrete_0.1.pkl
│ │ ├── scannet_train_gt_box.json
│ │ └── scannet_val_pred_box.json
│ ├── processed
│ │ ├── multi3drefer_train_llava_style.json
│ │ ├── multi3drefer_val_llava_style.json
│ │ ├── ...
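Before running the preprocessing steps below, a quick sanity check of the layout can save a failed run. A minimal sketch (note that several of these files are only produced by later steps, so missing entries are expected at first):

```python
from pathlib import Path

# Expected layout, mirroring the tree above. Some files are generated by the
# preprocessing steps below, so "MISSING" is normal before those steps run.
ROOT = Path("data")
EXPECTED = [
    "scannet/scans",
    "scannet/posed_images",
    "scannet/pcd_with_object_aabbs",
    "scannet/mask",
    "embodiedscan/embodiedscan_infos_train.pkl",
    "embodiedscan/embodiedscan_infos_val.pkl",
    "metadata/pcd_discrete_0.1.pkl",
    "metadata/scannet_select_frames.json",
]

for rel in EXPECTED:
    path = ROOT / rel
    status = "ok" if path.exists() else "MISSING"
    print(f"{status:8s} {path}")
```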
ScanNet v2
- Download the ScanNet v2 dataset here. The ScanNet folder should look like:
Video-3D-LLM # project root
├── data
│ ├── scannet
│ │ ├── scans
│ │ │ ├── [scene_id]
│ │ │ │ ├── [scene_id]_vh_clean_2.ply
│ │ │ │ ├── [scene_id]_vh_clean_2.0.010000.segs.json
│ │ │ │ ├── [scene_id].aggregation.json
│ │ │ │ ├── [scene_id].txt
│ │ │ │ └── [scene_id].sens
- Extract color images, depth images, and camera parameters using the following script, which is modified from EmbodiedScan.
python scripts/3d/preprocessing/generate_image_scannet.py --fast
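The script populates data/scannet/posed_images with per-frame color, depth, and camera parameters. As a sketch of how these fit together (the file names here are assumptions; check the script for the exact naming), a depth map can be back-projected into a world-frame point cloud:

```python
import numpy as np
from PIL import Image

# Hypothetical file names -- adjust to however generate_image_scannet.py
# names its outputs under data/scannet/posed_images/<scene_id>/.
scene = "data/scannet/posed_images/scene0000_00"
depth = np.asarray(Image.open(f"{scene}/00000.png"))       # uint16 depth map
K = np.loadtxt(f"{scene}/intrinsic.txt")[:3, :3]           # camera intrinsics
pose = np.loadtxt(f"{scene}/00000.txt")                    # camera-to-world, 4x4

# ScanNet depth is stored as uint16 millimeters; convert to meters.
z = depth.astype(np.float32) / 1000.0
v, u = np.nonzero(z)  # pixel coordinates with valid depth
x = (u - K[0, 2]) * z[v, u] / K[0, 0]
y = (v - K[1, 2]) * z[v, u] / K[1, 1]
pts_cam = np.stack([x, y, z[v, u], np.ones_like(x)], axis=1)

# Lift camera-frame points to world coordinates with the per-frame pose.
pts_world = (pose @ pts_cam.T).T[:, :3]
print(pts_world.shape)
```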
- Extract point clouds for each scene.
python scripts/3d/preprocessing/extract_scannet_pcd.py
This will generate the point clouds and object bounding boxes for each scan.
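For reference, the raw colored point cloud can also be read straight from the `_vh_clean_2.ply` mesh vertices, e.g. with `plyfile`:

```python
import numpy as np
from plyfile import PlyData

# Read the reconstructed mesh vertices as a colored point cloud.
ply = PlyData.read("data/scannet/scans/scene0000_00/scene0000_00_vh_clean_2.ply")
v = ply["vertex"]
points = np.stack([v["x"], v["y"], v["z"]], axis=1)           # (N, 3) float
colors = np.stack([v["red"], v["green"], v["blue"]], axis=1)  # (N, 3) uint8
print(points.shape, colors.dtype)
```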
EmbodiedScan
Download the EmbodiedScan data at this link. You need to fill out the official form to get access to the dataset. Decompress the archive, and the directory should be organized as
├── data
│ ├── embodiedscan
│ │ ├── embodiedscan_infos_train.pkl
│ │ ├── embodiedscan_infos_val.pkl
│ │ └── embodiedscan_infos_test.pkl
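A quick peek at one of the pickles confirms the archive decompressed correctly; the schema itself is defined by EmbodiedScan, so this sketch only inspects the top level:

```python
import pickle

# Load the EmbodiedScan annotation file and inspect its top-level structure.
with open("data/embodiedscan/embodiedscan_infos_train.pkl", "rb") as f:
    infos = pickle.load(f)

print(type(infos))
if isinstance(infos, dict):
    print(list(infos.keys()))
```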
Meta Information
- Prepare the object proposals. For the training set, we directly use the ground-truth boxes via the following command.
python scripts/3d/preprocessing/extract_gt_box.py
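Conceptually, the ground-truth instances come from ScanNet's over-segmentation annotations: each `segGroup` in `[scene_id].aggregation.json` lists segment ids, and `[scene_id]_vh_clean_2.0.010000.segs.json` maps every vertex to a segment. A minimal sketch of recovering per-instance points and axis-aligned boxes (the script's actual output schema may differ):

```python
import json
import numpy as np

scan = "data/scannet/scans/scene0000_00/scene0000_00"
seg_idx = np.array(json.load(open(scan + "_vh_clean_2.0.010000.segs.json"))["segIndices"])
groups = json.load(open(scan + ".aggregation.json"))["segGroups"]

# `points` is the (N, 3) vertex array read from the PLY (see the sketch above).
for g in groups:
    member = np.isin(seg_idx, g["segments"])  # vertices of this instance
    pts = points[member]
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    aabb = np.concatenate([(lo + hi) / 2.0, hi - lo])  # (cx, cy, cz, dx, dy, dz)
    print(g["label"], aabb)
```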
For the validation set, we use the object proposals detected by Mask3D. LEO provides the corresponding annotation results here; place them at data/scannet/mask and process them using the following script.
python scripts/3d/preprocessing/extract_pred_box.py
- Prepare the maximum coverage sampling. First, preprocess the voxels for each scan; the results will be saved at data/metadata/pcd_discrete_0.1.pkl.
python scripts/3d/preprocessing/convert_pcd_to_voxel.py
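The voxelization itself is just a discretization of each scene's point cloud on a 0.1 m grid (hence the file name). A minimal sketch of the idea, making no assumption about the script's exact output schema:

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.1) -> set:
    """Map an (N, 3) point cloud to the set of occupied voxel indices."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    return set(map(tuple, np.unique(idx, axis=0)))
```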
Then perform the maximum coverage sampling offline; the results will be saved at data/metadata/scannet_select_frames.json.
python scripts/3d/preprocessing/max_coverage_sampling.py
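Maximum coverage sampling greedily picks, at each step, the frame whose visible voxels add the most not-yet-covered voxels to the running union. A sketch of the greedy loop, where `frame_voxels` (an assumed name) maps each frame to the voxel set it observes:

```python
def max_coverage_sampling(frame_voxels: dict, k: int) -> list:
    """Greedily select up to k frames maximizing the union of covered voxels."""
    covered = set()
    selected = []
    for _ in range(k):
        # Pick the frame that adds the most uncovered voxels.
        best = max(frame_voxels, key=lambda f: len(frame_voxels[f] - covered))
        gain = frame_voxels[best] - covered
        if not gain:  # every remaining frame adds nothing new
            break
        selected.append(best)
        covered |= gain
    return selected
```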
Downstream Benchmarks
- SQA3D: Download SQA3D and convert the annotations to the LLaVA format using the following script (an example of the resulting record format follows below).
python scripts/3d/preprocessing/process_sqa3d.py
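All the conversions in this section emit the same LLaVA-style schema, so one example stands in for the rest: each sample becomes a conversation grounded in the scene's frames. The record below is illustrative; the exact field names are defined by the processing scripts:

```python
# Illustrative LLaVA-style record (field names may differ from the scripts' output).
sample = {
    "id": "sqa3d_000000",
    "video": "scene0000_00",  # frames come from posed_images/<scene_id>
    "conversations": [
        {"from": "human", "value": "<image>\nWhere is the chair relative to the table?"},
        {"from": "gpt", "value": "left"},
    ],
}
```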
- ScanQA: Download ScanQA and convert the annotations using the following script.
python scripts/3d/preprocessing/process_scanqa.py
- ScanRefer: Download ScanRefer, and then run the following command.
python scripts/3d/preprocessing/process_scanrefer.py
- Scan2Cap: Convert the ScanRefer annotations to the Scan2Cap format (a sketch of the mapping follows the command).
python scripts/3d/preprocessing/process_scan2cap.py
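Scan2Cap inverts the ScanRefer annotation: the referring description becomes the target caption for its object. A hedged sketch of the mapping (the input field names follow the public ScanRefer release; the prompt template is an assumption, not the script's actual wording):

```python
def scanrefer_to_scan2cap(rec: dict) -> dict:
    """Turn one ScanRefer record into a captioning sample (illustrative)."""
    return {
        "id": f"scan2cap_{rec['scene_id']}_{rec['object_id']}_{rec['ann_id']}",
        "video": rec["scene_id"],
        "conversations": [
            # Hypothetical prompt; the real template lives in process_scan2cap.py.
            {"from": "human", "value": "<image>\nDescribe the object in the given box."},
            {"from": "gpt", "value": rec["description"]},
        ],
    }
```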
- Multi3DRefer: Download Multi3DRefer, and then run the following command.
python scripts/3d/preprocessing/process_multi3drefer.py