# Data Preparation

## Preprocessing

The directory should be organized as

```
Video-3D-LLM                # project root
├── data
│   ├── scannet
│   │   ├── scans
│   │   ├── posed_images
│   │   ├── pcd_with_object_aabbs
│   │   └── mask
│   ├── embodiedscan
│   │   ├── embodiedscan_infos_train.pkl
│   │   ├── embodiedscan_infos_val.pkl
│   │   └── embodiedscan_infos_test.pkl
│   ├── metadata
│   │   ├── scannet_select_frames.json
│   │   ├── pcd_discrete_0.1.pkl
│   │   ├── scannet_train_gt_box.json
│   │   └── scannet_val_pred_box.json
│   ├── processed
│   │   ├── multi3drefer_train_llava_style.json
│   │   ├── multi3drefer_val_llava_style.json
│   │   ├── ...
```
### ScanNet v2

1. Download the ScanNet v2 dataset [here](http://www.scan-net.org/). The ScanNet folder should look like
```
Video-3D-LLM                # project root
├── data
│   ├── scannet
│   │   ├── scans
│   │   │   ├── [scene_id]
│   │   │   │   ├── [scene_id]_vh_clean_2.ply
│   │   │   │   ├── [scene_id]_vh_clean_2.0.010000.segs.json
│   │   │   │   ├── [scene_id].aggregation.json
│   │   │   │   ├── [scene_id].txt
│   │   │   │   └── [scene_id].sens
```

2. Extract color images, depth images, and camera parameters using the following script, which is modified from [EmbodiedScan](https://github.com/OpenRobotLab/EmbodiedScan/blob/main/embodiedscan/converter/generate_image_scannet.py).
```bash
python scripts/3d/preprocessing/generate_image_scannet.py --fast
```
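Later preprocessing steps consume these posed frames (color, depth, camera intrinsics, and camera-to-world poses). As a rough illustration of how one such frame maps into the scene, the sketch below back-projects a single depth image into world coordinates; the file names under `posed_images` and the millimeter depth scale are assumptions for illustration, not guaranteed by the script.

```python
# Hedged sketch: back-project one extracted depth frame into world coordinates.
# File names under data/scannet/posed_images and the depth scale are assumptions.
import numpy as np
import imageio.v2 as imageio

scene, frame = "scene0000_00", "00000"           # hypothetical identifiers
root = f"data/scannet/posed_images/{scene}"

depth = imageio.imread(f"{root}/{frame}.png").astype(np.float32) / 1000.0  # meters (assumed scale)
intrinsic = np.loadtxt(f"{root}/intrinsic.txt")  # depth camera intrinsics (assumed file name)
pose = np.loadtxt(f"{root}/{frame}.txt")         # 4x4 camera-to-world pose (assumed file name)

h, w = depth.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))
z = depth.reshape(-1)
valid = z > 0
x = (u.reshape(-1) - intrinsic[0, 2]) * z / intrinsic[0, 0]
y = (v.reshape(-1) - intrinsic[1, 2]) * z / intrinsic[1, 1]
cam_pts = np.stack([x, y, z, np.ones_like(z)], axis=0)[:, valid]  # 4 x N homogeneous points
world_pts = (pose @ cam_pts)[:3].T                                # N x 3 points in the world frame
print(world_pts.shape)
```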
3. Extract point clouds for each scene.
```bash
python scripts/3d/preprocessing/extract_scannet_pcd.py
```
This will generate the point clouds and object bounding boxes for each scan.
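For reference, the axis-aligned object boxes stored alongside the point clouds correspond to the per-axis extrema of each object's segmented points. Below is a minimal sketch of that computation; the on-disk format of `pcd_with_object_aabbs` is not assumed here.

```python
# Minimal sketch: axis-aligned bounding box (AABB) from one object's points.
# The actual file format written by extract_scannet_pcd.py is not assumed.
import numpy as np

def object_aabb(points: np.ndarray):
    """points: (N, 3) xyz coordinates of a single segmented object."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    center = (lo + hi) / 2.0
    size = hi - lo
    return center, size  # box center and extents along x/y/z

pts = np.random.rand(200, 3)  # stand-in for one object's segmented points
print(object_aabb(pts))
```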

### EmbodiedScan

Download the EmbodiedScan data from this [link](https://github.com/OpenRobotLab/EmbodiedScan/tree/main/data). You need to fill out the [official form](https://docs.google.com/forms/d/e/1FAIpQLScUXEDTksGiqHZp31j7Zp7zlCNV7p_08uViwP_Nbzfn3g6hhw/viewform) to get access to the dataset. Decompress the data and organize the directory as
```
├── data
│   ├── metadata
│   │   ├── embodiedscan
│   │   │   ├── embodiedscan_infos_train.pkl
│   │   │   ├── embodiedscan_infos_val.pkl
│   │   │   └── embodiedscan_infos_test.pkl
```
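To sanity-check the download, you can unpickle one of the info files and inspect its top-level structure; the sketch below does not assume anything about the keys inside, and the path may need adjusting to wherever you placed the files.

```python
# Quick sanity check of an EmbodiedScan info file (internal structure not assumed).
import pickle

path = "data/embodiedscan/embodiedscan_infos_val.pkl"  # adjust to your layout
with open(path, "rb") as f:
    infos = pickle.load(f)

print(type(infos))
print(list(infos.keys()) if isinstance(infos, dict) else len(infos))
```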

### Meta Information

1. Prepare the object proposals. For the training set, we directly use the ground-truth boxes via the following command.
```bash
python scripts/3d/preprocessing/extract_gt_box.py
```
For the validation set, we use the object proposals detected by Mask3D. LEO provides the corresponding annotations [here](https://huggingface.co/datasets/huangjy-pku/LEO_data/blob/main/mask.zip). Place them at `data/scannet/mask` and process them with the following script.
```bash
python scripts/3d/preprocessing/extract_pred_box.py
```

2. Prepare the maximum coverage sampling. First, preprocess the voxels of each scan for maximum coverage sampling. The results will be saved at `data/metadata/pcd_discrete_0.1.pkl`.
```bash
python scripts/3d/preprocessing/convert_pcd_to_voxel.py
```
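Conceptually, this voxelization snaps each point of a scene to a 0.1 (presumably meter) grid and keeps the set of unique occupied cells. A minimal sketch of that idea; the actual script and the layout of `pcd_discrete_0.1.pkl` may differ.

```python
# Minimal sketch: discretize a scene point cloud into voxel indices at 0.1 resolution.
# The real convert_pcd_to_voxel.py and its output layout may differ.
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.1) -> np.ndarray:
    """points: (N, 3) xyz coordinates -> (M, 3) unique integer voxel indices."""
    idx = np.floor(points / voxel_size).astype(np.int32)
    return np.unique(idx, axis=0)

pts = np.random.rand(10000, 3) * 5.0  # stand-in for one ScanNet scene
print(len(voxelize(pts)), "occupied voxels at 0.1 resolution")
```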
Then perform the maximum coverage sampling offline; the results will be saved at `data/metadata/scannet_select_frames.json`.
```bash
python scripts/3d/preprocessing/max_coverage_sampling.py
```
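Maximum coverage sampling is a greedy, set-cover-style selection: each candidate frame covers the set of scene voxels it observes, and frames are picked one at a time so that every new frame adds as many not-yet-covered voxels as possible. The sketch below shows only that selection loop on hypothetical inputs; the real script additionally handles per-scene I/O and the JSON it writes.

```python
# Minimal sketch of greedy maximum-coverage frame selection.
# frame_voxels maps a frame id to the set of voxel indices it observes
# (hypothetical input; the real script derives this from the voxelized scans).
from typing import Dict, List, Set

def max_coverage_sampling(frame_voxels: Dict[str, Set[int]], num_frames: int) -> List[str]:
    covered: Set[int] = set()
    selected: List[str] = []
    remaining = dict(frame_voxels)
    for _ in range(min(num_frames, len(remaining))):
        # pick the frame that adds the most uncovered voxels
        best = max(remaining, key=lambda fid: len(remaining[fid] - covered))
        selected.append(best)
        covered |= remaining.pop(best)
    return selected

demo = {"f0": {1, 2, 3}, "f1": {3, 4}, "f2": {5}, "f3": {1, 5, 6}}
print(max_coverage_sampling(demo, num_frames=2))  # ['f0', 'f3']
```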

### Downstream Benchmarks

1. SQA3D: Download [SQA3D](https://github.com/SilongYong/SQA3D?tab=readme-ov-file) and convert the annotations to the LLaVA format using the following script.
```bash
python scripts/3d/preprocessing/process_sqa3d.py
```
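The conversion scripts rewrite each benchmark's annotations into LLaVA-style conversation records, i.e. the question as the human turn and the answer as the assistant turn, keyed to a ScanNet scene. The exact fields written by `process_sqa3d.py` are not guaranteed here; the sketch below only illustrates the general shape of such an entry, with assumed key names.

```python
# Illustrative only: one SQA3D QA pair rewritten as a LLaVA-style record.
# Key names ("video", "conversations", ...) are assumptions, not the exact
# output schema of process_sqa3d.py.
import json

def to_llava_style(scene_id: str, situation: str, question: str, answer: str, qid: str) -> dict:
    return {
        "id": qid,
        "video": scene_id,  # the scene whose frames are sampled at training time
        "conversations": [
            {"from": "human", "value": f"<image>\n{situation} {question}"},
            {"from": "gpt", "value": answer},
        ],
    }

record = to_llava_style("scene0050_00", "I am facing the sofa.", "What is on my left?", "table", "sqa3d_0001")
print(json.dumps(record, indent=2))
```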

2. ScanQA: Download [ScanQA](https://github.com/ATR-DBI/ScanQA/blob/main/docs/dataset.md) and convert the annotations using the following script.
```bash
python scripts/3d/preprocessing/process_scanqa.py
```

3. ScanRefer: Download [ScanRefer](https://daveredrum.github.io/ScanRefer/), then run the following command.
```bash
python scripts/3d/preprocessing/process_scanrefer.py
```

4. Scan2Cap: Convert the ScanRefer annotations to the Scan2Cap format.
```bash
python scripts/3d/preprocessing/process_scan2cap.py
```

5. Multi3DRefer: Download [Multi3DRefer](https://github.com/3dlg-hcvc/M3DRef-CLIP) and run the following command.
```bash
python scripts/3d/preprocessing/process_multi3drefer.py
```