# GaussianAnything: arXiv 2024

## Set up the environment (the same env as LN3Diff)

```bash
conda create -n ga python=3.10
conda activate ga
pip install -r requirements.txt # installs the surfel Gaussian (2DGS) environment automatically.
```

Then, install PyTorch3D with:
```bash
pip install git+https://github.com/facebookresearch/pytorch3d.git@stable
```
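
To quickly confirm the environment works, here is a minimal sanity check, assuming the requirements file installed a CUDA-enabled PyTorch build:

```bash
# Minimal sanity check: verify torch, CUDA availability, and pytorch3d all import cleanly.
python -c "import torch, pytorch3d; print(torch.__version__, torch.cuda.is_available(), pytorch3d.__version__)"
```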


### :dromedary_camel: TODO

- [x] Release inference code and checkpoints.
- [x] Release Training code.
- [x] Release pre-extracted latent codes for 3D diffusion training.
- [ ] Release Gradio Demo.
- [ ] Release the evaluation code.
- [ ] Lint the code.


# Inference

Remember to change ```$logdir``` in the bash file accordingly.

To load the checkpoint automatically, please replace ```/mnt/sfs-common/yslan/open-source``` with ```yslan/GaussianAnything/ckpts/checkpoints```.
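
As a convenience, here is a sketch of that replacement applied across the release scripts (an assumption: the hard-coded path only appears under ```shell_scripts/release```; GNU sed syntax):

```bash
# Illustrative one-liner: swap the hard-coded local checkpoint path for the Hugging Face
# repo path in all release bash scripts so checkpoints are fetched automatically.
grep -rl '/mnt/sfs-common/yslan/open-source' shell_scripts/release | \
  xargs sed -i 's#/mnt/sfs-common/yslan/open-source#yslan/GaussianAnything/ckpts/checkpoints#g'
```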



## Text-2-3D:

Please update the captions for 3D generation in ```datasets/caption-forpaper.txt```. To change the number of samples to be generated, please change ```$num_samples``` in the bash file.
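
For example, a minimal prompt file might look like the following (one caption per line is an assumption, and the two captions are placeholders):

```bash
# Write two placeholder captions into the prompt file read by the stage-1 script.
cat > datasets/caption-forpaper.txt <<'EOF'
a wooden treasure chest with metal trims
a cute low-poly fox
EOF
```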

**stage-1**:
```
bash shell_scripts/release/inference/t23d/stage1-t23d.sh
```
Then, set ```$stage_1_output_dir``` to the ```$logdir``` of the above stage.

**stage-2**: 
```
bash shell_scripts/release/inference/t23d/stage2-t23d.sh
```

The results will be dumped to ```./logs/t23d/stage-2```.

## I23D (requires two-stage generation):

Set ```$data_dir``` accordingly. For demo images, please download them from [huggingface.co/yslan/GaussianAnything/demo-img](https://huggingface.co/yslan/GaussianAnything/tree/main/demo-img).
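
One way to fetch the demo images locally (assumes a recent ```huggingface_hub``` CLI is installed; the local target directory is arbitrary):

```bash
# Download only the demo-img folder from the Hugging Face repo; files land under ./assets/demo-img/.
huggingface-cli download yslan/GaussianAnything --include "demo-img/*" --local-dir ./assets
```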

**stage-1**:
```
bash shell_scripts/release/inference/i23d/i23d-stage1.sh
```

Then, set ```$stage_1_output_dir``` to the ```$logdir``` of the above stage.

**stage-2**: 
```
bash shell_scripts/release/inference/i23d/i23d-stage2.sh
```

## 3D VAE Reconstruction:

To encode a 3D asset into the latent point cloud, please download the pre-trained VAE checkpoint from [huggingface.co/yslan/GaussianAnything/ckpts/vae/model_rec1965000.pt](https://huggingface.co/yslan/GaussianAnything/blob/main/ckpts/vae/model_rec1965000.pt) to ```./checkpoint/model_rec1965000.pt```.
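
For example, using Hugging Face's ```resolve``` endpoint for a direct file download:

```bash
# Download the pre-trained VAE checkpoint to the path expected by the inference script.
mkdir -p ./checkpoint
wget -O ./checkpoint/model_rec1965000.pt \
  https://huggingface.co/yslan/GaussianAnything/resolve/main/ckpts/vae/model_rec1965000.pt
```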

Then, run the inference script:

```bash 
bash shell_scripts/release/inference/vae-3d.sh
```

This will encode the multi-view 3D renderings in ```./assets/demo-image-for-i23d/for-vae-reconstruction/Animals/0``` into point cloud-structured latent codes, and export them (along with the 2DGS mesh) to ```./logs/latent_dir/```. The exported latent codes will be used for efficient 3D diffusion training.



# Training (Flow Matching 3D Generation)
All training is conducted on 8 A100 (80 GiB) GPUs with BF16 enabled. For training on V100, please use FP32 training by setting ```--use_amp``` to False in the bash file. Feel free to tune ```$batch_size``` in the bash file to match your VRAM.

To facilitate reproducing the performance, we have uploaded the pre-extracted point cloud-structured latent codes to [huggingface.co/yslan/GaussianAnything/dataset/latent.tar.gz](https://huggingface.co/yslan/GaussianAnything/blob/main/dataset/latent.tar.gz) (34 GiB required). Please download the pre-extracted point cloud latent codes, unzip them, and set ```$mv_latent_dir``` in the bash file accordingly.
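
A sketch of the download-and-extract step (the extraction target directory is arbitrary; ```$mv_latent_dir``` itself is set inside the training bash scripts):

```bash
# Fetch and unpack the pre-extracted point cloud-structured latents (~34 GiB).
wget https://huggingface.co/yslan/GaussianAnything/resolve/main/dataset/latent.tar.gz
mkdir -p ./dataset/latent
tar -xzf latent.tar.gz -C ./dataset/latent
# then set $mv_latent_dir in the training bash file to ./dataset/latent (or wherever you extracted)
```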


## Text to 3D:
Please download the 3D captions from Hugging Face: [huggingface.co/yslan/GaussianAnything/dataset/text_captions_3dtopia.json](https://huggingface.co/yslan/GaussianAnything/blob/main/dataset/text_captions_3dtopia.json), and put the file under ```dataset```.
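
For example, using the same ```resolve``` download pattern as above:

```bash
# Place the 3D caption file under ./dataset as expected by the training scripts.
mkdir -p ./dataset
wget -P ./dataset https://huggingface.co/yslan/GaussianAnything/resolve/main/dataset/text_captions_3dtopia.json
```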


Note that if you want to train on a specific class of Objaverse, just manually change the code at ```datasets/g_buffer_objaverse.py:3043```.

**stage-1 training (point cloud generation)**:

```
bash shell_scripts/release/train/stage2-t23d/t23d-pcd-gen.sh
```

**stage-2 training (point cloud-conditioned KL feature generation)**:

```
bash shell_scripts/release/train/stage2-t23d/t23d-klfeat-gen.sh
```

## (single-view) Image to 3D
Please download the G-buffer dataset first.

**stage-1 training (point cloud generation)**:

```
bash shell_scripts/release/train/stage2-i23d/i23d-pcd-gen.sh
```

**stage-2 training (point cloud-conditioned KL feature generation)**:

```
bash shell_scripts/release/train/stage2-i23d/i23d-klfeat-gen.sh
```

<!-- # Training (3D-aware VAE)
Since the  -->