jadechoghari committed • Commit a0788d8 • Parent(s): 7871a82 • Update README.md

README.md CHANGED
@@ -14,55 +14,38 @@ European Conference on Computer Vision (ECCV), 2024
 
 - [25.07.2024] Release weights and inference code for VFusion3D.
 
-## Results and Comparisons
-
-### 3D Generation Results
-<img src='https://github.com/facebookresearch/vfusion3d/blob/main/images/user.png' width=950>
-
-<img src='https://github.com/facebookresearch/vfusion3d/blob/main/images/gif2.gif' width=950>
-
-<img src='https://github.com/facebookresearch/vfusion3d/blob/main/images/user.png' width=950>
-
-```
-git clone https://github.com/facebookresearch/vfusion3d
-cd vfusion3d
-```
-
-###
-
-```
-source install.sh
-```
-
-## Quick Start
-
--- We put some sample inputs under `assets/40_prompt_images`, which is the 40 MVDream prompt images used in the paper. Results of them are also provided under `results/40_prompt_images_provided`.
-
-###
-
--- You may specify which form of output to generate by setting the flags `--export_video` and `--export_mesh`.
--- Change `--source_path` and `--dump_path` if you want to run it on other image folders.
-
-```
-# Example usages
-# Render a video
-python -m lrm.inferrer --export_video --resume ./checkpoints/vfusion3dckpt
-
-# Export mesh
-python -m lrm.inferrer --export_mesh --resume ./checkpoints/vfusion3dckpt
-```
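
To run the removed commands on a different image folder, the two path flags documented above combine with an export flag; a minimal sketch, with hypothetical input and output folder names:

```
# Hypothetical paths; --source_path and --dump_path are the flags documented above
python -m lrm.inferrer --export_video --source_path ./my_images --dump_path ./my_results --resume ./checkpoints/vfusion3dckpt
```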
+## Quick Start
+
+Getting started with VFusion3D is super easy! 🤗 Here’s how you can use the model with Hugging Face:
+
+### Use a pipeline as a high-level helper
+
+```python
+from transformers import pipeline
+
+pipe = pipeline("feature-extraction", model="jadechoghari/vfusion3d", trust_remote_code=True)
+```
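
The exact input and output format of this custom pipeline is defined by the model's remote code, so treat the call below as a sketch under that assumption; the image path is hypothetical:

```python
# Hypothetical usage: pass an input image to the custom pipeline.
# The accepted input type and the structure of `features` depend on
# the repository's remote code, so verify against the model card.
features = pipe("path/to/input_image.png")
print(type(features))
```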
|
29 |
|
30 |
+
### Load model directly
|
31 |
+
```python
|
32 |
+
from transformers import AutoModel
|
33 |
|
34 |
+
model = AutoModel.from_pretrained("jadechoghari/vfusion3d", trust_remote_code=True)
|
35 |
```
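
The loaded model is a regular PyTorch module, so the usual Transformers housekeeping applies; a minimal sketch (the forward-pass inputs are defined by the model's remote code and are not assumed here):

```python
import torch

# Move the model to GPU if available and switch to inference mode.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()
```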
+
+Check out our [demo app](#) to see VFusion3D in action! 🤗
+
+## Results and Comparisons
+
+### 3D Generation Results
+
+<img src='assets/gif1.gif' width=950>
+
+<img src='assets/gif2.gif' width=950>
+
+### User Study Results
+
+<img src='assets/user.png' width=950>
 
 ## Acknowledgement