NIRVANALAN committed • Commit 8cf9a95 • Parent: 12f0dd0 • update
app.py
CHANGED
@@ -364,7 +364,7 @@ def main(args):
 
 ## How does it work?
 
-LN3Diff is a
+LN3Diff is a native 3D Latent Diffusion Model that supports direct 3D asset generation via diffusion sampling.
 Compared to SDS-based ([DreamFusion](https://dreamfusion3d.github.io/)), multi-view generation-based ([MVDream](https://arxiv.org/abs/2308.16512), [Zero123++](https://github.com/SUDO-AI-3D/zero123plus), [Instant3D](https://instant-3d.github.io/)), and feedforward 3D reconstruction-based methods ([LRM](https://yiconghong.me/LRM/), [InstantMesh](https://github.com/TencentARC/InstantMesh), [LGM](https://github.com/3DTopia/LGM)),
 LN3Diff supports feedforward 3D generation with a unified framework.
 Like the 2D/video AIGC pipeline, LN3Diff first trains a 3D-VAE and then conducts LDM training (text/image conditioned) on the learned latent space. Some related methods from industry ([Shap-E](https://github.com/openai/shap-e), [CLAY](https://github.com/CLAY-3D/OpenCLAY), [Meta 3D Gen](https://arxiv.org/abs/2303.05371)) also follow the same paradigm.
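
To make the two-stage paradigm described in the updated text concrete, here is a minimal sketch of "train a 3D-VAE, then train a conditional diffusion model on its latent space." This is not LN3Diff's actual code: `Toy3DVAE`, `ToyLatentDenoiser`, the voxel input, the shapes, and the linear noising schedule are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class Toy3DVAE(nn.Module):
    """Stage 1 (sketch): compress a 3D asset (a dummy 32^3 voxel grid) into a latent."""
    def __init__(self, in_dim=32 ** 3, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 2 * latent_dim))
        self.decoder = nn.Linear(latent_dim, in_dim)

    def encode(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick

class ToyLatentDenoiser(nn.Module):
    """Stage 2 (sketch): predict the noise added to a latent, given a text/image condition."""
    def __init__(self, latent_dim=256, cond_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, 512),
            nn.SiLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, z_t, t, cond):
        # Concatenate noisy latent, condition embedding, and scalar noise level.
        return self.net(torch.cat([z_t, cond, t[:, None]], dim=-1))

vae, denoiser = Toy3DVAE(), ToyLatentDenoiser()
x = torch.randn(4, 32, 32, 32)   # dummy batch of 3D assets
cond = torch.randn(4, 256)       # dummy text/image condition embeddings

# One LDM training step: noise the (frozen) stage-1 latents and regress the noise.
z = vae.encode(x).detach()                          # VAE is frozen during LDM training
t = torch.rand(4)                                   # noise level in [0, 1]
noise = torch.randn_like(z)
z_t = (1 - t[:, None]) * z + t[:, None] * noise     # simple linear noising schedule
loss = ((denoiser(z_t, t, cond) - noise) ** 2).mean()
loss.backward()
```

At inference time, one would sample a latent by iteratively denoising pure noise with the conditioned denoiser and then decode it with the VAE decoder, which is what "direct 3D asset generation via diffusion sampling" refers to.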