estellea committed
Commit b6cf992
1 Parent(s): 80cf1d0

Update README.md

Files changed (1)
  1. README.md +3 -18
README.md CHANGED
@@ -16,13 +16,8 @@ The LDM3D model was proposed in ["LDM3D: Latent Diffusion Model for 3D"](https:/
 
  LDM3D got accepted to [CVPRW'23](https://cvpr2023.thecvf.com/).
 
-
-
- These datasets were augmented using [Text2Light](https://frozenburning.github.io/projects/text2light/) to create a dataset containing 13852 training samples and 1606 validation samples.
-
- To generate the depth map of those samples, we used [DPT-large](https://github.com/isl-org/MiDaS), and to generate the captions we used [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2).
-
- A demo using this checkpoint has been open-sourced in [this space](https://huggingface.co/spaces/Intel/ldm3d).
+ This checkpoint has been finetuned on panoramic images (see the finetuning details below).
+ A demo using this checkpoint has been open-sourced in [this space](https://huggingface.co/spaces/Intel/ldm3d).
 
  ## Model description
 
@@ -36,7 +31,7 @@ This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that genera
 
  You can use this model to generate an RGB image and a depth map given a text prompt.
  A short video summarizing the approach can be found at [this url](https://t.ly/tdi2), and a VR demo can be found [here](https://www.youtube.com/watch?v=3hbUo-hwAs0).
-
+ A demo is also accessible on [Spaces](https://huggingface.co/spaces/Intel/ldm3d).
 
  ### How to use
 
@@ -63,16 +58,6 @@ This is the result:
  ![ldm3d_results](ldm3d_pano_results.png)
 
 
- ### Limitations and bias
-
- For the image generation, limitations and bias are the same as the ones from [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion-v1-4#limitations).
- For the depth map generation, a first limitation is that we use DPT-large to produce the ground truth; hence, the other limitations and bias are the same as the ones from [DPT](https://huggingface.co/Intel/dpt-large).
-
-
- ## Training data
-
- The LDM3D model was finetuned on a dataset constructed from a subset of the LAION-400M dataset, a large-scale image-caption dataset that contains over 400 million image-caption pairs.
-
  ### Finetuning
 
  This checkpoint finetunes the previous [ldm3d-4c](https://huggingface.co/Intel/ldm3d-4c) on 2 panoramic-images datasets:
 
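The "How to use" section is left collapsed in this diff. For that step, a minimal sketch of loading a checkpoint like this with diffusers' `StableDiffusionLDM3DPipeline`, which returns both an RGB image and a depth map for a text prompt, could look as follows. The checkpoint id `Intel/ldm3d-pano`, the example prompt, and the output resolution are assumptions, not values taken from this commit.

```python
# Minimal sketch (assumed usage): load the panoramic LDM3D checkpoint with diffusers
# and generate an RGB image plus a depth map from a text prompt.
import torch
from diffusers import StableDiffusionLDM3DPipeline

# "Intel/ldm3d-pano" is an assumed checkpoint id for this repository.
pipe = StableDiffusionLDM3DPipeline.from_pretrained(
    "Intel/ldm3d-pano", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

prompt = "360 view of a large bedroom"        # example prompt, not from the commit
output = pipe(prompt, width=1024, height=512)  # 2:1 panoramic resolution (assumption)

# The pipeline output exposes lists of PIL images for both modalities.
rgb_image, depth_image = output.rgb[0], output.depth[0]
rgb_image.save("ldm3d_pano_rgb.jpg")
depth_image.save("ldm3d_pano_depth.png")
```

The `rgb` and `depth` fields of the pipeline output hold PIL images, so they can be saved directly or passed on to downstream RGBD tooling.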