JianyuanWang committed
Commit ef862e7 • Parent(s): 1bfa5fd
debug
app.py
CHANGED
@@ -104,6 +104,9 @@ def vggsfm_demo(
     glbfile = target_dir + "/glbscene.glb"
     glbscene.export(file_obj=glbfile)
 
+    del predictions
+    gc.collect()
+    torch.cuda.empty_cache()
 
     print(input_image)
     print(input_video)
@@ -199,7 +202,7 @@ with gr.Blocks() as demo:
     <li>upload the images (.jpg, .png, etc.), or </li>
     <li>upload a video (.mp4, .mov, etc.) </li>
     </ul>
-    <p>The reconstruction should take <strong> up to
+    <p>The reconstruction should normally take <strong> up to 90 second </strong>. If both images and videos are uploaded, the demo will only reconstruct the uploaded images. By default, we extract <strong> 1 image frame per second from the input video </strong>. To prevent crashes on the Hugging Face space, we currently limit reconstruction to the first 20 image frames. </p>
     <p>SfM methods are designed for <strong> rigid/static reconstruction </strong>. When dealing with dynamic/moving inputs, these methods may still work by focusing on the rigid parts of the scene. However, to ensure high-quality results, it is better to minimize the presence of moving objects in the input data. </p>
     <p>If you meet any problem, feel free to create an issue in our <a href="https://github.com/facebookresearch/vggsfm" target="_blank">GitHub Repo</a> ⭐</p>
     <p>(Please note that running reconstruction on Hugging Face space is slower than on a local machine.) </p>
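The updated description also states that the demo samples 1 image frame per second from an uploaded video and limits reconstruction to the first 20 frames. The diff does not show the extraction code itself, so the sketch below is only a hedged illustration of that sampling policy using OpenCV; the function name and defaults are assumptions, not app.py's actual implementation.

import cv2

def extract_frames(video_path: str, fps_sample: float = 1.0, max_frames: int = 20):
    """Sample roughly `fps_sample` frames per second, stopping at `max_frames`.

    Illustrative only: this mirrors the behaviour described in the updated
    instructions, not code shown in this diff.
    """
    cap = cv2.VideoCapture(video_path)
    video_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
    step = max(int(round(video_fps / fps_sample)), 1)

    frames = []
    index = 0
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)
        index += 1

    cap.release()
    return frames

With the defaults above, a 60-second clip would yield 20 frames (hitting the cap), while a 10-second clip yields about 10.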