LiruiZhao committed on
Commit
30e6374
2 Parent(s): a1fdc74 02e43eb

Merge branch 'main' of https://huggingface.co/spaces/LiruiZhao/Diffree into main

Files changed (4)
  1. .gitattributes +1 -0
  2. README.md +22 -1
  3. app.py +2 -2
  4. video_demo.mp4 +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ video_demo.mp4 filter=lfs diff=lfs merge=lfs -text
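
Side note on this change: the added `.gitattributes` line is the entry that `git lfs track` normally writes, so the new `video_demo.mp4` is stored as an LFS pointer rather than a raw blob. A minimal sketch of the equivalent step driven from Python (the `track_with_lfs` helper is hypothetical; it assumes `git` and `git-lfs` are available on PATH):

```python
# Hypothetical helper: reproduce the .gitattributes entry above with git-lfs.
import subprocess

def track_with_lfs(pattern: str = "video_demo.mp4") -> None:
    # `git lfs track` appends a matching filter line to .gitattributes,
    # so large binaries are committed as LFS pointers instead of raw blobs.
    subprocess.run(["git", "lfs", "track", pattern], check=True)
    subprocess.run(["git", "add", ".gitattributes", pattern], check=True)

if __name__ == "__main__":
    track_with_lfs()
```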
README.md CHANGED
@@ -10,4 +10,25 @@ pinned: false
  license: mit
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Diffree
+
+ <p align="center">
+   <a href="https://arxiv.org/pdf/2407.16982"><u>[📜 Arxiv]</u></a>
+   &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+   <a href="https://github.com/OpenGVLab/Diffree"><u>[🔍 Code]</u></a>
+ </p>
+
+ [Diffree](https://arxiv.org/pdf/2407.16982) is a diffusion model that enables the addition of new objects to images using only text descriptions, seamlessly integrating them with consistent background and spatial context.
+
+ In this repo, we provide the [🤗 Hugging Face demo](https://huggingface.co/spaces/LiruiZhao/Diffree) for Diffree, and you can also download our model via [🤗 Checkpoint](https://huggingface.co/LiruiZhao/Diffree).
+
+ ## Citation
+ If you found this work useful, please consider citing:
+ ```
+ @article{zhao2024diffree,
+   title={Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model},
+   author={Zhao, Lirui and Yang, Tianshuo and Shao, Wenqi and Zhang, Yuxin and Qiao, Yu and Luo, Ping and Zhang, Kaipeng and Ji, Rongrong},
+   journal={arXiv preprint arXiv:2407.16982},
+   year={2024}
+ }
+ ```
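
For readers of the new README: the linked checkpoint repo can be pulled with the standard `huggingface_hub` API. A minimal sketch, assuming only that `LiruiZhao/Diffree` is a public model repo (the print statement is illustrative):

```python
# Minimal sketch: download the Diffree checkpoint repo referenced in the README.
from huggingface_hub import snapshot_download

# Returns the local path of the downloaded snapshot (cached by default).
local_dir = snapshot_download(repo_id="LiruiZhao/Diffree")
print("Checkpoint files downloaded to:", local_dir)
```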
app.py CHANGED
@@ -1,5 +1,6 @@
  from __future__ import annotations

+ import spaces
  import math
  import random
  import sys
@@ -18,7 +19,6 @@ from PIL import Image, ImageOps, ImageFilter
  from torch import autocast
  import cv2
  import imageio
- import spaces

  sys.path.append("./stable_diffusion")

@@ -351,7 +351,7 @@ with gr.Blocks(css="footer {visibility: hidden}") as demo:
  ["Show Image Video", "Close Image Video"],
  value="Close Image Video",
  type="index",
- label="Image Generation Process Selection ()",
+ label="Image Generation Process Selection (close for faster generation)",
  interactive=True,
  )
  decode_image_batch = gr.Number(value=10, precision=0, label="Decode Image Batch (<steps)", interactive=True)
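
Note on the relocated import: on ZeroGPU Spaces, `spaces` is expected to be imported before any CUDA-initializing library such as `torch`, which is presumably why it now sits directly under the `__future__` import. A minimal sketch of the usual pattern (the `generate` function and its body are illustrative, not taken from app.py):

```python
# Sketch of the ZeroGPU pattern: import `spaces` before torch.
import spaces  # first, so ZeroGPU can hook CUDA initialization

import torch

@spaces.GPU  # requests a GPU only while this function runs
def generate(prompt: str) -> torch.Tensor:
    # Illustrative placeholder for the actual Diffree sampling code.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    return torch.zeros(1, device=device)
```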
video_demo.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d4f71dce37b7e62ad467ec5d24004e8714be7e76bf634cd610c1935b03501ca6
+ size 32058066