schirrmacher
committed
Upload folder using huggingface_hub
Files changed:
- .DS_Store +0 -0
- .gitattributes +3 -14
- README.md +3 -63
- app.py +7 -5
- example01.jpeg +3 -0
- example02.jpeg +3 -0
- example03.jpeg +3 -0
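
The commit message says the folder was pushed with `huggingface_hub`. Below is a minimal sketch of how such an upload is typically issued; the local folder path, the target repo id, and the token handling are illustrative assumptions, not details read from this commit.

```python
# Hypothetical reproduction of an "Upload folder using huggingface_hub" commit.
# The folder path and repo id are assumptions for illustration only.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` by default
api.upload_folder(
    folder_path="hf_space",                # local folder to push (assumed)
    repo_id="schirrmacher/ormbg",          # target Space (assumed from the demo link)
    repo_type="space",                     # push to a Space rather than a model repo
    commit_message="Upload folder using huggingface_hub",
)
```

An upload like this is consistent with the .gitattributes changes below, where the newly added JPEG files get Git LFS rules.
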
.DS_Store
CHANGED
Binary files a/.DS_Store and b/.DS_Store differ
.gitattributes
CHANGED
@@ -33,17 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
-
-
-
-dataset/training/im/p_00a4eda7.png filter=lfs diff=lfs merge=lfs -text
-dataset/training/im/p_00a5b702.png filter=lfs diff=lfs merge=lfs -text
-dataset/validation/im/p_00a7a27c.png filter=lfs diff=lfs merge=lfs -text
-examples/image/example01.jpeg filter=lfs diff=lfs merge=lfs -text
-examples/image/example02.jpeg filter=lfs diff=lfs merge=lfs -text
-examples/image/example03.jpeg filter=lfs diff=lfs merge=lfs -text
-examples/image/image01.png filter=lfs diff=lfs merge=lfs -text
-examples/image/image01_no_background.png filter=lfs diff=lfs merge=lfs -text
-hf_space/example01.jpeg filter=lfs diff=lfs merge=lfs -text
-hf_space/example02.jpeg filter=lfs diff=lfs merge=lfs -text
-hf_space/example03.jpeg filter=lfs diff=lfs merge=lfs -text
+example01.jpeg filter=lfs diff=lfs merge=lfs -text
+example02.jpeg filter=lfs diff=lfs merge=lfs -text
+example03.jpeg filter=lfs diff=lfs merge=lfs -text
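
The rules above route the uploaded example images through Git LFS. As a quick sanity check after cloning, a small helper like the (hypothetical) one below can list which patterns in .gitattributes are LFS-tracked:

```python
# List the patterns in .gitattributes that are routed through Git LFS,
# e.g. to confirm the three example images added in this commit are covered.
from pathlib import Path

def lfs_patterns(gitattributes: str = ".gitattributes") -> list[str]:
    patterns = []
    for line in Path(gitattributes).read_text().splitlines():
        parts = line.split()
        # an LFS rule looks like: <pattern> filter=lfs diff=lfs merge=lfs -text
        if parts and "filter=lfs" in parts[1:]:
            patterns.append(parts[0])
    return patterns

if __name__ == "__main__":
    print(lfs_patterns())  # should now include example01.jpeg, example02.jpeg, example03.jpeg
```
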
README.md
CHANGED
@@ -1,73 +1,13 @@
 ---
 title: Open Remove Background Model (ormbg)
-license: apache-2.0
-tags:
-- segmentation
-- remove background
-- background
-- background-removal
-- Pytorch
-pretty_name: Open Remove Background Model
-models:
-- schirrmacher/ormbg
-datasets:
-- schirrmacher/humans
 emoji: 💻
 colorFrom: red
 colorTo: red
 sdk: gradio
 sdk_version: 4.29.0
-app_file:
+app_file: app.py
 pinned: false
+license: apache-2.0
 ---
 
-
-
-[>>> DEMO <<<](https://huggingface.co/spaces/schirrmacher/ormbg)
-
-Join our [Research Discord Group](https://discord.gg/YYZ3D66t)!
-
-![](examples/image/image01_no_background.png)
-
-This model is a **fully open-source background remover** optimized for images with humans. It is based on [Highly Accurate Dichotomous Image Segmentation research](https://github.com/xuebinqin/DIS). The model was trained with the synthetic [Human Segmentation Dataset](https://huggingface.co/datasets/schirrmacher/humans), [P3M-10k](https://paperswithcode.com/dataset/p3m-10k), [PPM-100](https://github.com/ZHKKKe/PPM) and [AIM-500](https://paperswithcode.com/dataset/aim-500).
-
-This model is similar to [RMBG-1.4](https://huggingface.co/briaai/RMBG-1.4), but with open training data/process and commercially free to use.
-
-## Inference
-
-```
-python ormbg/inference.py
-```
-
-## Training
-
-Install dependencies:
-
-```
-conda env create -f environment.yaml
-conda activate ormbg
-```
-
-Replace dummy dataset with [training dataset](https://huggingface.co/datasets/schirrmacher/humans).
-
-```
-python3 ormbg/train_model.py
-```
-
-# Research
-
-I started training the model with synthetic images of the [Human Segmentation Dataset](https://huggingface.co/datasets/schirrmacher/humans) crafted with [LayerDiffuse](https://github.com/layerdiffusion/LayerDiffuse). However, I noticed that the model struggles to perform well on real images.
-
-Synthetic datasets have limitations for achieving great segmentation results. This is because artificial lighting, occlusion, scale or backgrounds create a gap between synthetic and real images. A "model trained solely on synthetic data generated with naïve domain randomization struggles to generalize on the real domain", see [PEOPLESANSPEOPLE: A Synthetic Data Generator for Human-Centric Computer Vision (2022)](https://arxiv.org/pdf/2112.09290).
-
-Latest changes (05/07/2024):
-
-- Added [P3M-10K](https://paperswithcode.com/dataset/p3m-10k) dataset for training and validation
-- Added [AIM-500](https://paperswithcode.com/dataset/aim-500) dataset for training and validation
-- Added [PPM-100](https://github.com/ZHKKKe/PPM) dataset for training and validation
-- Applied [Grid Dropout](https://albumentations.ai/docs/api_reference/augmentations/dropout/grid_dropout/) to make the model smarter
-
-Next steps:
-
-- Expand dataset with synthetic and real images
-- Research on multi-step segmentation/matting by incorporating [ViTMatte](https://github.com/hustvl/ViTMatte)
+Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
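
The README removed here documented inference via `python ormbg/inference.py`. For orientation, here is a hedged sketch of what loading the checkpoint and cutting out a background could look like; the preprocessing, the assumed 1024×1024 input size, and the handling of the network output are guesses, not details taken from the repository's inference script.

```python
# Hedged sketch: load the ORMBG checkpoint and strip the background from one image.
# Preprocessing, input size, and output handling are assumptions; the repository's
# own ormbg/inference.py is the authoritative reference.
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from ormbg import ORMBG  # same model class app.py imports

net = ORMBG()
net.load_state_dict(torch.load("models/ormbg.pth", map_location="cpu"))
net.eval()

image = Image.open("examples/image/example01.jpeg").convert("RGB")
x = torch.from_numpy(np.array(image)).permute(2, 0, 1).float().unsqueeze(0) / 255.0
x = F.interpolate(x, size=(1024, 1024), mode="bilinear")  # assumed network input size

with torch.no_grad():
    out = net(x)
# assumed output format: DIS-style models usually return a list of side outputs,
# with the first map being the foreground probability
mask = out[0][0] if isinstance(out, (list, tuple)) else out
mask = F.interpolate(mask, size=image.size[::-1], mode="bilinear")
mask = (mask - mask.min()) / (mask.max() - mask.min() + 1e-8)

result = image.copy()
result.putalpha(Image.fromarray((mask.squeeze().cpu().numpy() * 255).astype("uint8")))
result.save("example01_no_background.png")
```
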
app.py
CHANGED
@@ -6,7 +6,7 @@ import gradio as gr
 from ormbg import ORMBG
 from PIL import Image
 
-model_path = "models/ormbg.pth"
+model_path = "../models/ormbg.pth"
 
 # Load the model globally but don't send to device yet
 net = ORMBG()
@@ -70,9 +70,9 @@ If you identify cases where the model fails, <a href='https://huggingface.co/sch
 """
 
 examples = [
-    "
-    "
-    "
+    "example1.jpeg",
+    "example2.jpeg",
+    "example3.jpeg",
 ]
 
 demo = gr.Interface(
@@ -85,4 +85,6 @@ demo = gr.Interface(
 )
 
 if __name__ == "__main__":
-    demo.launch(
+    demo.launch(
+        share=False, root_path="../", allowed_paths=["../hf_space", "../models"]
+    )
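
The app.py hunks above only show fragments of the Space. A hedged sketch of how those fragments plausibly fit together follows; the inference function body and the `gr.Interface` input/output components are assumptions, while the model path, `examples` list, and `launch()` arguments mirror the diff.

```python
# Hedged sketch of the Space's Gradio wiring. Only model_path, examples, and the
# launch() arguments are taken from the diff above; the rest is an assumption.
import gradio as gr
import torch
from PIL import Image
from ormbg import ORMBG

model_path = "../models/ormbg.pth"

# Load the model globally but don't send to device yet (as in the diff)
net = ORMBG()
net.load_state_dict(torch.load(model_path, map_location="cpu"))
net.eval()

def remove_background(image: Image.Image) -> Image.Image:
    # placeholder: the real preprocessing/postprocessing lives in the actual app.py
    return image

examples = [
    "example1.jpeg",
    "example2.jpeg",
    "example3.jpeg",
]

demo = gr.Interface(
    fn=remove_background,          # assumed function name
    inputs=gr.Image(type="pil"),   # assumed input component
    outputs=gr.Image(type="pil"),  # assumed output component
    examples=examples,
)

if __name__ == "__main__":
    demo.launch(
        share=False, root_path="../", allowed_paths=["../hf_space", "../models"]
    )
```
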
example01.jpeg
ADDED
Git LFS Details

example02.jpeg
ADDED
Git LFS Details

example03.jpeg
ADDED
Git LFS Details