---
language: en
license: apache-2.0
tags:
- open-vocabulary
- semantic-segmentation
base_model:
- timm/vit_large_patch14_dinov2.lvd142m
- timm/vit_base_patch14_dinov2.lvd142m
---

<div align="center">
<h2>
<span style="color: #FF0078;">Free</span><span style="color: #00509A;">DA</span>: Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation (CVPR 2024) <br>
</h2>
<h3>
<a href="https://lucabarsellotti.github.io/">Luca Barsellotti*</a>
<a href="https://www.robertoamoroso.it/">Roberto Amoroso*</a>
<a href="https://aimagelab.ing.unimore.it/imagelab/person.asp?idpersona=90">Marcella Cornia</a>
<a href="https://www.lorenzobaraldi.com/">Lorenzo Baraldi</a>
<a href="https://aimagelab.ing.unimore.it/imagelab/person.asp?idpersona=1">Rita Cucchiara</a>
</h3>

[Project Page](https://aimagelab.github.io/freeda/) | [Paper](https://arxiv.org/abs/2404.06542) | [Code](https://github.com/aimagelab/freeda)

</div>

<div align="center">
<figure>
<img alt="Qualitative results" src="./src/assets/qualitatives1.png">
</figure>
</div>

## Method

<div align="center">
<figure>
<img alt="FreeDA method" src="./src/assets/inference.png">
</figure>
</div>

<br/>

<details>
<summary> Additional qualitative examples </summary>
<p align="center">
<img alt="Additional qualitative results" src="./src/assets/qualitatives.png" width="800" />
</p>
</details>

<details>
<summary> Additional examples <i>in-the-wild</i> </summary>
<p align="center">
<img alt="In-the-wild examples" src="./src/assets/into_the_wild.png" width="800" />
</p>
</details>

## Installation

```
conda create --name freeda python=3.9
conda activate freeda
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt
```
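
Before running FreeDA, you can optionally verify that the CUDA-enabled PyTorch build was picked up. This is a quick sanity check, not part of the original setup:

```python
# Optional sanity check (not part of the original setup):
# confirms that PyTorch is importable and can see a CUDA device.
import torch

print(torch.__version__)
print(torch.cuda.is_available())  # expected True on a machine with an NVIDIA GPU
```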
## How to use

```python
import freeda
from PIL import Image
import requests
from io import BytesIO

if __name__ == "__main__":
    # Load a FreeDA configuration (DINOv2 ViT-B backbone with CLIP ViT-B).
    fr = freeda.load("dinov2_vitb_clip_vitb")

    # Download two example images.
    response1 = requests.get("https://farm9.staticflickr.com/8306/7926031760_b313dca06a_z.jpg")
    img1 = Image.open(BytesIO(response1.content))
    response2 = requests.get("https://farm3.staticflickr.com/2207/2157810040_4883738d2d_z.jpg")
    img2 = Image.open(BytesIO(response2.content))

    # Define the open vocabulary and the images to segment.
    fr.set_categories(["cat", "table", "pen", "keyboard", "toilet", "wall"])
    fr.set_images([img1, img2])

    # Run segmentation and save one visualization per image.
    segmentation = fr()
    fr.visualize(segmentation, ["plot.png", "plot1.png"])
```
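
The same pipeline works on local images. Below is a minimal sketch using the same API as the example above; the file path `photo.jpg`, the category list, and the output name are placeholders:

```python
import freeda
from PIL import Image

# Minimal sketch for a local image; "photo.jpg" and the categories are
# hypothetical, while the freeda calls mirror the example above.
fr = freeda.load("dinov2_vitb_clip_vitb")
fr.set_categories(["dog", "grass", "sky"])
fr.set_images([Image.open("photo.jpg")])
segmentation = fr()
fr.visualize(segmentation, ["photo_segmented.png"])
```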

If you find FreeDA useful for your work, please cite:

```
@inproceedings{barsellotti2024training,
  title={Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation},
  author={Barsellotti, Luca and Amoroso, Roberto and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
```