---
title: FastSAM
emoji: 🐠
colorFrom: pink
colorTo: indigo
sdk: gradio
sdk_version: 4.36.1
app_file: app_gradio.py
pinned: false
license: apache-2.0
---

# Fast Segment Anything

Official PyTorch implementation of <a href="https://github.com/CASIA-IVA-Lab/FastSAM">Fast Segment Anything</a>.

The **Fast Segment Anything Model (FastSAM)** is a CNN-based Segment Anything Model trained on only 2% of the SA-1B dataset published by the SAM authors. FastSAM achieves performance comparable to SAM at **50× higher run-time speed**.

## Local Setup (Anaconda Environment Recommended)

* Create and activate a new conda environment

```
conda create -n fastsam python=3.11
conda activate fastsam
```

* Install PyTorch 2.5.0 with CUDA 12.4 (a quick verification snippet follows these steps)

```
conda install pytorch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 pytorch-cuda=12.4 -c pytorch -c nvidia
```

* Install the rest of the requirements

```
pip install -r requirements.txt
```
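
To confirm that the CUDA-enabled PyTorch build was installed (rather than a CPU-only wheel), you can run a quick check like the one below; it relies only on `torch` itself:

```
# Quick environment check: prints the installed PyTorch version and
# whether a CUDA device is visible from this environment.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
```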

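With the environment in place, inference can be run outside the Gradio demo as well. The sketch below is a minimal "segment everything" example assuming the upstream CASIA-IVA-Lab/FastSAM package layout (a `FastSAM` model class and a `FastSAMPrompt` helper), a locally downloaded `FastSAM-x.pt` checkpoint, and a hypothetical example image path; adjust the paths and prompt type to match this Space's code in `app_gradio.py`.

```
# Minimal "segment everything" sketch, assuming the upstream FastSAM API
# (FastSAM / FastSAMPrompt) and local checkpoint/image paths.
import torch
from fastsam import FastSAM, FastSAMPrompt

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
IMAGE_PATH = "./images/example.jpg"  # hypothetical example image
model = FastSAM("./FastSAM-x.pt")    # hypothetical checkpoint location

# Run the detector once over the whole image.
everything_results = model(
    IMAGE_PATH, device=DEVICE, retina_masks=True, imgsz=1024, conf=0.4, iou=0.9
)

# Post-process with the "everything" prompt and save a visualization.
prompt_process = FastSAMPrompt(IMAGE_PATH, everything_results, device=DEVICE)
annotations = prompt_process.everything_prompt()
prompt_process.plot(annotations=annotations, output_path="./output/example.jpg")
```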

## License

The model is licensed under the [Apache 2.0 license](LICENSE).


## Acknowledgement

- [Segment Anything](https://segment-anything.com/) provides the SA-1B dataset and the base code.
- [YOLOv8](https://github.com/ultralytics/ultralytics) provides code and pre-trained models.
- [YOLACT](https://arxiv.org/abs/2112.10003) provides a powerful instance segmentation method.
- [Grounded-Segment-Anything](https://huggingface.co/spaces/yizhangliu/Grounded-Segment-Anything) provides a useful web demo template.

## Citing FastSAM

If you find this project useful for your research, please consider citing it with the following BibTeX entry.

```
@misc{zhao2023fast,
      title={Fast Segment Anything}, 
      author={Xu Zhao and Wenchao Ding and Yongqi An and Yinglong Du and Tao Yu and Min Li and Ming Tang and Jinqiao Wang},
      year={2023},
      eprint={2306.12156},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```