Duplicate from facebook/sam-vit-base
Co-authored-by: Younes Belkada <ybelkada@users.noreply.huggingface.co>
- .gitattributes +34 -0
- README.md +120 -0
- config.json +249 -0
- preprocessor_config.json +28 -0
- pytorch_model.bin +3 -0
- tf_model.h5 +3 -0
.gitattributes
ADDED
@@ -0,0 +1,34 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,120 @@
---
license: apache-2.0
duplicated_from: facebook/sam-vit-base
---

# Model Card for Segment Anything Model (SAM) - ViT Base (ViT-B) version

<p>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-architecture.png" alt="Model architecture">
    <em> Detailed architecture of Segment Anything Model (SAM).</em>
</p>


# Table of Contents

0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Citation](#citation)

# TL;DR


[Link to original repository](https://github.com/facebookresearch/segment-anything)

| <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-beancans.png" alt="Snow" width="600" height="600"> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-dog-masks.png" alt="Forest" width="600" height="600"> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-car-seg.png" alt="Mountains" width="600" height="600"> |
|---|---|---|


The **Segment Anything Model (SAM)** produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a [dataset](https://segment-anything.com/dataset/index.html) of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks.
The abstract of the paper states:

> We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at [https://segment-anything.com](https://segment-anything.com) to foster research into foundation models for computer vision.

**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the original [SAM model card](https://github.com/facebookresearch/segment-anything).

# Model Details

The SAM model is made up of the following modules (see the sketch after this list):
 - The `VisionEncoder`: a ViT-based image encoder. It computes the image embeddings using attention on patches of the image. Relative positional embeddings are used.
 - The `PromptEncoder`: generates embeddings for points and bounding boxes.
 - The `MaskDecoder`: a two-way transformer which performs cross attention between the image embedding and the point embeddings, and vice versa. Its outputs are fed to the `Neck`.
 - The `Neck`: predicts the output masks based on the contextualized masks produced by the `MaskDecoder`.
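
As a quick orientation, here is a minimal sketch that loads the checkpoint and reports the size of each sub-module. The attribute names `vision_encoder`, `prompt_encoder` and `mask_decoder` are assumptions about how the Transformers `SamModel` implementation exposes these blocks:

```python
from transformers import SamModel

model = SamModel.from_pretrained("facebook/sam-vit-base")

# Assumed attribute names for the sub-modules described above.
for name in ("vision_encoder", "prompt_encoder", "mask_decoder"):
    module = getattr(model, name)
    n_params = sum(p.numel() for p in module.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
```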

# Usage


## Prompted-Mask-Generation

```python
from PIL import Image
import requests
from transformers import SamModel, SamProcessor

model = SamModel.from_pretrained("facebook/sam-vit-base").to("cuda")
processor = SamProcessor.from_pretrained("facebook/sam-vit-base")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # 2D localization of a window
```


```python
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to("cuda")
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu())
scores = outputs.iou_scores
```
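
The post-processed `masks` contain several candidate masks per input point, and `scores` holds the model's predicted IoU for each of them. As a hedged sketch, assuming a single image with a single input point as in the snippet above, the highest-scoring mask can be selected like this:

```python
# Assumed shapes: scores is (batch, point_batch, 3); masks[0] is (point_batch, 3, H, W).
best_idx = scores[0, 0].argmax().item()
best_mask = masks[0][0, best_idx]  # boolean tensor of shape (H, W)
print("best predicted IoU:", scores[0, 0, best_idx].item())
```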

Among other arguments to generate masks, you can pass 2D locations on the approximate position of your object of interest, a bounding box wrapping the object of interest (the format should be the x, y coordinates of the top left and bottom right points of the bounding box), or a segmentation mask. At the time of writing, passing text as input is not supported by the official model, according to [the official repository](https://github.com/facebookresearch/segment-anything/issues/4#issuecomment-1497626844).
For more details, refer to this notebook, which shows a walkthrough of how to use the model, with a visual example!
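
For illustration, a minimal sketch of prompting with a bounding box instead of a point; the box coordinates below are made-up values for the car image, and `input_boxes` is assumed to follow the same nesting convention as `input_points`:

```python
input_boxes = [[[70, 275, 1725, 850]]]  # hypothetical [x_min, y_min, x_max, y_max] box around the car

inputs = processor(raw_image, input_boxes=input_boxes, return_tensors="pt").to("cuda")
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
```
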
## Automatic-Mask-Generation

The model can be used for generating segmentation masks in a "zero-shot" fashion, given an input image. The model is automatically prompted with a grid of `1024` points which are all fed to the model.

The pipeline is made for automatic mask generation. The following snippet demonstrates how easily you can run it (on any device! Simply feed the appropriate `points_per_batch` argument):
```python
from transformers import pipeline

generator = pipeline("mask-generation", model="facebook/sam-vit-base", device=0, points_per_batch=256)
image_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
outputs = generator(image_url, points_per_batch=256)
```
Now to display the image:
```python
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np


def show_mask(mask, ax, random_color=False):
    # Overlay a single boolean mask on the given axes with a translucent colour.
    if random_color:
        color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
    else:
        color = np.array([30 / 255, 144 / 255, 255 / 255, 0.6])
    h, w = mask.shape[-2:]
    mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1)
    ax.imshow(mask_image)


# `raw_image` is the car image loaded in the first snippet above.
plt.imshow(np.array(raw_image))
ax = plt.gca()
for mask in outputs["masks"]:
    show_mask(mask, ax=ax, random_color=True)
plt.axis("off")
plt.show()
```


# Citation

If you use this model, please use the following BibTeX entry.

```
@article{kirillov2023segany,
  title={Segment Anything},
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal={arXiv:2304.02643},
  year={2023}
}
```
config.json
ADDED
@@ -0,0 +1,249 @@
{
  "_commit_hash": null,
  "_name_or_path": "/tmp/facebook/sam-vit-base",
  "architectures": [
    "SamModel"
  ],
  "initializer_range": 0.02,
  "mask_decoder_config": {
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": null,
    "attention_downsample_rate": 2,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "relu",
    "hidden_size": 256,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "iou_head_depth": 3,
    "iou_head_hidden_dim": 256,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-06,
    "length_penalty": 1.0,
    "max_length": 20,
    "min_length": 0,
    "mlp_dim": 2048,
    "model_type": "",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 8,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_hidden_layers": 2,
    "num_multimask_outputs": 3,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": null,
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": null,
    "torchscript": false,
    "transformers_version": "4.29.0.dev0",
    "typical_p": 1.0,
    "use_bfloat16": false
  },
  "model_type": "sam",
  "prompt_encoder_config": {
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": null,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "gelu",
    "hidden_size": 256,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "image_embedding_size": 64,
    "image_size": 1024,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-06,
    "length_penalty": 1.0,
    "mask_input_channels": 16,
    "max_length": 20,
    "min_length": 0,
    "model_type": "",
    "no_repeat_ngram_size": 0,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_point_embeddings": 4,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": null,
    "patch_size": 16,
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": null,
    "torchscript": false,
    "transformers_version": "4.29.0.dev0",
    "typical_p": 1.0,
    "use_bfloat16": false
  },
  "torch_dtype": "float32",
  "transformers_version": null,
  "vision_config": {
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": null,
    "attention_dropout": 0.0,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "dropout": 0.0,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "global_attn_indexes": [
      2,
      5,
      8,
      11
    ],
    "hidden_act": "gelu",
    "hidden_size": 768,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "image_size": 1024,
    "initializer_factor": 1.0,
    "initializer_range": 1e-10,
    "intermediate_size": 6144,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-06,
    "length_penalty": 1.0,
    "max_length": 20,
    "min_length": 0,
    "mlp_dim": 3072,
    "mlp_ratio": 4.0,
    "model_type": "",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 12,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_channels": 3,
    "num_hidden_layers": 12,
    "num_pos_feats": 128,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_channels": 256,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": null,
    "patch_size": 16,
    "prefix": null,
    "problem_type": null,
    "projection_dim": 512,
    "pruned_heads": {},
    "qkv_bias": true,
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": null,
    "torchscript": false,
    "transformers_version": "4.29.0.dev0",
    "typical_p": 1.0,
    "use_abs_pos": true,
    "use_bfloat16": false,
    "use_rel_pos": true,
    "window_size": 14
  }
}
preprocessor_config.json
ADDED
@@ -0,0 +1,28 @@
{
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_pad": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.485,
    0.456,
    0.406
  ],
  "image_processor_type": "SamImageProcessor",
  "image_std": [
    0.229,
    0.224,
    0.225
  ],
  "pad_size": {
    "height": 1024,
    "width": 1024
  },
  "processor_class": "SamProcessor",
  "resample": 2,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "longest_edge": 1024
  }
}
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a1e860feeb895bc46f704d4faad2a0be739b5dfdca0ebdda520ffbcfb73f348
size 375050165
tf_model.h5
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cfaf0843e7c825c7262261782344a8ea64a6914766ff886bc967198ece733ed5
size 375292824