Add application file
This view is limited to 50 files because it contains too many changes.
- CLIP/.gitignore +10 -0
- CLIP/CLIP.png +0 -0
- CLIP/LICENSE +22 -0
- CLIP/MANIFEST.in +1 -0
- CLIP/README.md +193 -0
- CLIP/clip/__init__.py +1 -0
- CLIP/clip/bpe_simple_vocab_16e6.txt.gz +0 -0
- CLIP/clip/clip.py +221 -0
- CLIP/clip/model.py +432 -0
- CLIP/clip/simple_tokenizer.py +132 -0
- CLIP/data/yfcc100m.md +14 -0
- CLIP/model-card.md +120 -0
- CLIP/notebooks/Interacting_with_CLIP.ipynb +0 -0
- CLIP/notebooks/Prompt_Engineering_for_ImageNet.ipynb +1188 -0
- CLIP/requirements.txt +5 -0
- CLIP/setup.py +21 -0
- CLIP/tests/test_consistency.py +25 -0
- app.py +4 -1
- requirements.txt +0 -9
- steps/temp.txt +0 -0
- taming-transformers/License.txt +19 -0
- taming-transformers/README.md +377 -0
- taming-transformers/assets/birddrawnbyachild.png +0 -0
- taming-transformers/assets/drin.jpg +0 -0
- taming-transformers/assets/faceshq.jpg +0 -0
- taming-transformers/assets/first_stage_mushrooms.png +0 -0
- taming-transformers/assets/first_stage_squirrels.png +0 -0
- taming-transformers/assets/imagenet.png +0 -0
- taming-transformers/assets/lake_in_the_mountains.png +0 -0
- taming-transformers/assets/mountain.jpeg +0 -0
- taming-transformers/assets/stormy.jpeg +0 -0
- taming-transformers/assets/sunset_and_ocean.jpg +0 -0
- taming-transformers/assets/teaser.png +0 -0
- taming-transformers/configs/coco_cond_stage.yaml +49 -0
- taming-transformers/configs/custom_vqgan.yaml +43 -0
- taming-transformers/configs/drin_transformer.yaml +77 -0
- taming-transformers/configs/faceshq_transformer.yaml +61 -0
- taming-transformers/configs/faceshq_vqgan.yaml +42 -0
- taming-transformers/configs/imagenet_vqgan.yaml +42 -0
- taming-transformers/configs/imagenetdepth_vqgan.yaml +41 -0
- taming-transformers/configs/sflckr_cond_stage.yaml +43 -0
- taming-transformers/data/ade20k_examples.txt +30 -0
- taming-transformers/data/ade20k_images/ADE_val_00000123.jpg +0 -0
- taming-transformers/data/ade20k_images/ADE_val_00000125.jpg +0 -0
- taming-transformers/data/ade20k_images/ADE_val_00000126.jpg +0 -0
- taming-transformers/data/ade20k_images/ADE_val_00000203.jpg +0 -0
- taming-transformers/data/ade20k_images/ADE_val_00000262.jpg +0 -0
- taming-transformers/data/ade20k_images/ADE_val_00000287.jpg +0 -0
- taming-transformers/data/ade20k_images/ADE_val_00000289.jpg +0 -0
- taming-transformers/data/ade20k_images/ADE_val_00000303.jpg +0 -0
CLIP/.gitignore
ADDED
@@ -0,0 +1,10 @@
+__pycache__/
+*.py[cod]
+*$py.class
+*.egg-info
+.pytest_cache
+.ipynb_checkpoints
+
+thumbs.db
+.DS_Store
+.idea
CLIP/CLIP.png
ADDED
CLIP/LICENSE
ADDED
@@ -0,0 +1,22 @@
+MIT License
+
+Copyright (c) 2021 OpenAI
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
CLIP/MANIFEST.in
ADDED
@@ -0,0 +1 @@
+include clip/bpe_simple_vocab_16e6.txt.gz
CLIP/README.md
ADDED
@@ -0,0 +1,193 @@
+# CLIP
+
+[[Blog]](https://openai.com/blog/clip/) [[Paper]](https://arxiv.org/abs/2103.00020) [[Model Card]](model-card.md) [[Colab]](https://colab.research.google.com/github/openai/clip/blob/master/notebooks/Interacting_with_CLIP.ipynb)
+
+CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3. We found CLIP matches the performance of the original ResNet50 on ImageNet “zero-shot” without using any of the original 1.28M labeled examples, overcoming several major challenges in computer vision.
+
+
+
+## Approach
+
+![CLIP](CLIP.png)
+
+
+
+## Usage
+
+First, [install PyTorch 1.7.1](https://pytorch.org/get-started/locally/) and torchvision, as well as small additional dependencies, and then install this repo as a Python package. On a CUDA GPU machine, the following will do the trick:
+
+```bash
+$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
+$ pip install ftfy regex tqdm
+$ pip install git+https://github.com/openai/CLIP.git
+```
+
+Replace `cudatoolkit=11.0` above with the appropriate CUDA version on your machine or `cpuonly` when installing on a machine without a GPU.
+
+```python
+import torch
+import clip
+from PIL import Image
+
+device = "cuda" if torch.cuda.is_available() else "cpu"
+model, preprocess = clip.load("ViT-B/32", device=device)
+
+image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
+text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)
+
+with torch.no_grad():
+    image_features = model.encode_image(image)
+    text_features = model.encode_text(text)
+
+    logits_per_image, logits_per_text = model(image, text)
+    probs = logits_per_image.softmax(dim=-1).cpu().numpy()
+
+print("Label probs:", probs)  # prints: [[0.9927937  0.00421068  0.00299572]]
+```
+
+
+## API
+
+The CLIP module `clip` provides the following methods:
+
+#### `clip.available_models()`
+
+Returns the names of the available CLIP models.
+
+#### `clip.load(name, device=..., jit=False)`
+
+Returns the model and the TorchVision transform needed by the model, specified by the model name returned by `clip.available_models()`. It will download the model as necessary. The `name` argument can also be a path to a local checkpoint.
+
+The device to run the model can be optionally specified, and the default is to use the first CUDA device if there is any, otherwise the CPU. When `jit` is `False`, a non-JIT version of the model will be loaded.
+
+#### `clip.tokenize(text: Union[str, List[str]], context_length=77)`
+
+Returns a LongTensor containing tokenized sequences of given text input(s). This can be used as the input to the model.
+
+---
+
+The model returned by `clip.load()` supports the following methods:
+
+#### `model.encode_image(image: Tensor)`
+
+Given a batch of images, returns the image features encoded by the vision portion of the CLIP model.
+
+#### `model.encode_text(text: Tensor)`
+
+Given a batch of text tokens, returns the text features encoded by the language portion of the CLIP model.
+
+#### `model(image: Tensor, text: Tensor)`
+
+Given a batch of images and a batch of text tokens, returns two Tensors, containing the logit scores corresponding to each image and text input. The values are cosine similarities between the corresponding image and text features, times 100.
+
+
+
+## More Examples
+
+### Zero-Shot Prediction
+
+The code below performs zero-shot prediction using CLIP, as shown in Appendix B in the paper. This example takes an image from the [CIFAR-100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), and predicts the most likely labels among the 100 textual labels from the dataset.
+
+```python
+import os
+import clip
+import torch
+from torchvision.datasets import CIFAR100
+
+# Load the model
+device = "cuda" if torch.cuda.is_available() else "cpu"
+model, preprocess = clip.load('ViT-B/32', device)
+
+# Download the dataset
+cifar100 = CIFAR100(root=os.path.expanduser("~/.cache"), download=True, train=False)
+
+# Prepare the inputs
+image, class_id = cifar100[3637]
+image_input = preprocess(image).unsqueeze(0).to(device)
+text_inputs = torch.cat([clip.tokenize(f"a photo of a {c}") for c in cifar100.classes]).to(device)
+
+# Calculate features
+with torch.no_grad():
+    image_features = model.encode_image(image_input)
+    text_features = model.encode_text(text_inputs)
+
+# Pick the top 5 most similar labels for the image
+image_features /= image_features.norm(dim=-1, keepdim=True)
+text_features /= text_features.norm(dim=-1, keepdim=True)
+similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1)
+values, indices = similarity[0].topk(5)
+
+# Print the result
+print("\nTop predictions:\n")
+for value, index in zip(values, indices):
+    print(f"{cifar100.classes[index]:>16s}: {100 * value.item():.2f}%")
+```
+
+The output will look like the following (the exact numbers may be slightly different depending on the compute device):
+
+```
+Top predictions:
+
+           snake: 65.31%
+          turtle: 12.29%
+    sweet_pepper: 3.83%
+          lizard: 1.88%
+       crocodile: 1.75%
+```
+
+Note that this example uses the `encode_image()` and `encode_text()` methods that return the encoded features of given inputs.
+
+
+### Linear-probe evaluation
+
+The example below uses [scikit-learn](https://scikit-learn.org/) to perform logistic regression on image features.
+
+```python
+import os
+import clip
+import torch
+
+import numpy as np
+from sklearn.linear_model import LogisticRegression
+from torch.utils.data import DataLoader
+from torchvision.datasets import CIFAR100
+from tqdm import tqdm
+
+# Load the model
+device = "cuda" if torch.cuda.is_available() else "cpu"
+model, preprocess = clip.load('ViT-B/32', device)
+
+# Load the dataset
+root = os.path.expanduser("~/.cache")
+train = CIFAR100(root, download=True, train=True, transform=preprocess)
+test = CIFAR100(root, download=True, train=False, transform=preprocess)
+
+
+def get_features(dataset):
+    all_features = []
+    all_labels = []
+
+    with torch.no_grad():
+        for images, labels in tqdm(DataLoader(dataset, batch_size=100)):
+            features = model.encode_image(images.to(device))
+
+            all_features.append(features)
+            all_labels.append(labels)
+
+    return torch.cat(all_features).cpu().numpy(), torch.cat(all_labels).cpu().numpy()
+
+# Calculate the image features
+train_features, train_labels = get_features(train)
+test_features, test_labels = get_features(test)
+
+# Perform logistic regression
+classifier = LogisticRegression(random_state=0, C=0.316, max_iter=1000, verbose=1)
+classifier.fit(train_features, train_labels)
+
+# Evaluate using the logistic regression classifier
+predictions = classifier.predict(test_features)
+accuracy = np.mean((test_labels == predictions).astype(float)) * 100.
+print(f"Accuracy = {accuracy:.3f}")
+```
+
+Note that the `C` value should be determined via a hyperparameter sweep using a validation split.
CLIP/clip/__init__.py
ADDED
@@ -0,0 +1 @@
+from .clip import *
CLIP/clip/bpe_simple_vocab_16e6.txt.gz
ADDED
Binary file (1.36 MB)
CLIP/clip/clip.py
ADDED
@@ -0,0 +1,221 @@
+import hashlib
+import os
+import urllib
+import warnings
+from typing import Union, List
+
+import torch
+from PIL import Image
+from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize
+from tqdm import tqdm
+
+from .model import build_model
+from .simple_tokenizer import SimpleTokenizer as _Tokenizer
+
+try:
+    from torchvision.transforms import InterpolationMode
+    BICUBIC = InterpolationMode.BICUBIC
+except ImportError:
+    BICUBIC = Image.BICUBIC
+
+
+if torch.__version__.split(".") < ["1", "7", "1"]:
+    warnings.warn("PyTorch version 1.7.1 or higher is recommended")
+
+
+__all__ = ["available_models", "load", "tokenize"]
+_tokenizer = _Tokenizer()
+
+_MODELS = {
+    "RN50": "https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt",
+    "RN101": "https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt",
+    "RN50x4": "https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt",
+    "RN50x16": "https://openaipublic.azureedge.net/clip/models/52378b407f34354e150460fe41077663dd5b39c54cd0bfd2b27167a4a06ec9aa/RN50x16.pt",
+    "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
+    "ViT-B/16": "https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt",
+}
+
+
+def _download(url: str, root: str = os.path.expanduser("~/.cache/clip")):
+    os.makedirs(root, exist_ok=True)
+    filename = os.path.basename(url)
+
+    expected_sha256 = url.split("/")[-2]
+    download_target = os.path.join(root, filename)
+
+    if os.path.exists(download_target) and not os.path.isfile(download_target):
+        raise RuntimeError(f"{download_target} exists and is not a regular file")
+
+    if os.path.isfile(download_target):
+        if hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256:
+            return download_target
+        else:
+            warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file")
+
+    with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
+        with tqdm(total=int(source.info().get("Content-Length")), ncols=80, unit='iB', unit_scale=True) as loop:
+            while True:
+                buffer = source.read(8192)
+                if not buffer:
+                    break
+
+                output.write(buffer)
+                loop.update(len(buffer))
+
+    if hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256:
+        raise RuntimeError(f"Model has been downloaded but the SHA256 checksum does not match")
+
+    return download_target
+
+
+def _transform(n_px):
+    return Compose([
+        Resize(n_px, interpolation=BICUBIC),
+        CenterCrop(n_px),
+        lambda image: image.convert("RGB"),
+        ToTensor(),
+        Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)),
+    ])
+
+
+def available_models() -> List[str]:
+    """Returns the names of available CLIP models"""
+    return list(_MODELS.keys())
+
+
+def load(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit=False):
+    """Load a CLIP model
+
+    Parameters
+    ----------
+    name : str
+        A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict
+
+    device : Union[str, torch.device]
+        The device to put the loaded model
+
+    jit : bool
+        Whether to load the optimized JIT model or more hackable non-JIT model (default).
+
+    Returns
+    -------
+    model : torch.nn.Module
+        The CLIP model
+
+    preprocess : Callable[[PIL.Image], torch.Tensor]
+        A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input
+    """
+    if name in _MODELS:
+        model_path = _download(_MODELS[name])
+    elif os.path.isfile(name):
+        model_path = name
+    else:
+        raise RuntimeError(f"Model {name} not found; available models = {available_models()}")
+
+    try:
+        # loading JIT archive
+        model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval()
+        state_dict = None
+    except RuntimeError:
+        # loading saved state dict
+        if jit:
+            warnings.warn(f"File {model_path} is not a JIT archive. Loading as a state dict instead")
+            jit = False
+        state_dict = torch.load(model_path, map_location="cpu")
+
+    if not jit:
+        model = build_model(state_dict or model.state_dict()).to(device)
+        if str(device) == "cpu":
+            model.float()
+        return model, _transform(model.visual.input_resolution)
+
+    # patch the device names
+    device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[])
+    device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1]
+
+    def patch_device(module):
+        try:
+            graphs = [module.graph] if hasattr(module, "graph") else []
+        except RuntimeError:
+            graphs = []
+
+        if hasattr(module, "forward1"):
+            graphs.append(module.forward1.graph)
+
+        for graph in graphs:
+            for node in graph.findAllNodes("prim::Constant"):
+                if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"):
+                    node.copyAttributes(device_node)
+
+    model.apply(patch_device)
+    patch_device(model.encode_image)
+    patch_device(model.encode_text)
+
+    # patch dtype to float32 on CPU
+    if str(device) == "cpu":
+        float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[])
+        float_input = list(float_holder.graph.findNode("aten::to").inputs())[1]
+        float_node = float_input.node()
+
+        def patch_float(module):
+            try:
+                graphs = [module.graph] if hasattr(module, "graph") else []
+            except RuntimeError:
+                graphs = []
+
+            if hasattr(module, "forward1"):
+                graphs.append(module.forward1.graph)
+
+            for graph in graphs:
+                for node in graph.findAllNodes("aten::to"):
+                    inputs = list(node.inputs())
+                    for i in [1, 2]:  # dtype can be the second or third argument to aten::to()
+                        if inputs[i].node()["value"] == 5:
+                            inputs[i].node().copyAttributes(float_node)
+
+        model.apply(patch_float)
+        patch_float(model.encode_image)
+        patch_float(model.encode_text)
+
+        model.float()
+
+    return model, _transform(model.input_resolution.item())
+
+
+def tokenize(texts: Union[str, List[str]], context_length: int = 77, truncate: bool = False) -> torch.LongTensor:
+    """
+    Returns the tokenized representation of given input string(s)
+
+    Parameters
+    ----------
+    texts : Union[str, List[str]]
+        An input string or a list of input strings to tokenize
+
+    context_length : int
+        The context length to use; all CLIP models use 77 as the context length
+
+    truncate: bool
+        Whether to truncate the text in case its encoding is longer than the context length
+
+    Returns
+    -------
+    A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]
+    """
+    if isinstance(texts, str):
+        texts = [texts]
+
+    sot_token = _tokenizer.encoder["<|startoftext|>"]
+    eot_token = _tokenizer.encoder["<|endoftext|>"]
+    all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
+    result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
+
+    for i, tokens in enumerate(all_tokens):
+        if len(tokens) > context_length:
+            if truncate:
+                tokens = tokens[:context_length]
+                tokens[-1] = eot_token
+            else:
+                raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
+        result[i, :len(tokens)] = torch.tensor(tokens)
+
+    return result
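A minimal usage sketch for the module above, showing how `available_models()`, `load()`, and `tokenize()` fit together, including the `truncate` flag and the option of passing a local checkpoint path to `load()`. This is an illustrative note from the editor, not part of the diff; the local checkpoint path is hypothetical.

```python
import clip

print(clip.available_models())   # the keys of _MODELS, e.g. ['RN50', ..., 'ViT-B/32', 'ViT-B/16']

# `name` may be a key from _MODELS (downloaded to ~/.cache/clip) or a local file path.
model, preprocess = clip.load("ViT-B/32", device="cpu", jit=False)
# model, preprocess = clip.load("/path/to/ViT-B-32.pt", device="cpu")   # hypothetical local checkpoint

# tokenize() pads to context_length=77; truncate=True avoids the RuntimeError on over-long inputs.
tokens = clip.tokenize(["a very long caption " * 20], truncate=True)
print(tokens.shape)   # torch.Size([1, 77])
```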
CLIP/clip/model.py
ADDED
@@ -0,0 +1,432 @@
+from collections import OrderedDict
+from typing import Tuple, Union
+
+import numpy as np
+import torch
+import torch.nn.functional as F
+from torch import nn
+
+
+class Bottleneck(nn.Module):
+    expansion = 4
+
+    def __init__(self, inplanes, planes, stride=1):
+        super().__init__()
+
+        # all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1
+        self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False)
+        self.bn1 = nn.BatchNorm2d(planes)
+
+        self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
+        self.bn2 = nn.BatchNorm2d(planes)
+
+        self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity()
+
+        self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False)
+        self.bn3 = nn.BatchNorm2d(planes * self.expansion)
+
+        self.relu = nn.ReLU(inplace=True)
+        self.downsample = None
+        self.stride = stride
+
+        if stride > 1 or inplanes != planes * Bottleneck.expansion:
+            # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1
+            self.downsample = nn.Sequential(OrderedDict([
+                ("-1", nn.AvgPool2d(stride)),
+                ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)),
+                ("1", nn.BatchNorm2d(planes * self.expansion))
+            ]))
+
+    def forward(self, x: torch.Tensor):
+        identity = x
+
+        out = self.relu(self.bn1(self.conv1(x)))
+        out = self.relu(self.bn2(self.conv2(out)))
+        out = self.avgpool(out)
+        out = self.bn3(self.conv3(out))
+
+        if self.downsample is not None:
+            identity = self.downsample(x)
+
+        out += identity
+        out = self.relu(out)
+        return out
+
+
+class AttentionPool2d(nn.Module):
+    def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None):
+        super().__init__()
+        self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5)
+        self.k_proj = nn.Linear(embed_dim, embed_dim)
+        self.q_proj = nn.Linear(embed_dim, embed_dim)
+        self.v_proj = nn.Linear(embed_dim, embed_dim)
+        self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim)
+        self.num_heads = num_heads
+
+    def forward(self, x):
+        x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute(2, 0, 1)  # NCHW -> (HW)NC
+        x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0)  # (HW+1)NC
+        x = x + self.positional_embedding[:, None, :].to(x.dtype)  # (HW+1)NC
+        x, _ = F.multi_head_attention_forward(
+            query=x, key=x, value=x,
+            embed_dim_to_check=x.shape[-1],
+            num_heads=self.num_heads,
+            q_proj_weight=self.q_proj.weight,
+            k_proj_weight=self.k_proj.weight,
+            v_proj_weight=self.v_proj.weight,
+            in_proj_weight=None,
+            in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]),
+            bias_k=None,
+            bias_v=None,
+            add_zero_attn=False,
+            dropout_p=0,
+            out_proj_weight=self.c_proj.weight,
+            out_proj_bias=self.c_proj.bias,
+            use_separate_proj_weight=True,
+            training=self.training,
+            need_weights=False
+        )
+
+        return x[0]
+
+
+class ModifiedResNet(nn.Module):
+    """
+    A ResNet class that is similar to torchvision's but contains the following changes:
+    - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool.
+    - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1
+    - The final pooling layer is a QKV attention instead of an average pool
+    """
+
+    def __init__(self, layers, output_dim, heads, input_resolution=224, width=64):
+        super().__init__()
+        self.output_dim = output_dim
+        self.input_resolution = input_resolution
+
+        # the 3-layer stem
+        self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False)
+        self.bn1 = nn.BatchNorm2d(width // 2)
+        self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False)
+        self.bn2 = nn.BatchNorm2d(width // 2)
+        self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False)
+        self.bn3 = nn.BatchNorm2d(width)
+        self.avgpool = nn.AvgPool2d(2)
+        self.relu = nn.ReLU(inplace=True)
+
+        # residual layers
+        self._inplanes = width  # this is a *mutable* variable used during construction
+        self.layer1 = self._make_layer(width, layers[0])
+        self.layer2 = self._make_layer(width * 2, layers[1], stride=2)
+        self.layer3 = self._make_layer(width * 4, layers[2], stride=2)
+        self.layer4 = self._make_layer(width * 8, layers[3], stride=2)
+
+        embed_dim = width * 32  # the ResNet feature dimension
+        self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim)
+
+    def _make_layer(self, planes, blocks, stride=1):
+        layers = [Bottleneck(self._inplanes, planes, stride)]
+
+        self._inplanes = planes * Bottleneck.expansion
+        for _ in range(1, blocks):
+            layers.append(Bottleneck(self._inplanes, planes))
+
+        return nn.Sequential(*layers)
+
+    def forward(self, x):
+        def stem(x):
+            for conv, bn in [(self.conv1, self.bn1), (self.conv2, self.bn2), (self.conv3, self.bn3)]:
+                x = self.relu(bn(conv(x)))
+            x = self.avgpool(x)
+            return x
+
+        x = x.type(self.conv1.weight.dtype)
+        x = stem(x)
+        x = self.layer1(x)
+        x = self.layer2(x)
+        x = self.layer3(x)
+        x = self.layer4(x)
+        x = self.attnpool(x)
+
+        return x
+
+
+class LayerNorm(nn.LayerNorm):
+    """Subclass torch's LayerNorm to handle fp16."""
+
+    def forward(self, x: torch.Tensor):
+        orig_type = x.dtype
+        ret = super().forward(x.type(torch.float32))
+        return ret.type(orig_type)
+
+
+class QuickGELU(nn.Module):
+    def forward(self, x: torch.Tensor):
+        return x * torch.sigmoid(1.702 * x)
+
+
+class ResidualAttentionBlock(nn.Module):
+    def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
+        super().__init__()
+
+        self.attn = nn.MultiheadAttention(d_model, n_head)
+        self.ln_1 = LayerNorm(d_model)
+        self.mlp = nn.Sequential(OrderedDict([
+            ("c_fc", nn.Linear(d_model, d_model * 4)),
+            ("gelu", QuickGELU()),
+            ("c_proj", nn.Linear(d_model * 4, d_model))
+        ]))
+        self.ln_2 = LayerNorm(d_model)
+        self.attn_mask = attn_mask
+
+    def attention(self, x: torch.Tensor):
+        self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
+        return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
+
+    def forward(self, x: torch.Tensor):
+        x = x + self.attention(self.ln_1(x))
+        x = x + self.mlp(self.ln_2(x))
+        return x
+
+
+class Transformer(nn.Module):
+    def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None):
+        super().__init__()
+        self.width = width
+        self.layers = layers
+        self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)])
+
+    def forward(self, x: torch.Tensor):
+        return self.resblocks(x)
+
+
+class VisionTransformer(nn.Module):
+    def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int):
+        super().__init__()
+        self.input_resolution = input_resolution
+        self.output_dim = output_dim
+        self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False)
+
+        scale = width ** -0.5
+        self.class_embedding = nn.Parameter(scale * torch.randn(width))
+        self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width))
+        self.ln_pre = LayerNorm(width)
+
+        self.transformer = Transformer(width, layers, heads)
+
+        self.ln_post = LayerNorm(width)
+        self.proj = nn.Parameter(scale * torch.randn(width, output_dim))
+
+    def forward(self, x: torch.Tensor):
+        x = self.conv1(x)  # shape = [*, width, grid, grid]
+        x = x.reshape(x.shape[0], x.shape[1], -1)  # shape = [*, width, grid ** 2]
+        x = x.permute(0, 2, 1)  # shape = [*, grid ** 2, width]
+        x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1)  # shape = [*, grid ** 2 + 1, width]
+        x = x + self.positional_embedding.to(x.dtype)
+        x = self.ln_pre(x)
+
+        x = x.permute(1, 0, 2)  # NLD -> LND
+        x = self.transformer(x)
+        x = x.permute(1, 0, 2)  # LND -> NLD
+
+        x = self.ln_post(x[:, 0, :])
+
+        if self.proj is not None:
+            x = x @ self.proj
+
+        return x
+
+
+class CLIP(nn.Module):
+    def __init__(self,
+                 embed_dim: int,
+                 # vision
+                 image_resolution: int,
+                 vision_layers: Union[Tuple[int, int, int, int], int],
+                 vision_width: int,
+                 vision_patch_size: int,
+                 # text
+                 context_length: int,
+                 vocab_size: int,
+                 transformer_width: int,
+                 transformer_heads: int,
+                 transformer_layers: int
+                 ):
+        super().__init__()
+
+        self.context_length = context_length
+
+        if isinstance(vision_layers, (tuple, list)):
+            vision_heads = vision_width * 32 // 64
+            self.visual = ModifiedResNet(
+                layers=vision_layers,
+                output_dim=embed_dim,
+                heads=vision_heads,
+                input_resolution=image_resolution,
+                width=vision_width
+            )
+        else:
+            vision_heads = vision_width // 64
+            self.visual = VisionTransformer(
+                input_resolution=image_resolution,
+                patch_size=vision_patch_size,
+                width=vision_width,
+                layers=vision_layers,
+                heads=vision_heads,
+                output_dim=embed_dim
+            )
+
+        self.transformer = Transformer(
+            width=transformer_width,
+            layers=transformer_layers,
+            heads=transformer_heads,
+            attn_mask=self.build_attention_mask()
+        )
+
+        self.vocab_size = vocab_size
+        self.token_embedding = nn.Embedding(vocab_size, transformer_width)
+        self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width))
+        self.ln_final = LayerNorm(transformer_width)
+
+        self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim))
+        self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07))
+
+        self.initialize_parameters()
+
+    def initialize_parameters(self):
+        nn.init.normal_(self.token_embedding.weight, std=0.02)
+        nn.init.normal_(self.positional_embedding, std=0.01)
+
+        if isinstance(self.visual, ModifiedResNet):
+            if self.visual.attnpool is not None:
+                std = self.visual.attnpool.c_proj.in_features ** -0.5
+                nn.init.normal_(self.visual.attnpool.q_proj.weight, std=std)
+                nn.init.normal_(self.visual.attnpool.k_proj.weight, std=std)
+                nn.init.normal_(self.visual.attnpool.v_proj.weight, std=std)
+                nn.init.normal_(self.visual.attnpool.c_proj.weight, std=std)
+
+            for resnet_block in [self.visual.layer1, self.visual.layer2, self.visual.layer3, self.visual.layer4]:
+                for name, param in resnet_block.named_parameters():
+                    if name.endswith("bn3.weight"):
+                        nn.init.zeros_(param)
+
+        proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5)
+        attn_std = self.transformer.width ** -0.5
+        fc_std = (2 * self.transformer.width) ** -0.5
+        for block in self.transformer.resblocks:
+            nn.init.normal_(block.attn.in_proj_weight, std=attn_std)
+            nn.init.normal_(block.attn.out_proj.weight, std=proj_std)
+            nn.init.normal_(block.mlp.c_fc.weight, std=fc_std)
+            nn.init.normal_(block.mlp.c_proj.weight, std=proj_std)
+
+        if self.text_projection is not None:
+            nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5)
+
+    def build_attention_mask(self):
+        # lazily create causal attention mask, with full attention between the vision tokens
+        # pytorch uses additive attention mask; fill with -inf
+        mask = torch.empty(self.context_length, self.context_length)
+        mask.fill_(float("-inf"))
+        mask.triu_(1)  # zero out the lower diagonal
+        return mask
+
+    @property
+    def dtype(self):
+        return self.visual.conv1.weight.dtype
+
+    def encode_image(self, image):
+        return self.visual(image.type(self.dtype))
+
+    def encode_text(self, text):
+        x = self.token_embedding(text).type(self.dtype)  # [batch_size, n_ctx, d_model]
+
+        x = x + self.positional_embedding.type(self.dtype)
+        x = x.permute(1, 0, 2)  # NLD -> LND
+        x = self.transformer(x)
+        x = x.permute(1, 0, 2)  # LND -> NLD
+        x = self.ln_final(x).type(self.dtype)
+
+        # x.shape = [batch_size, n_ctx, transformer.width]
+        # take features from the eot embedding (eot_token is the highest number in each sequence)
+        x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection
+
+        return x
+
+    def forward(self, image, text):
+        image_features = self.encode_image(image)
+        text_features = self.encode_text(text)
+
+        # normalized features
+        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
+        text_features = text_features / text_features.norm(dim=-1, keepdim=True)
+
+        # cosine similarity as logits
+        logit_scale = self.logit_scale.exp()
+        logits_per_image = logit_scale * image_features @ text_features.t()
+        logits_per_text = logit_scale * text_features @ image_features.t()
+
+        # shape = [global_batch_size, global_batch_size]
+        return logits_per_image, logits_per_text
+
+
+def convert_weights(model: nn.Module):
+    """Convert applicable model parameters to fp16"""
+
+    def _convert_weights_to_fp16(l):
+        if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)):
+            l.weight.data = l.weight.data.half()
+            if l.bias is not None:
+                l.bias.data = l.bias.data.half()
+
+        if isinstance(l, nn.MultiheadAttention):
+            for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]:
+                tensor = getattr(l, attr)
+                if tensor is not None:
+                    tensor.data = tensor.data.half()
+
+        for name in ["text_projection", "proj"]:
+            if hasattr(l, name):
+                attr = getattr(l, name)
+                if attr is not None:
+                    attr.data = attr.data.half()
+
+    model.apply(_convert_weights_to_fp16)
+
+
+def build_model(state_dict: dict):
+    vit = "visual.proj" in state_dict
+
+    if vit:
+        vision_width = state_dict["visual.conv1.weight"].shape[0]
+        vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")])
+        vision_patch_size = state_dict["visual.conv1.weight"].shape[-1]
+        grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5)
+        image_resolution = vision_patch_size * grid_size
+    else:
+        counts: list = [len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]]
+        vision_layers = tuple(counts)
+        vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0]
+        output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5)
+        vision_patch_size = None
+        assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0]
+        image_resolution = output_width * 32
+
+    embed_dim = state_dict["text_projection"].shape[1]
+    context_length = state_dict["positional_embedding"].shape[0]
+    vocab_size = state_dict["token_embedding.weight"].shape[0]
+    transformer_width = state_dict["ln_final.weight"].shape[0]
+    transformer_heads = transformer_width // 64
+    transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks")))
+
+    model = CLIP(
+        embed_dim,
+        image_resolution, vision_layers, vision_width, vision_patch_size,
+        context_length, vocab_size, transformer_width, transformer_heads, transformer_layers
+    )
+
+    for key in ["input_resolution", "context_length", "vocab_size"]:
+        if key in state_dict:
+            del state_dict[key]
+
+    convert_weights(model)
+    model.load_state_dict(state_dict)
+    return model.eval()
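A rough illustration of how `build_model()` above is meant to be used outside of `clip.load()`: it infers every architecture hyperparameter from tensor shapes in the state dict, converts the weights to fp16 via `convert_weights()`, and returns the model in eval mode. This sketch is an editor's note, not part of the diff, and the checkpoint filename is hypothetical.

```python
import torch
from clip.model import build_model

# Hypothetical path: a checkpoint saved as a plain state dict (clip.load() also handles JIT archives).
state_dict = torch.load("ViT-B-32-state-dict.pt", map_location="cpu")

model = build_model(state_dict)   # architecture is read off the tensor shapes; returns model.eval() in fp16
model = model.float()             # on CPU, cast back to fp32, as clip.load() does
print(model.context_length, model.vocab_size)
```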
CLIP/clip/simple_tokenizer.py
ADDED
@@ -0,0 +1,132 @@
+import gzip
+import html
+import os
+from functools import lru_cache
+
+import ftfy
+import regex as re
+
+
+@lru_cache()
+def default_bpe():
+    return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
+
+
+@lru_cache()
+def bytes_to_unicode():
+    """
+    Returns list of utf-8 byte and a corresponding list of unicode strings.
+    The reversible bpe codes work on unicode strings.
+    This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
+    When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
+    This is a significant percentage of your normal, say, 32K bpe vocab.
+    To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
+    And avoids mapping to whitespace/control characters the bpe code barfs on.
+    """
+    bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
+    cs = bs[:]
+    n = 0
+    for b in range(2**8):
+        if b not in bs:
+            bs.append(b)
+            cs.append(2**8+n)
+            n += 1
+    cs = [chr(n) for n in cs]
+    return dict(zip(bs, cs))
+
+
+def get_pairs(word):
+    """Return set of symbol pairs in a word.
+    Word is represented as tuple of symbols (symbols being variable-length strings).
+    """
+    pairs = set()
+    prev_char = word[0]
+    for char in word[1:]:
+        pairs.add((prev_char, char))
+        prev_char = char
+    return pairs
+
+
+def basic_clean(text):
+    text = ftfy.fix_text(text)
+    text = html.unescape(html.unescape(text))
+    return text.strip()
+
+
+def whitespace_clean(text):
+    text = re.sub(r'\s+', ' ', text)
+    text = text.strip()
+    return text
+
+
+class SimpleTokenizer(object):
+    def __init__(self, bpe_path: str = default_bpe()):
+        self.byte_encoder = bytes_to_unicode()
+        self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
+        merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
+        merges = merges[1:49152-256-2+1]
+        merges = [tuple(merge.split()) for merge in merges]
+        vocab = list(bytes_to_unicode().values())
+        vocab = vocab + [v+'</w>' for v in vocab]
+        for merge in merges:
+            vocab.append(''.join(merge))
+        vocab.extend(['<|startoftext|>', '<|endoftext|>'])
+        self.encoder = dict(zip(vocab, range(len(vocab))))
+        self.decoder = {v: k for k, v in self.encoder.items()}
+        self.bpe_ranks = dict(zip(merges, range(len(merges))))
+        self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
+        self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
+
+    def bpe(self, token):
+        if token in self.cache:
+            return self.cache[token]
+        word = tuple(token[:-1]) + ( token[-1] + '</w>',)
+        pairs = get_pairs(word)
+
+        if not pairs:
+            return token+'</w>'
+
+        while True:
+            bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
+            if bigram not in self.bpe_ranks:
+                break
+            first, second = bigram
+            new_word = []
+            i = 0
+            while i < len(word):
+                try:
+                    j = word.index(first, i)
+                    new_word.extend(word[i:j])
+                    i = j
+                except:
+                    new_word.extend(word[i:])
+                    break
+
+                if word[i] == first and i < len(word)-1 and word[i+1] == second:
+                    new_word.append(first+second)
+                    i += 2
+                else:
+                    new_word.append(word[i])
+                    i += 1
+            new_word = tuple(new_word)
+            word = new_word
+            if len(word) == 1:
+                break
+            else:
+                pairs = get_pairs(word)
+        word = ' '.join(word)
+        self.cache[token] = word
+        return word
+
+    def encode(self, text):
+        bpe_tokens = []
+        text = whitespace_clean(basic_clean(text)).lower()
+        for token in re.findall(self.pat, text):
+            token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
+            bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
+        return bpe_tokens
+
+    def decode(self, tokens):
+        text = ''.join([self.decoder[token] for token in tokens])
+        text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
+        return text
ADDED
@@ -0,0 +1,14 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
# The YFCC100M Subset
|
2 |
+
|
3 |
+
In the paper, we performed a dataset ablation using a subset of the YFCC100M dataset and showed that the performance remained largely similar.
|
4 |
+
|
5 |
+
The subset contains 14,829,396 images, about 15% of the full dataset, which have been filtered to only keep those with natural languag titles and/or descriptions in English.
|
6 |
+
|
7 |
+
We provide the list of (line number, photo identifier, photo hash) of each image contained in this subset. These correspond to the first three columns in the dataset's metadata TSV file.
|
8 |
+
|
9 |
+
```
|
10 |
+
wget https://openaipublic.azureedge.net/clip/data/yfcc100m_subset_data.tsv.bz2
|
11 |
+
bunzip2 yfcc100m_subset_data.tsv.bz2
|
12 |
+
```
|
13 |
+
|
14 |
+
Use of the underlying media files is subject to the Creative Commons licenses chosen by their creators/uploaders. For more information about the YFCC100M dataset, visit [the official website](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/).
|
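For readers who want to inspect the subset list, a sketch of loading it after the `wget`/`bunzip2` step above. This is an editor's illustration, not part of the diff; the file is a headerless TSV, and the column names below are assumptions based on the three-column description (pandas is not a dependency of this repo).

```python
import pandas as pd

# Assumed column names for the headerless TSV described above.
subset = pd.read_csv("yfcc100m_subset_data.tsv", sep="\t", header=None,
                     names=["line_number", "photo_id", "photo_hash"])
print(len(subset))      # expected to be 14,829,396
print(subset.head())
```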
CLIP/model-card.md
ADDED
@@ -0,0 +1,120 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
# Model Card: CLIP
|
2 |
+
|
3 |
+
Inspired by [Model Cards for Model Reporting (Mitchell et al.)](https://arxiv.org/abs/1810.03993) and [Lessons from Archives (Jo & Gebru)](https://arxiv.org/pdf/1912.10389.pdf), we’re providing some accompanying information about the multimodal model.
|
4 |
+
|
5 |
+
## Model Details
|
6 |
+
|
7 |
+
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
|
8 |
+
|
9 |
+
### Model Date
|
10 |
+
|
11 |
+
January 2021
|
12 |
+
|
13 |
+
### Model Type
|
14 |
+
|
15 |
+
The base model uses a ResNet50 with several modifications as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer.
|
16 |
+
|
17 |
+
### Model Versions
|
18 |
+
|
19 |
+
Initially, we’ve released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32, along with the RN50 model, using the architecture equivalent to ResNet-50.
|
20 |
+
|
21 |
+
As part of the staged release process, we have also released the RN101 model, as well as RN50x4, a RN50 scaled up 4x according to the [EfficientNet](https://arxiv.org/abs/1905.11946) scaling rule. In July 2021, we additionally released the RN50x16 and ViT-B/16 models.
|
22 |
+
|
23 |
+
Please see the paper linked below for further details about their specification.
|
24 |
+
|
25 |
+
### Documents
|
26 |
+
|
27 |
+
- [Blog Post](https://openai.com/blog/clip/)
|
28 |
+
- [CLIP Paper](https://arxiv.org/abs/2103.00020)
|
29 |
+
|
30 |
+
|
31 |
+
|
32 |
+
## Model Use
|
33 |
+
|
34 |
+
### Intended Use
|
35 |
+
|
36 |
+
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
|
37 |
+
|
38 |
+
#### Primary intended uses
|
39 |
+
|
40 |
+
The primary intended users of these models are AI researchers.
|
41 |
+
|
42 |
+
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
|
43 |
+
|
44 |
+
### Out-of-Scope Use Cases
|
45 |
+
|
46 |
+
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
|
47 |
+
|
48 |
+
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
|
49 |
+
|
50 |
+
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
|
51 |
+
|
52 |
+
|
53 |
+
|
54 |
+
## Data
|
55 |
+
|
56 |
+
The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.
|
57 |
+
|
58 |
+
### Data Mission Statement
|
59 |
+
|
60 |
+
Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
|
61 |
+
|
62 |
+
|
63 |
+
|
64 |
+
## Performance and Limitations
|
65 |
+
|
66 |
+
### Performance
|
67 |
+
|
68 |
+
We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:
|
69 |
+
|
70 |
+
- Food101
|
71 |
+
- CIFAR10
|
72 |
+
- CIFAR100
|
73 |
+
- Birdsnap
|
74 |
+
- SUN397
|
75 |
+
- Stanford Cars
|
76 |
+
- FGVC Aircraft
|
77 |
+
- VOC2007
|
78 |
+
- DTD
|
79 |
+
- Oxford-IIIT Pet dataset
|
80 |
+
- Caltech101
|
81 |
+
- Flowers102
|
82 |
+
- MNIST
|
83 |
+
- SVHN
|
84 |
+
- IIIT5K
|
85 |
+
- Hateful Memes
|
86 |
+
- SST-2
|
87 |
+
- UCF101
|
88 |
+
- Kinetics700
|
89 |
+
- Country211
|
90 |
+
- CLEVR Counting
|
91 |
+
- KITTI Distance
|
92 |
+
- STL-10
|
93 |
+
- RareAct
|
94 |
+
- Flickr30k
|
95 |
+
- MSCOCO
|
96 |
+
- ImageNet
|
97 |
+
- ImageNet-A
|
98 |
+
- ImageNet-R
|
99 |
+
- ImageNet Sketch
|
100 |
+
- ObjectNet (ImageNet Overlap)
|
101 |
+
- Youtube-BB
|
102 |
+
- ImageNet-Vid
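
As an illustration of how a zero-shot evaluation of this kind can be run, below is a minimal sketch on CIFAR-100 (one of the datasets above), assuming the `clip` package and `torchvision` are installed. The single prompt template and simple evaluation loop are simplifying assumptions, not the exact protocol used in the paper, which ensembles many prompt templates per class.

```python
# Minimal zero-shot evaluation sketch on CIFAR-100 (illustrative; not the paper's exact protocol).
import clip
import torch
from torchvision.datasets import CIFAR100

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# CIFAR-100 test set, preprocessed with CLIP's own transform.
dataset = CIFAR100(root="./data", download=True, train=False, transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=256, num_workers=2)

# Build zero-shot classifier weights from a single prompt template (the paper ensembles many).
prompts = [f"a photo of a {c}." for c in dataset.classes]
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize(prompts).to(device))
    text_features /= text_features.norm(dim=-1, keepdim=True)

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        image_features = model.encode_image(images.to(device))
        image_features /= image_features.norm(dim=-1, keepdim=True)
        preds = (image_features @ text_features.T).argmax(dim=-1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()

print(f"Zero-shot top-1 accuracy: {correct / total:.2%}")
```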
|
103 |
+
|
104 |
+
### Limitations
|
105 |
+
|
106 |
+
CLIP and our analysis of it have a number of limitations. CLIP currently struggles with certain tasks such as fine-grained classification and counting objects. CLIP also poses issues with regard to fairness and bias, which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP has an important limitation: in many cases we have used linear probes to evaluate the performance of CLIP, and there is evidence suggesting that linear probes can underestimate model performance.
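
For concreteness, a linear probe of the sort referred to here freezes CLIP's image encoder, extracts features for a labelled dataset, and fits a logistic regression classifier on top. The sketch below is a simplified illustration; CIFAR-10 and the fixed regularization strength are assumptions, not the paper's exact evaluation setup.

```python
# Linear-probe sketch: logistic regression on frozen CLIP image features.
# CIFAR-10 and the fixed regularization strength C are illustrative assumptions,
# not the hyperparameter sweep used in the paper.
import clip
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from torchvision.datasets import CIFAR10

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def extract_features(dataset):
    """Encode every image with the frozen CLIP image encoder."""
    features, labels = [], []
    loader = torch.utils.data.DataLoader(dataset, batch_size=256)
    with torch.no_grad():
        for images, targets in loader:
            features.append(model.encode_image(images.to(device)).cpu().numpy())
            labels.append(targets.numpy())
    return np.concatenate(features), np.concatenate(labels)

train_set = CIFAR10(root="./data", download=True, train=True, transform=preprocess)
test_set = CIFAR10(root="./data", download=True, train=False, transform=preprocess)
train_x, train_y = extract_features(train_set)
test_x, test_y = extract_features(test_set)

# Fit the probe on frozen features; only this linear layer is trained.
probe = LogisticRegression(C=0.316, max_iter=1000)
probe.fit(train_x, train_y)
print("Linear-probe accuracy:", probe.score(test_x, test_y))
```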
|
107 |
+
|
108 |
+
### Bias and Fairness
|
109 |
+
|
110 |
+
We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details are captured in the Broader Impacts section of the paper.)
|
111 |
+
|
112 |
+
We also tested the performance of CLIP on gender, race, and age classification using the Fairface dataset (we default to the race categories as they are constructed in the Fairface dataset) in order to assess the quality of performance across different demographics. We found accuracy >96% across all races for gender classification, with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of these evaluations of gender, race, and age classification, as well as of denigration harms, is simply to evaluate the performance of the model across people and to surface potential risks, not to demonstrate endorsement of or enthusiasm for such tasks.
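
For reference, an audit of this kind amounts to zero-shot classification of face images into a fixed set of categories, followed by breaking accuracy down per demographic subgroup. The sketch below is purely illustrative of the audit procedure: `face_image_paths` is a hypothetical input, the prompts are assumptions, and the classification is shown only to surface performance disparities, not as an endorsed use.

```python
# Audit-style sketch: zero-shot classification of face images into fixed demographic
# categories, used only to measure accuracy disparities across subgroups (not an
# endorsement of such tasks). `face_image_paths` and the prompts are hypothetical.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical category prompts; the exact prompt wording strongly affects results.
category_prompts = ["a photo of a man", "a photo of a woman"]
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize(category_prompts).to(device))
    text_features /= text_features.norm(dim=-1, keepdim=True)

def classify(image_path):
    """Return the index of the highest-scoring category prompt for one image."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    with torch.no_grad():
        image_features = model.encode_image(image)
        image_features /= image_features.norm(dim=-1, keepdim=True)
        return (image_features @ text_features.T).argmax(dim=-1).item()

# Hypothetical usage: predictions = [classify(p) for p in face_image_paths]
# Accuracy is then broken down per demographic subgroup to surface disparities.
```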
|
113 |
+
|
114 |
+
|
115 |
+
|
116 |
+
## Feedback
|
117 |
+
|
118 |
+
### Where to send questions or comments about the model
|
119 |
+
|
120 |
+
Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
|
CLIP/notebooks/Interacting_with_CLIP.ipynb
ADDED
The diff for this file is too large to render.
See raw diff
|
|
CLIP/notebooks/Prompt_Engineering_for_ImageNet.ipynb
ADDED
@@ -0,0 +1,1188 @@
|
1 |
+
{
|
2 |
+
"nbformat": 4,
|
3 |
+
"nbformat_minor": 0,
|
4 |
+
"metadata": {
|
5 |
+
"colab": {
|
6 |
+
"name": "Prompt Engineering for ImageNet.ipynb",
|
7 |
+
"provenance": [],
|
8 |
+
"collapsed_sections": []
|
9 |
+
},
|
10 |
+
"kernelspec": {
|
11 |
+
"name": "python3",
|
12 |
+
"display_name": "Python 3"
|
13 |
+
},
|
14 |
+
"accelerator": "GPU",
|
15 |
+
"widgets": {
|
16 |
+
"application/vnd.jupyter.widget-state+json": {
|
17 |
+
"4e3a3f83649f45f8bef3434980634664": {
|
18 |
+
"model_module": "@jupyter-widgets/controls",
|
19 |
+
"model_name": "HBoxModel",
|
20 |
+
"state": {
|
21 |
+
"_view_name": "HBoxView",
|
22 |
+
"_dom_classes": [],
|
23 |
+
"_model_name": "HBoxModel",
|
24 |
+
"_view_module": "@jupyter-widgets/controls",
|
25 |
+
"_model_module_version": "1.5.0",
|
26 |
+
"_view_count": null,
|
27 |
+
"_view_module_version": "1.5.0",
|
28 |
+
"box_style": "",
|
29 |
+
"layout": "IPY_MODEL_f066bdb766664c788ba1e9de8d311e22",
|
30 |
+
"_model_module": "@jupyter-widgets/controls",
|
31 |
+
"children": [
|
32 |
+
"IPY_MODEL_4e7a7427d28a4ae684e0be4548eb9944",
|
33 |
+
"IPY_MODEL_cc9dc019c1334a46b2558ffa6c0dd6e6"
|
34 |
+
]
|
35 |
+
}
|
36 |
+
},
|
37 |
+
"f066bdb766664c788ba1e9de8d311e22": {
|
38 |
+
"model_module": "@jupyter-widgets/base",
|
39 |
+
"model_name": "LayoutModel",
|
40 |
+
"state": {
|
41 |
+
"_view_name": "LayoutView",
|
42 |
+
"grid_template_rows": null,
|
43 |
+
"right": null,
|
44 |
+
"justify_content": null,
|
45 |
+
"_view_module": "@jupyter-widgets/base",
|
46 |
+
"overflow": null,
|
47 |
+
"_model_module_version": "1.2.0",
|
48 |
+
"_view_count": null,
|
49 |
+
"flex_flow": null,
|
50 |
+
"width": null,
|
51 |
+
"min_width": null,
|
52 |
+
"border": null,
|
53 |
+
"align_items": null,
|
54 |
+
"bottom": null,
|
55 |
+
"_model_module": "@jupyter-widgets/base",
|
56 |
+
"top": null,
|
57 |
+
"grid_column": null,
|
58 |
+
"overflow_y": null,
|
59 |
+
"overflow_x": null,
|
60 |
+
"grid_auto_flow": null,
|
61 |
+
"grid_area": null,
|
62 |
+
"grid_template_columns": null,
|
63 |
+
"flex": null,
|
64 |
+
"_model_name": "LayoutModel",
|
65 |
+
"justify_items": null,
|
66 |
+
"grid_row": null,
|
67 |
+
"max_height": null,
|
68 |
+
"align_content": null,
|
69 |
+
"visibility": null,
|
70 |
+
"align_self": null,
|
71 |
+
"height": null,
|
72 |
+
"min_height": null,
|
73 |
+
"padding": null,
|
74 |
+
"grid_auto_rows": null,
|
75 |
+
"grid_gap": null,
|
76 |
+
"max_width": null,
|
77 |
+
"order": null,
|
78 |
+
"_view_module_version": "1.2.0",
|
79 |
+
"grid_template_areas": null,
|
80 |
+
"object_position": null,
|
81 |
+
"object_fit": null,
|
82 |
+
"grid_auto_columns": null,
|
83 |
+
"margin": null,
|
84 |
+
"display": null,
|
85 |
+
"left": null
|
86 |
+
}
|
87 |
+
},
|
88 |
+
"4e7a7427d28a4ae684e0be4548eb9944": {
|
89 |
+
"model_module": "@jupyter-widgets/controls",
|
90 |
+
"model_name": "FloatProgressModel",
|
91 |
+
"state": {
|
92 |
+
"_view_name": "ProgressView",
|
93 |
+
"style": "IPY_MODEL_285c877d4f644f3a8a58c4eb5948101c",
|
94 |
+
"_dom_classes": [],
|
95 |
+
"description": "100%",
|
96 |
+
"_model_name": "FloatProgressModel",
|
97 |
+
"bar_style": "success",
|
98 |
+
"max": 1000,
|
99 |
+
"_view_module": "@jupyter-widgets/controls",
|
100 |
+
"_model_module_version": "1.5.0",
|
101 |
+
"value": 1000,
|
102 |
+
"_view_count": null,
|
103 |
+
"_view_module_version": "1.5.0",
|
104 |
+
"orientation": "horizontal",
|
105 |
+
"min": 0,
|
106 |
+
"description_tooltip": null,
|
107 |
+
"_model_module": "@jupyter-widgets/controls",
|
108 |
+
"layout": "IPY_MODEL_075d6545e02e419ca565589eb5ffc318"
|
109 |
+
}
|
110 |
+
},
|
111 |
+
"cc9dc019c1334a46b2558ffa6c0dd6e6": {
|
112 |
+
"model_module": "@jupyter-widgets/controls",
|
113 |
+
"model_name": "HTMLModel",
|
114 |
+
"state": {
|
115 |
+
"_view_name": "HTMLView",
|
116 |
+
"style": "IPY_MODEL_53f9106c80e84d5b8c3ec96162d1db98",
|
117 |
+
"_dom_classes": [],
|
118 |
+
"description": "",
|
119 |
+
"_model_name": "HTMLModel",
|
120 |
+
"placeholder": "",
|
121 |
+
"_view_module": "@jupyter-widgets/controls",
|
122 |
+
"_model_module_version": "1.5.0",
|
123 |
+
"value": " 1000/1000 [01:09<00:00, 14.35it/s]",
|
124 |
+
"_view_count": null,
|
125 |
+
"_view_module_version": "1.5.0",
|
126 |
+
"description_tooltip": null,
|
127 |
+
"_model_module": "@jupyter-widgets/controls",
|
128 |
+
"layout": "IPY_MODEL_19c57d99e7c44cbda508ce558fde435d"
|
129 |
+
}
|
130 |
+
},
|
131 |
+
"285c877d4f644f3a8a58c4eb5948101c": {
|
132 |
+
"model_module": "@jupyter-widgets/controls",
|
133 |
+
"model_name": "ProgressStyleModel",
|
134 |
+
"state": {
|
135 |
+
"_view_name": "StyleView",
|
136 |
+
"_model_name": "ProgressStyleModel",
|
137 |
+
"description_width": "initial",
|
138 |
+
"_view_module": "@jupyter-widgets/base",
|
139 |
+
"_model_module_version": "1.5.0",
|
140 |
+
"_view_count": null,
|
141 |
+
"_view_module_version": "1.2.0",
|
142 |
+
"bar_color": null,
|
143 |
+
"_model_module": "@jupyter-widgets/controls"
|
144 |
+
}
|
145 |
+
},
|
146 |
+
"075d6545e02e419ca565589eb5ffc318": {
|
147 |
+
"model_module": "@jupyter-widgets/base",
|
148 |
+
"model_name": "LayoutModel",
|
149 |
+
"state": {
|
150 |
+
"_view_name": "LayoutView",
|
151 |
+
"grid_template_rows": null,
|
152 |
+
"right": null,
|
153 |
+
"justify_content": null,
|
154 |
+
"_view_module": "@jupyter-widgets/base",
|
155 |
+
"overflow": null,
|
156 |
+
"_model_module_version": "1.2.0",
|
157 |
+
"_view_count": null,
|
158 |
+
"flex_flow": null,
|
159 |
+
"width": null,
|
160 |
+
"min_width": null,
|
161 |
+
"border": null,
|
162 |
+
"align_items": null,
|
163 |
+
"bottom": null,
|
164 |
+
"_model_module": "@jupyter-widgets/base",
|
165 |
+
"top": null,
|
166 |
+
"grid_column": null,
|
167 |
+
"overflow_y": null,
|
168 |
+
"overflow_x": null,
|
169 |
+
"grid_auto_flow": null,
|
170 |
+
"grid_area": null,
|
171 |
+
"grid_template_columns": null,
|
172 |
+
"flex": null,
|
173 |
+
"_model_name": "LayoutModel",
|
174 |
+
"justify_items": null,
|
175 |
+
"grid_row": null,
|
176 |
+
"max_height": null,
|
177 |
+
"align_content": null,
|
178 |
+
"visibility": null,
|
179 |
+
"align_self": null,
|
180 |
+
"height": null,
|
181 |
+
"min_height": null,
|
182 |
+
"padding": null,
|
183 |
+
"grid_auto_rows": null,
|
184 |
+
"grid_gap": null,
|
185 |
+
"max_width": null,
|
186 |
+
"order": null,
|
187 |
+
"_view_module_version": "1.2.0",
|
188 |
+
"grid_template_areas": null,
|
189 |
+
"object_position": null,
|
190 |
+
"object_fit": null,
|
191 |
+
"grid_auto_columns": null,
|
192 |
+
"margin": null,
|
193 |
+
"display": null,
|
194 |
+
"left": null
|
195 |
+
}
|
196 |
+
},
|
197 |
+
"53f9106c80e84d5b8c3ec96162d1db98": {
|
198 |
+
"model_module": "@jupyter-widgets/controls",
|
199 |
+
"model_name": "DescriptionStyleModel",
|
200 |
+
"state": {
|
201 |
+
"_view_name": "StyleView",
|
202 |
+
"_model_name": "DescriptionStyleModel",
|
203 |
+
"description_width": "",
|
204 |
+
"_view_module": "@jupyter-widgets/base",
|
205 |
+
"_model_module_version": "1.5.0",
|
206 |
+
"_view_count": null,
|
207 |
+
"_view_module_version": "1.2.0",
|
208 |
+
"_model_module": "@jupyter-widgets/controls"
|
209 |
+
}
|
210 |
+
},
|
211 |
+
"19c57d99e7c44cbda508ce558fde435d": {
|
212 |
+
"model_module": "@jupyter-widgets/base",
|
213 |
+
"model_name": "LayoutModel",
|
214 |
+
"state": {
|
215 |
+
"_view_name": "LayoutView",
|
216 |
+
"grid_template_rows": null,
|
217 |
+
"right": null,
|
218 |
+
"justify_content": null,
|
219 |
+
"_view_module": "@jupyter-widgets/base",
|
220 |
+
"overflow": null,
|
221 |
+
"_model_module_version": "1.2.0",
|
222 |
+
"_view_count": null,
|
223 |
+
"flex_flow": null,
|
224 |
+
"width": null,
|
225 |
+
"min_width": null,
|
226 |
+
"border": null,
|
227 |
+
"align_items": null,
|
228 |
+
"bottom": null,
|
229 |
+
"_model_module": "@jupyter-widgets/base",
|
230 |
+
"top": null,
|
231 |
+
"grid_column": null,
|
232 |
+
"overflow_y": null,
|
233 |
+
"overflow_x": null,
|
234 |
+
"grid_auto_flow": null,
|
235 |
+
"grid_area": null,
|
236 |
+
"grid_template_columns": null,
|
237 |
+
"flex": null,
|
238 |
+
"_model_name": "LayoutModel",
|
239 |
+
"justify_items": null,
|
240 |
+
"grid_row": null,
|
241 |
+
"max_height": null,
|
242 |
+
"align_content": null,
|
243 |
+
"visibility": null,
|
244 |
+
"align_self": null,
|
245 |
+
"height": null,
|
246 |
+
"min_height": null,
|
247 |
+
"padding": null,
|
248 |
+
"grid_auto_rows": null,
|
249 |
+
"grid_gap": null,
|
250 |
+
"max_width": null,
|
251 |
+
"order": null,
|
252 |
+
"_view_module_version": "1.2.0",
|
253 |
+
"grid_template_areas": null,
|
254 |
+
"object_position": null,
|
255 |
+
"object_fit": null,
|
256 |
+
"grid_auto_columns": null,
|
257 |
+
"margin": null,
|
258 |
+
"display": null,
|
259 |
+
"left": null
|
260 |
+
}
|
261 |
+
},
|
262 |
+
"fbb2b937b22049f5987f39f48c652a86": {
|
263 |
+
"model_module": "@jupyter-widgets/controls",
|
264 |
+
"model_name": "HBoxModel",
|
265 |
+
"state": {
|
266 |
+
"_view_name": "HBoxView",
|
267 |
+
"_dom_classes": [],
|
268 |
+
"_model_name": "HBoxModel",
|
269 |
+
"_view_module": "@jupyter-widgets/controls",
|
270 |
+
"_model_module_version": "1.5.0",
|
271 |
+
"_view_count": null,
|
272 |
+
"_view_module_version": "1.5.0",
|
273 |
+
"box_style": "",
|
274 |
+
"layout": "IPY_MODEL_0a1b6b76984349ccb36ca2fc4a4a0208",
|
275 |
+
"_model_module": "@jupyter-widgets/controls",
|
276 |
+
"children": [
|
277 |
+
"IPY_MODEL_c136afb47aa14ac2832093ee415c6f3e",
|
278 |
+
"IPY_MODEL_467a151e73744eccb199fe72aa352e5b"
|
279 |
+
]
|
280 |
+
}
|
281 |
+
},
|
282 |
+
"0a1b6b76984349ccb36ca2fc4a4a0208": {
|
283 |
+
"model_module": "@jupyter-widgets/base",
|
284 |
+
"model_name": "LayoutModel",
|
285 |
+
"state": {
|
286 |
+
"_view_name": "LayoutView",
|
287 |
+
"grid_template_rows": null,
|
288 |
+
"right": null,
|
289 |
+
"justify_content": null,
|
290 |
+
"_view_module": "@jupyter-widgets/base",
|
291 |
+
"overflow": null,
|
292 |
+
"_model_module_version": "1.2.0",
|
293 |
+
"_view_count": null,
|
294 |
+
"flex_flow": null,
|
295 |
+
"width": null,
|
296 |
+
"min_width": null,
|
297 |
+
"border": null,
|
298 |
+
"align_items": null,
|
299 |
+
"bottom": null,
|
300 |
+
"_model_module": "@jupyter-widgets/base",
|
301 |
+
"top": null,
|
302 |
+
"grid_column": null,
|
303 |
+
"overflow_y": null,
|
304 |
+
"overflow_x": null,
|
305 |
+
"grid_auto_flow": null,
|
306 |
+
"grid_area": null,
|
307 |
+
"grid_template_columns": null,
|
308 |
+
"flex": null,
|
309 |
+
"_model_name": "LayoutModel",
|
310 |
+
"justify_items": null,
|
311 |
+
"grid_row": null,
|
312 |
+
"max_height": null,
|
313 |
+
"align_content": null,
|
314 |
+
"visibility": null,
|
315 |
+
"align_self": null,
|
316 |
+
"height": null,
|
317 |
+
"min_height": null,
|
318 |
+
"padding": null,
|
319 |
+
"grid_auto_rows": null,
|
320 |
+
"grid_gap": null,
|
321 |
+
"max_width": null,
|
322 |
+
"order": null,
|
323 |
+
"_view_module_version": "1.2.0",
|
324 |
+
"grid_template_areas": null,
|
325 |
+
"object_position": null,
|
326 |
+
"object_fit": null,
|
327 |
+
"grid_auto_columns": null,
|
328 |
+
"margin": null,
|
329 |
+
"display": null,
|
330 |
+
"left": null
|
331 |
+
}
|
332 |
+
},
|
333 |
+
"c136afb47aa14ac2832093ee415c6f3e": {
|
334 |
+
"model_module": "@jupyter-widgets/controls",
|
335 |
+
"model_name": "FloatProgressModel",
|
336 |
+
"state": {
|
337 |
+
"_view_name": "ProgressView",
|
338 |
+
"style": "IPY_MODEL_f6d637c3fc3c46928d023441227130e5",
|
339 |
+
"_dom_classes": [],
|
340 |
+
"description": "100%",
|
341 |
+
"_model_name": "FloatProgressModel",
|
342 |
+
"bar_style": "success",
|
343 |
+
"max": 313,
|
344 |
+
"_view_module": "@jupyter-widgets/controls",
|
345 |
+
"_model_module_version": "1.5.0",
|
346 |
+
"value": 313,
|
347 |
+
"_view_count": null,
|
348 |
+
"_view_module_version": "1.5.0",
|
349 |
+
"orientation": "horizontal",
|
350 |
+
"min": 0,
|
351 |
+
"description_tooltip": null,
|
352 |
+
"_model_module": "@jupyter-widgets/controls",
|
353 |
+
"layout": "IPY_MODEL_029e6eadacb8480193aab52ff073be8f"
|
354 |
+
}
|
355 |
+
},
|
356 |
+
"467a151e73744eccb199fe72aa352e5b": {
|
357 |
+
"model_module": "@jupyter-widgets/controls",
|
358 |
+
"model_name": "HTMLModel",
|
359 |
+
"state": {
|
360 |
+
"_view_name": "HTMLView",
|
361 |
+
"style": "IPY_MODEL_30178355f76742898d37966b3875ef0a",
|
362 |
+
"_dom_classes": [],
|
363 |
+
"description": "",
|
364 |
+
"_model_name": "HTMLModel",
|
365 |
+
"placeholder": "",
|
366 |
+
"_view_module": "@jupyter-widgets/controls",
|
367 |
+
"_model_module_version": "1.5.0",
|
368 |
+
"value": " 313/313 [01:26<00:00, 3.62it/s]",
|
369 |
+
"_view_count": null,
|
370 |
+
"_view_module_version": "1.5.0",
|
371 |
+
"description_tooltip": null,
|
372 |
+
"_model_module": "@jupyter-widgets/controls",
|
373 |
+
"layout": "IPY_MODEL_2e62544c03d64d6d92b94fcfaca2fc90"
|
374 |
+
}
|
375 |
+
},
|
376 |
+
"f6d637c3fc3c46928d023441227130e5": {
|
377 |
+
"model_module": "@jupyter-widgets/controls",
|
378 |
+
"model_name": "ProgressStyleModel",
|
379 |
+
"state": {
|
380 |
+
"_view_name": "StyleView",
|
381 |
+
"_model_name": "ProgressStyleModel",
|
382 |
+
"description_width": "initial",
|
383 |
+
"_view_module": "@jupyter-widgets/base",
|
384 |
+
"_model_module_version": "1.5.0",
|
385 |
+
"_view_count": null,
|
386 |
+
"_view_module_version": "1.2.0",
|
387 |
+
"bar_color": null,
|
388 |
+
"_model_module": "@jupyter-widgets/controls"
|
389 |
+
}
|
390 |
+
},
|
391 |
+
"029e6eadacb8480193aab52ff073be8f": {
|
392 |
+
"model_module": "@jupyter-widgets/base",
|
393 |
+
"model_name": "LayoutModel",
|
394 |
+
"state": {
|
395 |
+
"_view_name": "LayoutView",
|
396 |
+
"grid_template_rows": null,
|
397 |
+
"right": null,
|
398 |
+
"justify_content": null,
|
399 |
+
"_view_module": "@jupyter-widgets/base",
|
400 |
+
"overflow": null,
|
401 |
+
"_model_module_version": "1.2.0",
|
402 |
+
"_view_count": null,
|
403 |
+
"flex_flow": null,
|
404 |
+
"width": null,
|
405 |
+
"min_width": null,
|
406 |
+
"border": null,
|
407 |
+
"align_items": null,
|
408 |
+
"bottom": null,
|
409 |
+
"_model_module": "@jupyter-widgets/base",
|
410 |
+
"top": null,
|
411 |
+
"grid_column": null,
|
412 |
+
"overflow_y": null,
|
413 |
+
"overflow_x": null,
|
414 |
+
"grid_auto_flow": null,
|
415 |
+
"grid_area": null,
|
416 |
+
"grid_template_columns": null,
|
417 |
+
"flex": null,
|
418 |
+
"_model_name": "LayoutModel",
|
419 |
+
"justify_items": null,
|
420 |
+
"grid_row": null,
|
421 |
+
"max_height": null,
|
422 |
+
"align_content": null,
|
423 |
+
"visibility": null,
|
424 |
+
"align_self": null,
|
425 |
+
"height": null,
|
426 |
+
"min_height": null,
|
427 |
+
"padding": null,
|
428 |
+
"grid_auto_rows": null,
|
429 |
+
"grid_gap": null,
|
430 |
+
"max_width": null,
|
431 |
+
"order": null,
|
432 |
+
"_view_module_version": "1.2.0",
|
433 |
+
"grid_template_areas": null,
|
434 |
+
"object_position": null,
|
435 |
+
"object_fit": null,
|
436 |
+
"grid_auto_columns": null,
|
437 |
+
"margin": null,
|
438 |
+
"display": null,
|
439 |
+
"left": null
|
440 |
+
}
|
441 |
+
},
|
442 |
+
"30178355f76742898d37966b3875ef0a": {
|
443 |
+
"model_module": "@jupyter-widgets/controls",
|
444 |
+
"model_name": "DescriptionStyleModel",
|
445 |
+
"state": {
|
446 |
+
"_view_name": "StyleView",
|
447 |
+
"_model_name": "DescriptionStyleModel",
|
448 |
+
"description_width": "",
|
449 |
+
"_view_module": "@jupyter-widgets/base",
|
450 |
+
"_model_module_version": "1.5.0",
|
451 |
+
"_view_count": null,
|
452 |
+
"_view_module_version": "1.2.0",
|
453 |
+
"_model_module": "@jupyter-widgets/controls"
|
454 |
+
}
|
455 |
+
},
|
456 |
+
"2e62544c03d64d6d92b94fcfaca2fc90": {
|
457 |
+
"model_module": "@jupyter-widgets/base",
|
458 |
+
"model_name": "LayoutModel",
|
459 |
+
"state": {
|
460 |
+
"_view_name": "LayoutView",
|
461 |
+
"grid_template_rows": null,
|
462 |
+
"right": null,
|
463 |
+
"justify_content": null,
|
464 |
+
"_view_module": "@jupyter-widgets/base",
|
465 |
+
"overflow": null,
|
466 |
+
"_model_module_version": "1.2.0",
|
467 |
+
"_view_count": null,
|
468 |
+
"flex_flow": null,
|
469 |
+
"width": null,
|
470 |
+
"min_width": null,
|
471 |
+
"border": null,
|
472 |
+
"align_items": null,
|
473 |
+
"bottom": null,
|
474 |
+
"_model_module": "@jupyter-widgets/base",
|
475 |
+
"top": null,
|
476 |
+
"grid_column": null,
|
477 |
+
"overflow_y": null,
|
478 |
+
"overflow_x": null,
|
479 |
+
"grid_auto_flow": null,
|
480 |
+
"grid_area": null,
|
481 |
+
"grid_template_columns": null,
|
482 |
+
"flex": null,
|
483 |
+
"_model_name": "LayoutModel",
|
484 |
+
"justify_items": null,
|
485 |
+
"grid_row": null,
|
486 |
+
"max_height": null,
|
487 |
+
"align_content": null,
|
488 |
+
"visibility": null,
|
489 |
+
"align_self": null,
|
490 |
+
"height": null,
|
491 |
+
"min_height": null,
|
492 |
+
"padding": null,
|
493 |
+
"grid_auto_rows": null,
|
494 |
+
"grid_gap": null,
|
495 |
+
"max_width": null,
|
496 |
+
"order": null,
|
497 |
+
"_view_module_version": "1.2.0",
|
498 |
+
"grid_template_areas": null,
|
499 |
+
"object_position": null,
|
500 |
+
"object_fit": null,
|
501 |
+
"grid_auto_columns": null,
|
502 |
+
"margin": null,
|
503 |
+
"display": null,
|
504 |
+
"left": null
|
505 |
+
}
|
506 |
+
}
|
507 |
+
}
|
508 |
+
}
|
509 |
+
},
|
510 |
+
"cells": [
|
511 |
+
{
|
512 |
+
"cell_type": "markdown",
|
513 |
+
"metadata": {
|
514 |
+
"id": "53N4k0pj_9qL"
|
515 |
+
},
|
516 |
+
"source": [
|
517 |
+
"# Preparation for Colab\n",
|
518 |
+
"\n",
|
519 |
+
"Make sure you're running a GPU runtime; if not, select \"GPU\" as the hardware accelerator in Runtime > Change Runtime Type in the menu. The next cells will print the CUDA version of the runtime if it has a GPU, and install PyTorch 1.7.1."
|
520 |
+
]
|
521 |
+
},
|
522 |
+
{
|
523 |
+
"cell_type": "code",
|
524 |
+
"metadata": {
|
525 |
+
"colab": {
|
526 |
+
"base_uri": "https://localhost:8080/"
|
527 |
+
},
|
528 |
+
"id": "0BpdJkdBssk9",
|
529 |
+
"outputId": "dc75b5f9-17c7-4856-ac79-8047fa609500"
|
530 |
+
},
|
531 |
+
"source": [
|
532 |
+
"import subprocess\n",
|
533 |
+
"\n",
|
534 |
+
"CUDA_version = [s for s in subprocess.check_output([\"nvcc\", \"--version\"]).decode(\"UTF-8\").split(\", \") if s.startswith(\"release\")][0].split(\" \")[-1]\n",
|
535 |
+
"print(\"CUDA version:\", CUDA_version)\n",
|
536 |
+
"\n",
|
537 |
+
"if CUDA_version == \"10.0\":\n",
|
538 |
+
" torch_version_suffix = \"+cu100\"\n",
|
539 |
+
"elif CUDA_version == \"10.1\":\n",
|
540 |
+
" torch_version_suffix = \"+cu101\"\n",
|
541 |
+
"elif CUDA_version == \"10.2\":\n",
|
542 |
+
" torch_version_suffix = \"\"\n",
|
543 |
+
"else:\n",
|
544 |
+
" torch_version_suffix = \"+cu110\""
|
545 |
+
],
|
546 |
+
"execution_count": 1,
|
547 |
+
"outputs": [
|
548 |
+
{
|
549 |
+
"output_type": "stream",
|
550 |
+
"text": [
|
551 |
+
"CUDA version: 10.1\n"
|
552 |
+
],
|
553 |
+
"name": "stdout"
|
554 |
+
}
|
555 |
+
]
|
556 |
+
},
|
557 |
+
{
|
558 |
+
"cell_type": "code",
|
559 |
+
"metadata": {
|
560 |
+
"colab": {
|
561 |
+
"base_uri": "https://localhost:8080/"
|
562 |
+
},
|
563 |
+
"id": "RBVr18E5tse8",
|
564 |
+
"outputId": "404230c1-0f78-451d-8816-19d4109d579e"
|
565 |
+
},
|
566 |
+
"source": [
|
567 |
+
"! pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex"
|
568 |
+
],
|
569 |
+
"execution_count": 2,
|
570 |
+
"outputs": [
|
571 |
+
{
|
572 |
+
"output_type": "stream",
|
573 |
+
"text": [
|
574 |
+
"Looking in links: https://download.pytorch.org/whl/torch_stable.html\n",
|
575 |
+
"Collecting torch==1.7.1+cu101\n",
|
576 |
+
"\u001b[?25l Downloading https://download.pytorch.org/whl/cu101/torch-1.7.1%2Bcu101-cp36-cp36m-linux_x86_64.whl (735.4MB)\n",
|
577 |
+
"\u001b[K |████████████████████████████████| 735.4MB 25kB/s \n",
|
578 |
+
"\u001b[?25hCollecting torchvision==0.8.2+cu101\n",
|
579 |
+
"\u001b[?25l Downloading https://download.pytorch.org/whl/cu101/torchvision-0.8.2%2Bcu101-cp36-cp36m-linux_x86_64.whl (12.8MB)\n",
|
580 |
+
"\u001b[K |████████████████████████████████| 12.8MB 248kB/s \n",
|
581 |
+
"\u001b[?25hCollecting ftfy\n",
|
582 |
+
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/ff/e2/3b51c53dffb1e52d9210ebc01f1fb9f2f6eba9b3201fa971fd3946643c71/ftfy-5.8.tar.gz (64kB)\n",
|
583 |
+
"\u001b[K |████████████████████████████████| 71kB 5.6MB/s \n",
|
584 |
+
"\u001b[?25hRequirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (2019.12.20)\n",
|
585 |
+
"Requirement already satisfied: typing-extensions in /usr/local/lib/python3.6/dist-packages (from torch==1.7.1+cu101) (3.7.4.3)\n",
|
586 |
+
"Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch==1.7.1+cu101) (1.19.5)\n",
|
587 |
+
"Requirement already satisfied: dataclasses; python_version < \"3.7\" in /usr/local/lib/python3.6/dist-packages (from torch==1.7.1+cu101) (0.8)\n",
|
588 |
+
"Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision==0.8.2+cu101) (7.0.0)\n",
|
589 |
+
"Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from ftfy) (0.2.5)\n",
|
590 |
+
"Building wheels for collected packages: ftfy\n",
|
591 |
+
" Building wheel for ftfy (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
|
592 |
+
" Created wheel for ftfy: filename=ftfy-5.8-cp36-none-any.whl size=45613 sha256=73a94b51b7fe03350783d5b9dd638801a904c618d3b0dc7237ce77f401f33404\n",
|
593 |
+
" Stored in directory: /root/.cache/pip/wheels/ba/c0/ef/f28c4da5ac84a4e06ac256ca9182fc34fa57fefffdbc68425b\n",
|
594 |
+
"Successfully built ftfy\n",
|
595 |
+
"Installing collected packages: torch, torchvision, ftfy\n",
|
596 |
+
" Found existing installation: torch 1.7.0+cu101\n",
|
597 |
+
" Uninstalling torch-1.7.0+cu101:\n",
|
598 |
+
" Successfully uninstalled torch-1.7.0+cu101\n",
|
599 |
+
" Found existing installation: torchvision 0.8.1+cu101\n",
|
600 |
+
" Uninstalling torchvision-0.8.1+cu101:\n",
|
601 |
+
" Successfully uninstalled torchvision-0.8.1+cu101\n",
|
602 |
+
"Successfully installed ftfy-5.8 torch-1.7.1+cu101 torchvision-0.8.2+cu101\n"
|
603 |
+
],
|
604 |
+
"name": "stdout"
|
605 |
+
}
|
606 |
+
]
|
607 |
+
},
|
608 |
+
{
|
609 |
+
"cell_type": "markdown",
|
610 |
+
"metadata": {
|
611 |
+
"id": "zGm7TwfbDLgu"
|
612 |
+
},
|
613 |
+
"source": [
|
614 |
+
"The following command installs the `clip` module from its source:"
|
615 |
+
]
|
616 |
+
},
|
617 |
+
{
|
618 |
+
"cell_type": "code",
|
619 |
+
"metadata": {
|
620 |
+
"colab": {
|
621 |
+
"base_uri": "https://localhost:8080/"
|
622 |
+
},
|
623 |
+
"id": "QAFjXlGdEMQM",
|
624 |
+
"outputId": "859da71b-00c8-44d1-84d0-7965c20411b4"
|
625 |
+
},
|
626 |
+
"source": [
|
627 |
+
"! pip install git+https://github.com/openai/CLIP.git"
|
628 |
+
],
|
629 |
+
"execution_count": 3,
|
630 |
+
"outputs": [
|
631 |
+
{
|
632 |
+
"output_type": "stream",
|
633 |
+
"text": [
|
634 |
+
"Collecting git+https://github.com/openai/CLIP.git\n",
|
635 |
+
" Cloning https://github.com/openai/CLIP.git to /tmp/pip-req-build-ewapt31c\n",
|
636 |
+
" Running command git clone -q https://github.com/openai/CLIP.git /tmp/pip-req-build-ewapt31c\n",
|
637 |
+
"Requirement already satisfied: ftfy in /usr/local/lib/python3.6/dist-packages (from clip==1.0) (5.8)\n",
|
638 |
+
"Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from clip==1.0) (2019.12.20)\n",
|
639 |
+
"Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from clip==1.0) (4.41.1)\n",
|
640 |
+
"Requirement already satisfied: torch~=1.7.1 in /usr/local/lib/python3.6/dist-packages (from clip==1.0) (1.7.1+cu101)\n",
|
641 |
+
"Requirement already satisfied: torchvision~=0.8.2 in /usr/local/lib/python3.6/dist-packages (from clip==1.0) (0.8.2+cu101)\n",
|
642 |
+
"Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from ftfy->clip==1.0) (0.2.5)\n",
|
643 |
+
"Requirement already satisfied: dataclasses; python_version < \"3.7\" in /usr/local/lib/python3.6/dist-packages (from torch~=1.7.1->clip==1.0) (0.8)\n",
|
644 |
+
"Requirement already satisfied: typing-extensions in /usr/local/lib/python3.6/dist-packages (from torch~=1.7.1->clip==1.0) (3.7.4.3)\n",
|
645 |
+
"Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch~=1.7.1->clip==1.0) (1.19.5)\n",
|
646 |
+
"Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision~=0.8.2->clip==1.0) (7.0.0)\n",
|
647 |
+
"Building wheels for collected packages: clip\n",
|
648 |
+
" Building wheel for clip (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
|
649 |
+
" Created wheel for clip: filename=clip-1.0-cp36-none-any.whl size=1367993 sha256=1839a2f0b015f75579b578ebfa15bcbe8ebab1ff535127c9357c5b26f8473de3\n",
|
650 |
+
" Stored in directory: /tmp/pip-ephem-wheel-cache-jwymwzm4/wheels/79/51/d7/69f91d37121befe21d9c52332e04f592e17d1cabc7319b3e09\n",
|
651 |
+
"Successfully built clip\n",
|
652 |
+
"Installing collected packages: clip\n",
|
653 |
+
"Successfully installed clip-1.0\n"
|
654 |
+
],
|
655 |
+
"name": "stdout"
|
656 |
+
}
|
657 |
+
]
|
658 |
+
},
|
659 |
+
{
|
660 |
+
"cell_type": "code",
|
661 |
+
"metadata": {
|
662 |
+
"id": "C1hkDT38hSaP",
|
663 |
+
"colab": {
|
664 |
+
"base_uri": "https://localhost:8080/"
|
665 |
+
},
|
666 |
+
"outputId": "6cd33e12-aed4-4950-e32f-6f1113eb3ade"
|
667 |
+
},
|
668 |
+
"source": [
|
669 |
+
"import numpy as np\n",
|
670 |
+
"import torch\n",
|
671 |
+
"import clip\n",
|
672 |
+
"from tqdm.notebook import tqdm\n",
|
673 |
+
"\n",
|
674 |
+
"print(\"Torch version:\", torch.__version__)"
|
675 |
+
],
|
676 |
+
"execution_count": 4,
|
677 |
+
"outputs": [
|
678 |
+
{
|
679 |
+
"output_type": "stream",
|
680 |
+
"text": [
|
681 |
+
"Torch version: 1.7.1+cu101\n"
|
682 |
+
],
|
683 |
+
"name": "stdout"
|
684 |
+
}
|
685 |
+
]
|
686 |
+
},
|
687 |
+
{
|
688 |
+
"cell_type": "markdown",
|
689 |
+
"metadata": {
|
690 |
+
"id": "eFxgLV5HAEEw"
|
691 |
+
},
|
692 |
+
"source": [
|
693 |
+
"# Loading the model\n",
|
694 |
+
"\n",
|
695 |
+
"Download and instantiate a CLIP model using the `clip` module that we just installed."
|
696 |
+
]
|
697 |
+
},
|
698 |
+
{
|
699 |
+
"cell_type": "code",
|
700 |
+
"metadata": {
|
701 |
+
"id": "uLFS29hnhlY4",
|
702 |
+
"colab": {
|
703 |
+
"base_uri": "https://localhost:8080/"
|
704 |
+
},
|
705 |
+
"outputId": "3148f942-0226-42a3-e5d8-4b9bc6c7c4f8"
|
706 |
+
},
|
707 |
+
"source": [
|
708 |
+
"clip.available_models()"
|
709 |
+
],
|
710 |
+
"execution_count": 5,
|
711 |
+
"outputs": [
|
712 |
+
{
|
713 |
+
"output_type": "execute_result",
|
714 |
+
"data": {
|
715 |
+
"text/plain": [
|
716 |
+
"['RN50', 'ViT-B/32']"
|
717 |
+
]
|
718 |
+
},
|
719 |
+
"metadata": {
|
720 |
+
"tags": []
|
721 |
+
},
|
722 |
+
"execution_count": 5
|
723 |
+
}
|
724 |
+
]
|
725 |
+
},
|
726 |
+
{
|
727 |
+
"cell_type": "code",
|
728 |
+
"metadata": {
|
729 |
+
"id": "cboKZocQlSYX",
|
730 |
+
"colab": {
|
731 |
+
"base_uri": "https://localhost:8080/"
|
732 |
+
},
|
733 |
+
"outputId": "58e644d4-6e23-43b5-964e-1e9e8540d22e"
|
734 |
+
},
|
735 |
+
"source": [
|
736 |
+
"model, preprocess = clip.load(\"ViT-B/32\")"
|
737 |
+
],
|
738 |
+
"execution_count": 6,
|
739 |
+
"outputs": [
|
740 |
+
{
|
741 |
+
"output_type": "stream",
|
742 |
+
"text": [
|
743 |
+
"100%|██████████████████████| 353976522/353976522 [00:01<00:00, 188872424.30it/s]\n"
|
744 |
+
],
|
745 |
+
"name": "stderr"
|
746 |
+
}
|
747 |
+
]
|
748 |
+
},
|
749 |
+
{
|
750 |
+
"cell_type": "code",
|
751 |
+
"metadata": {
|
752 |
+
"colab": {
|
753 |
+
"base_uri": "https://localhost:8080/"
|
754 |
+
},
|
755 |
+
"id": "IBRVTY9lbGm8",
|
756 |
+
"outputId": "58641dc2-919d-40ae-b71a-7b7b47830f77"
|
757 |
+
},
|
758 |
+
"source": [
|
759 |
+
"input_resolution = model.input_resolution.item()\n",
|
760 |
+
"context_length = model.context_length.item()\n",
|
761 |
+
"vocab_size = model.vocab_size.item()\n",
|
762 |
+
"\n",
|
763 |
+
"print(\"Model parameters:\", f\"{np.sum([int(np.prod(p.shape)) for p in model.parameters()]):,}\")\n",
|
764 |
+
"print(\"Input resolution:\", input_resolution)\n",
|
765 |
+
"print(\"Context length:\", context_length)\n",
|
766 |
+
"print(\"Vocab size:\", vocab_size)"
|
767 |
+
],
|
768 |
+
"execution_count": 7,
|
769 |
+
"outputs": [
|
770 |
+
{
|
771 |
+
"output_type": "stream",
|
772 |
+
"text": [
|
773 |
+
"Model parameters: 151,277,313\n",
|
774 |
+
"Input resolution: 224\n",
|
775 |
+
"Context length: 77\n",
|
776 |
+
"Vocab size: 49408\n"
|
777 |
+
],
|
778 |
+
"name": "stdout"
|
779 |
+
}
|
780 |
+
]
|
781 |
+
},
|
782 |
+
{
|
783 |
+
"cell_type": "markdown",
|
784 |
+
"metadata": {
|
785 |
+
"id": "LhO3OtOmF8M4"
|
786 |
+
},
|
787 |
+
"source": [
|
788 |
+
"# Preparing ImageNet labels and prompts\n",
|
789 |
+
"\n",
|
790 |
+
"The following cell contains the 1,000 labels for the ImageNet dataset, followed by the text templates we'll use as \"prompt engineering\"."
|
791 |
+
]
|
792 |
+
},
|
793 |
+
{
|
794 |
+
"cell_type": "code",
|
795 |
+
"metadata": {
|
796 |
+
"id": "R2HbOZrqa0jF"
|
797 |
+
},
|
798 |
+
"source": [
|
799 |
+
"imagenet_classes = [\"tench\", \"goldfish\", \"great white shark\", \"tiger shark\", \"hammerhead shark\", \"electric ray\", \"stingray\", \"rooster\", \"hen\", \"ostrich\", \"brambling\", \"goldfinch\", \"house finch\", \"junco\", \"indigo bunting\", \"American robin\", \"bulbul\", \"jay\", \"magpie\", \"chickadee\", \"American dipper\", \"kite (bird of prey)\", \"bald eagle\", \"vulture\", \"great grey owl\", \"fire salamander\", \"smooth newt\", \"newt\", \"spotted salamander\", \"axolotl\", \"American bullfrog\", \"tree frog\", \"tailed frog\", \"loggerhead sea turtle\", \"leatherback sea turtle\", \"mud turtle\", \"terrapin\", \"box turtle\", \"banded gecko\", \"green iguana\", \"Carolina anole\", \"desert grassland whiptail lizard\", \"agama\", \"frilled-necked lizard\", \"alligator lizard\", \"Gila monster\", \"European green lizard\", \"chameleon\", \"Komodo dragon\", \"Nile crocodile\", \"American alligator\", \"triceratops\", \"worm snake\", \"ring-necked snake\", \"eastern hog-nosed snake\", \"smooth green snake\", \"kingsnake\", \"garter snake\", \"water snake\", \"vine snake\", \"night snake\", \"boa constrictor\", \"African rock python\", \"Indian cobra\", \"green mamba\", \"sea snake\", \"Saharan horned viper\", \"eastern diamondback rattlesnake\", \"sidewinder rattlesnake\", \"trilobite\", \"harvestman\", \"scorpion\", \"yellow garden spider\", \"barn spider\", \"European garden spider\", \"southern black widow\", \"tarantula\", \"wolf spider\", \"tick\", \"centipede\", \"black grouse\", \"ptarmigan\", \"ruffed grouse\", \"prairie grouse\", \"peafowl\", \"quail\", \"partridge\", \"african grey parrot\", \"macaw\", \"sulphur-crested cockatoo\", \"lorikeet\", \"coucal\", \"bee eater\", \"hornbill\", \"hummingbird\", \"jacamar\", \"toucan\", \"duck\", \"red-breasted merganser\", \"goose\", \"black swan\", \"tusker\", \"echidna\", \"platypus\", \"wallaby\", \"koala\", \"wombat\", \"jellyfish\", \"sea anemone\", \"brain coral\", \"flatworm\", \"nematode\", \"conch\", \"snail\", \"slug\", \"sea slug\", \"chiton\", \"chambered nautilus\", \"Dungeness crab\", \"rock crab\", \"fiddler crab\", \"red king crab\", \"American lobster\", \"spiny lobster\", \"crayfish\", \"hermit crab\", \"isopod\", \"white stork\", \"black stork\", \"spoonbill\", \"flamingo\", \"little blue heron\", \"great egret\", \"bittern bird\", \"crane bird\", \"limpkin\", \"common gallinule\", \"American coot\", \"bustard\", \"ruddy turnstone\", \"dunlin\", \"common redshank\", \"dowitcher\", \"oystercatcher\", \"pelican\", \"king penguin\", \"albatross\", \"grey whale\", \"killer whale\", \"dugong\", \"sea lion\", \"Chihuahua\", \"Japanese Chin\", \"Maltese\", \"Pekingese\", \"Shih Tzu\", \"King Charles Spaniel\", \"Papillon\", \"toy terrier\", \"Rhodesian Ridgeback\", \"Afghan Hound\", \"Basset Hound\", \"Beagle\", \"Bloodhound\", \"Bluetick Coonhound\", \"Black and Tan Coonhound\", \"Treeing Walker Coonhound\", \"English foxhound\", \"Redbone Coonhound\", \"borzoi\", \"Irish Wolfhound\", \"Italian Greyhound\", \"Whippet\", \"Ibizan Hound\", \"Norwegian Elkhound\", \"Otterhound\", \"Saluki\", \"Scottish Deerhound\", \"Weimaraner\", \"Staffordshire Bull Terrier\", \"American Staffordshire Terrier\", \"Bedlington Terrier\", \"Border Terrier\", \"Kerry Blue Terrier\", \"Irish Terrier\", \"Norfolk Terrier\", \"Norwich Terrier\", \"Yorkshire Terrier\", \"Wire Fox Terrier\", \"Lakeland Terrier\", \"Sealyham Terrier\", \"Airedale Terrier\", \"Cairn Terrier\", \"Australian Terrier\", \"Dandie Dinmont Terrier\", 
\"Boston Terrier\", \"Miniature Schnauzer\", \"Giant Schnauzer\", \"Standard Schnauzer\", \"Scottish Terrier\", \"Tibetan Terrier\", \"Australian Silky Terrier\", \"Soft-coated Wheaten Terrier\", \"West Highland White Terrier\", \"Lhasa Apso\", \"Flat-Coated Retriever\", \"Curly-coated Retriever\", \"Golden Retriever\", \"Labrador Retriever\", \"Chesapeake Bay Retriever\", \"German Shorthaired Pointer\", \"Vizsla\", \"English Setter\", \"Irish Setter\", \"Gordon Setter\", \"Brittany dog\", \"Clumber Spaniel\", \"English Springer Spaniel\", \"Welsh Springer Spaniel\", \"Cocker Spaniel\", \"Sussex Spaniel\", \"Irish Water Spaniel\", \"Kuvasz\", \"Schipperke\", \"Groenendael dog\", \"Malinois\", \"Briard\", \"Australian Kelpie\", \"Komondor\", \"Old English Sheepdog\", \"Shetland Sheepdog\", \"collie\", \"Border Collie\", \"Bouvier des Flandres dog\", \"Rottweiler\", \"German Shepherd Dog\", \"Dobermann\", \"Miniature Pinscher\", \"Greater Swiss Mountain Dog\", \"Bernese Mountain Dog\", \"Appenzeller Sennenhund\", \"Entlebucher Sennenhund\", \"Boxer\", \"Bullmastiff\", \"Tibetan Mastiff\", \"French Bulldog\", \"Great Dane\", \"St. Bernard\", \"husky\", \"Alaskan Malamute\", \"Siberian Husky\", \"Dalmatian\", \"Affenpinscher\", \"Basenji\", \"pug\", \"Leonberger\", \"Newfoundland dog\", \"Great Pyrenees dog\", \"Samoyed\", \"Pomeranian\", \"Chow Chow\", \"Keeshond\", \"brussels griffon\", \"Pembroke Welsh Corgi\", \"Cardigan Welsh Corgi\", \"Toy Poodle\", \"Miniature Poodle\", \"Standard Poodle\", \"Mexican hairless dog (xoloitzcuintli)\", \"grey wolf\", \"Alaskan tundra wolf\", \"red wolf or maned wolf\", \"coyote\", \"dingo\", \"dhole\", \"African wild dog\", \"hyena\", \"red fox\", \"kit fox\", \"Arctic fox\", \"grey fox\", \"tabby cat\", \"tiger cat\", \"Persian cat\", \"Siamese cat\", \"Egyptian Mau\", \"cougar\", \"lynx\", \"leopard\", \"snow leopard\", \"jaguar\", \"lion\", \"tiger\", \"cheetah\", \"brown bear\", \"American black bear\", \"polar bear\", \"sloth bear\", \"mongoose\", \"meerkat\", \"tiger beetle\", \"ladybug\", \"ground beetle\", \"longhorn beetle\", \"leaf beetle\", \"dung beetle\", \"rhinoceros beetle\", \"weevil\", \"fly\", \"bee\", \"ant\", \"grasshopper\", \"cricket insect\", \"stick insect\", \"cockroach\", \"praying mantis\", \"cicada\", \"leafhopper\", \"lacewing\", \"dragonfly\", \"damselfly\", \"red admiral butterfly\", \"ringlet butterfly\", \"monarch butterfly\", \"small white butterfly\", \"sulphur butterfly\", \"gossamer-winged butterfly\", \"starfish\", \"sea urchin\", \"sea cucumber\", \"cottontail rabbit\", \"hare\", \"Angora rabbit\", \"hamster\", \"porcupine\", \"fox squirrel\", \"marmot\", \"beaver\", \"guinea pig\", \"common sorrel horse\", \"zebra\", \"pig\", \"wild boar\", \"warthog\", \"hippopotamus\", \"ox\", \"water buffalo\", \"bison\", \"ram (adult male sheep)\", \"bighorn sheep\", \"Alpine ibex\", \"hartebeest\", \"impala (antelope)\", \"gazelle\", \"arabian camel\", \"llama\", \"weasel\", \"mink\", \"European polecat\", \"black-footed ferret\", \"otter\", \"skunk\", \"badger\", \"armadillo\", \"three-toed sloth\", \"orangutan\", \"gorilla\", \"chimpanzee\", \"gibbon\", \"siamang\", \"guenon\", \"patas monkey\", \"baboon\", \"macaque\", \"langur\", \"black-and-white colobus\", \"proboscis monkey\", \"marmoset\", \"white-headed capuchin\", \"howler monkey\", \"titi monkey\", \"Geoffroy's spider monkey\", \"common squirrel monkey\", \"ring-tailed lemur\", \"indri\", \"Asian elephant\", \"African bush elephant\", \"red panda\", \"giant panda\", 
\"snoek fish\", \"eel\", \"silver salmon\", \"rock beauty fish\", \"clownfish\", \"sturgeon\", \"gar fish\", \"lionfish\", \"pufferfish\", \"abacus\", \"abaya\", \"academic gown\", \"accordion\", \"acoustic guitar\", \"aircraft carrier\", \"airliner\", \"airship\", \"altar\", \"ambulance\", \"amphibious vehicle\", \"analog clock\", \"apiary\", \"apron\", \"trash can\", \"assault rifle\", \"backpack\", \"bakery\", \"balance beam\", \"balloon\", \"ballpoint pen\", \"Band-Aid\", \"banjo\", \"baluster / handrail\", \"barbell\", \"barber chair\", \"barbershop\", \"barn\", \"barometer\", \"barrel\", \"wheelbarrow\", \"baseball\", \"basketball\", \"bassinet\", \"bassoon\", \"swimming cap\", \"bath towel\", \"bathtub\", \"station wagon\", \"lighthouse\", \"beaker\", \"military hat (bearskin or shako)\", \"beer bottle\", \"beer glass\", \"bell tower\", \"baby bib\", \"tandem bicycle\", \"bikini\", \"ring binder\", \"binoculars\", \"birdhouse\", \"boathouse\", \"bobsleigh\", \"bolo tie\", \"poke bonnet\", \"bookcase\", \"bookstore\", \"bottle cap\", \"hunting bow\", \"bow tie\", \"brass memorial plaque\", \"bra\", \"breakwater\", \"breastplate\", \"broom\", \"bucket\", \"buckle\", \"bulletproof vest\", \"high-speed train\", \"butcher shop\", \"taxicab\", \"cauldron\", \"candle\", \"cannon\", \"canoe\", \"can opener\", \"cardigan\", \"car mirror\", \"carousel\", \"tool kit\", \"cardboard box / carton\", \"car wheel\", \"automated teller machine\", \"cassette\", \"cassette player\", \"castle\", \"catamaran\", \"CD player\", \"cello\", \"mobile phone\", \"chain\", \"chain-link fence\", \"chain mail\", \"chainsaw\", \"storage chest\", \"chiffonier\", \"bell or wind chime\", \"china cabinet\", \"Christmas stocking\", \"church\", \"movie theater\", \"cleaver\", \"cliff dwelling\", \"cloak\", \"clogs\", \"cocktail shaker\", \"coffee mug\", \"coffeemaker\", \"spiral or coil\", \"combination lock\", \"computer keyboard\", \"candy store\", \"container ship\", \"convertible\", \"corkscrew\", \"cornet\", \"cowboy boot\", \"cowboy hat\", \"cradle\", \"construction crane\", \"crash helmet\", \"crate\", \"infant bed\", \"Crock Pot\", \"croquet ball\", \"crutch\", \"cuirass\", \"dam\", \"desk\", \"desktop computer\", \"rotary dial telephone\", \"diaper\", \"digital clock\", \"digital watch\", \"dining table\", \"dishcloth\", \"dishwasher\", \"disc brake\", \"dock\", \"dog sled\", \"dome\", \"doormat\", \"drilling rig\", \"drum\", \"drumstick\", \"dumbbell\", \"Dutch oven\", \"electric fan\", \"electric guitar\", \"electric locomotive\", \"entertainment center\", \"envelope\", \"espresso machine\", \"face powder\", \"feather boa\", \"filing cabinet\", \"fireboat\", \"fire truck\", \"fire screen\", \"flagpole\", \"flute\", \"folding chair\", \"football helmet\", \"forklift\", \"fountain\", \"fountain pen\", \"four-poster bed\", \"freight car\", \"French horn\", \"frying pan\", \"fur coat\", \"garbage truck\", \"gas mask or respirator\", \"gas pump\", \"goblet\", \"go-kart\", \"golf ball\", \"golf cart\", \"gondola\", \"gong\", \"gown\", \"grand piano\", \"greenhouse\", \"radiator grille\", \"grocery store\", \"guillotine\", \"hair clip\", \"hair spray\", \"half-track\", \"hammer\", \"hamper\", \"hair dryer\", \"hand-held computer\", \"handkerchief\", \"hard disk drive\", \"harmonica\", \"harp\", \"combine harvester\", \"hatchet\", \"holster\", \"home theater\", \"honeycomb\", \"hook\", \"hoop skirt\", \"gymnastic horizontal bar\", \"horse-drawn vehicle\", \"hourglass\", \"iPod\", \"clothes iron\", \"carved pumpkin\", 
\"jeans\", \"jeep\", \"T-shirt\", \"jigsaw puzzle\", \"rickshaw\", \"joystick\", \"kimono\", \"knee pad\", \"knot\", \"lab coat\", \"ladle\", \"lampshade\", \"laptop computer\", \"lawn mower\", \"lens cap\", \"letter opener\", \"library\", \"lifeboat\", \"lighter\", \"limousine\", \"ocean liner\", \"lipstick\", \"slip-on shoe\", \"lotion\", \"music speaker\", \"loupe magnifying glass\", \"sawmill\", \"magnetic compass\", \"messenger bag\", \"mailbox\", \"tights\", \"one-piece bathing suit\", \"manhole cover\", \"maraca\", \"marimba\", \"mask\", \"matchstick\", \"maypole\", \"maze\", \"measuring cup\", \"medicine cabinet\", \"megalith\", \"microphone\", \"microwave oven\", \"military uniform\", \"milk can\", \"minibus\", \"miniskirt\", \"minivan\", \"missile\", \"mitten\", \"mixing bowl\", \"mobile home\", \"ford model t\", \"modem\", \"monastery\", \"monitor\", \"moped\", \"mortar and pestle\", \"graduation cap\", \"mosque\", \"mosquito net\", \"vespa\", \"mountain bike\", \"tent\", \"computer mouse\", \"mousetrap\", \"moving van\", \"muzzle\", \"metal nail\", \"neck brace\", \"necklace\", \"baby pacifier\", \"notebook computer\", \"obelisk\", \"oboe\", \"ocarina\", \"odometer\", \"oil filter\", \"pipe organ\", \"oscilloscope\", \"overskirt\", \"bullock cart\", \"oxygen mask\", \"product packet / packaging\", \"paddle\", \"paddle wheel\", \"padlock\", \"paintbrush\", \"pajamas\", \"palace\", \"pan flute\", \"paper towel\", \"parachute\", \"parallel bars\", \"park bench\", \"parking meter\", \"railroad car\", \"patio\", \"payphone\", \"pedestal\", \"pencil case\", \"pencil sharpener\", \"perfume\", \"Petri dish\", \"photocopier\", \"plectrum\", \"Pickelhaube\", \"picket fence\", \"pickup truck\", \"pier\", \"piggy bank\", \"pill bottle\", \"pillow\", \"ping-pong ball\", \"pinwheel\", \"pirate ship\", \"drink pitcher\", \"block plane\", \"planetarium\", \"plastic bag\", \"plate rack\", \"farm plow\", \"plunger\", \"Polaroid camera\", \"pole\", \"police van\", \"poncho\", \"pool table\", \"soda bottle\", \"plant pot\", \"potter's wheel\", \"power drill\", \"prayer rug\", \"printer\", \"prison\", \"missile\", \"projector\", \"hockey puck\", \"punching bag\", \"purse\", \"quill\", \"quilt\", \"race car\", \"racket\", \"radiator\", \"radio\", \"radio telescope\", \"rain barrel\", \"recreational vehicle\", \"fishing casting reel\", \"reflex camera\", \"refrigerator\", \"remote control\", \"restaurant\", \"revolver\", \"rifle\", \"rocking chair\", \"rotisserie\", \"eraser\", \"rugby ball\", \"ruler measuring stick\", \"sneaker\", \"safe\", \"safety pin\", \"salt shaker\", \"sandal\", \"sarong\", \"saxophone\", \"scabbard\", \"weighing scale\", \"school bus\", \"schooner\", \"scoreboard\", \"CRT monitor\", \"screw\", \"screwdriver\", \"seat belt\", \"sewing machine\", \"shield\", \"shoe store\", \"shoji screen / room divider\", \"shopping basket\", \"shopping cart\", \"shovel\", \"shower cap\", \"shower curtain\", \"ski\", \"balaclava ski mask\", \"sleeping bag\", \"slide rule\", \"sliding door\", \"slot machine\", \"snorkel\", \"snowmobile\", \"snowplow\", \"soap dispenser\", \"soccer ball\", \"sock\", \"solar thermal collector\", \"sombrero\", \"soup bowl\", \"keyboard space bar\", \"space heater\", \"space shuttle\", \"spatula\", \"motorboat\", \"spider web\", \"spindle\", \"sports car\", \"spotlight\", \"stage\", \"steam locomotive\", \"through arch bridge\", \"steel drum\", \"stethoscope\", \"scarf\", \"stone wall\", \"stopwatch\", \"stove\", \"strainer\", \"tram\", \"stretcher\", \"couch\", 
\"stupa\", \"submarine\", \"suit\", \"sundial\", \"sunglasses\", \"sunglasses\", \"sunscreen\", \"suspension bridge\", \"mop\", \"sweatshirt\", \"swim trunks / shorts\", \"swing\", \"electrical switch\", \"syringe\", \"table lamp\", \"tank\", \"tape player\", \"teapot\", \"teddy bear\", \"television\", \"tennis ball\", \"thatched roof\", \"front curtain\", \"thimble\", \"threshing machine\", \"throne\", \"tile roof\", \"toaster\", \"tobacco shop\", \"toilet seat\", \"torch\", \"totem pole\", \"tow truck\", \"toy store\", \"tractor\", \"semi-trailer truck\", \"tray\", \"trench coat\", \"tricycle\", \"trimaran\", \"tripod\", \"triumphal arch\", \"trolleybus\", \"trombone\", \"hot tub\", \"turnstile\", \"typewriter keyboard\", \"umbrella\", \"unicycle\", \"upright piano\", \"vacuum cleaner\", \"vase\", \"vaulted or arched ceiling\", \"velvet fabric\", \"vending machine\", \"vestment\", \"viaduct\", \"violin\", \"volleyball\", \"waffle iron\", \"wall clock\", \"wallet\", \"wardrobe\", \"military aircraft\", \"sink\", \"washing machine\", \"water bottle\", \"water jug\", \"water tower\", \"whiskey jug\", \"whistle\", \"hair wig\", \"window screen\", \"window shade\", \"Windsor tie\", \"wine bottle\", \"airplane wing\", \"wok\", \"wooden spoon\", \"wool\", \"split-rail fence\", \"shipwreck\", \"sailboat\", \"yurt\", \"website\", \"comic book\", \"crossword\", \"traffic or street sign\", \"traffic light\", \"dust jacket\", \"menu\", \"plate\", \"guacamole\", \"consomme\", \"hot pot\", \"trifle\", \"ice cream\", \"popsicle\", \"baguette\", \"bagel\", \"pretzel\", \"cheeseburger\", \"hot dog\", \"mashed potatoes\", \"cabbage\", \"broccoli\", \"cauliflower\", \"zucchini\", \"spaghetti squash\", \"acorn squash\", \"butternut squash\", \"cucumber\", \"artichoke\", \"bell pepper\", \"cardoon\", \"mushroom\", \"Granny Smith apple\", \"strawberry\", \"orange\", \"lemon\", \"fig\", \"pineapple\", \"banana\", \"jackfruit\", \"cherimoya (custard apple)\", \"pomegranate\", \"hay\", \"carbonara\", \"chocolate syrup\", \"dough\", \"meatloaf\", \"pizza\", \"pot pie\", \"burrito\", \"red wine\", \"espresso\", \"tea cup\", \"eggnog\", \"mountain\", \"bubble\", \"cliff\", \"coral reef\", \"geyser\", \"lakeshore\", \"promontory\", \"sandbar\", \"beach\", \"valley\", \"volcano\", \"baseball player\", \"bridegroom\", \"scuba diver\", \"rapeseed\", \"daisy\", \"yellow lady's slipper\", \"corn\", \"acorn\", \"rose hip\", \"horse chestnut seed\", \"coral fungus\", \"agaric\", \"gyromitra\", \"stinkhorn mushroom\", \"earth star fungus\", \"hen of the woods mushroom\", \"bolete\", \"corn cob\", \"toilet paper\"]"
|
800 |
+
],
|
801 |
+
"execution_count": 8,
|
802 |
+
"outputs": []
|
803 |
+
},
|
804 |
+
{
|
805 |
+
"cell_type": "markdown",
|
806 |
+
"metadata": {
|
807 |
+
"id": "eMQSCuBta2G6"
|
808 |
+
},
|
809 |
+
"source": [
|
810 |
+
"A subset of these class names are modified from the default ImageNet class names sourced from Anish Athalye's imagenet-simple-labels.\n",
|
811 |
+
"\n",
|
812 |
+
"These edits were made via trial and error and concentrated on the lowest performing classes according to top_1 and top_5 accuracy on the ImageNet training set for the RN50, RN101, and RN50x4 models. These tweaks improve top_1 by 1.5% on ViT-B/32 over using the default class names. Alec got bored somewhere along the way as gains started to diminish and never finished updating / tweaking the list. He also didn't revisit this with the better performing RN50x16, RN50x64, or any of the ViT models. He thinks it's likely another 0.5% to 1% top_1 could be gained from further work here. It'd be interesting to more rigorously study / understand this.\n",
|
813 |
+
"\n",
|
814 |
+
"Some examples beyond the crane/crane -> construction crane / bird crane issue mentioned in Section 3.1.4 of the paper include:\n",
|
815 |
+
"\n",
|
816 |
+
"- CLIP interprets \"nail\" as \"fingernail\" so we changed the label to \"metal nail\".\n",
|
817 |
+
"- ImageNet kite class refers to the bird of prey, not the flying toy, so we changed \"kite\" to \"kite (bird of prey)\"\n",
|
818 |
+
"- The ImageNet class for red wolf seems to include a lot of mislabeled maned wolfs so we changed \"red wolf\" to \"red wolf or maned wolf\""
|
819 |
+
]
|
820 |
+
},
|
821 |
+
{
|
822 |
+
"cell_type": "code",
|
823 |
+
"metadata": {
|
824 |
+
"id": "toGtcd-Ji_MD",
|
825 |
+
"colab": {
|
826 |
+
"base_uri": "https://localhost:8080/"
|
827 |
+
},
|
828 |
+
"outputId": "46bcc85f-3968-4836-f3c6-e48848e944c4"
|
829 |
+
},
|
830 |
+
"source": [
|
831 |
+
"imagenet_templates = [\n",
|
832 |
+
" 'a bad photo of a {}.',\n",
|
833 |
+
" 'a photo of many {}.',\n",
|
834 |
+
" 'a sculpture of a {}.',\n",
|
835 |
+
" 'a photo of the hard to see {}.',\n",
|
836 |
+
" 'a low resolution photo of the {}.',\n",
|
837 |
+
" 'a rendering of a {}.',\n",
|
838 |
+
" 'graffiti of a {}.',\n",
|
839 |
+
" 'a bad photo of the {}.',\n",
|
840 |
+
" 'a cropped photo of the {}.',\n",
|
841 |
+
" 'a tattoo of a {}.',\n",
|
842 |
+
" 'the embroidered {}.',\n",
|
843 |
+
" 'a photo of a hard to see {}.',\n",
|
844 |
+
" 'a bright photo of a {}.',\n",
|
845 |
+
" 'a photo of a clean {}.',\n",
|
846 |
+
" 'a photo of a dirty {}.',\n",
|
847 |
+
" 'a dark photo of the {}.',\n",
|
848 |
+
" 'a drawing of a {}.',\n",
|
849 |
+
" 'a photo of my {}.',\n",
|
850 |
+
" 'the plastic {}.',\n",
|
851 |
+
" 'a photo of the cool {}.',\n",
|
852 |
+
" 'a close-up photo of a {}.',\n",
|
853 |
+
" 'a black and white photo of the {}.',\n",
|
854 |
+
" 'a painting of the {}.',\n",
|
855 |
+
" 'a painting of a {}.',\n",
|
856 |
+
" 'a pixelated photo of the {}.',\n",
|
857 |
+
" 'a sculpture of the {}.',\n",
|
858 |
+
" 'a bright photo of the {}.',\n",
|
859 |
+
" 'a cropped photo of a {}.',\n",
|
860 |
+
" 'a plastic {}.',\n",
|
861 |
+
" 'a photo of the dirty {}.',\n",
|
862 |
+
" 'a jpeg corrupted photo of a {}.',\n",
|
863 |
+
" 'a blurry photo of the {}.',\n",
|
864 |
+
" 'a photo of the {}.',\n",
|
865 |
+
" 'a good photo of the {}.',\n",
|
866 |
+
" 'a rendering of the {}.',\n",
|
867 |
+
" 'a {} in a video game.',\n",
|
868 |
+
" 'a photo of one {}.',\n",
|
869 |
+
" 'a doodle of a {}.',\n",
|
870 |
+
" 'a close-up photo of the {}.',\n",
|
871 |
+
" 'a photo of a {}.',\n",
|
872 |
+
" 'the origami {}.',\n",
|
873 |
+
" 'the {} in a video game.',\n",
|
874 |
+
" 'a sketch of a {}.',\n",
|
875 |
+
" 'a doodle of the {}.',\n",
|
876 |
+
" 'a origami {}.',\n",
|
877 |
+
" 'a low resolution photo of a {}.',\n",
|
878 |
+
" 'the toy {}.',\n",
|
879 |
+
" 'a rendition of the {}.',\n",
|
880 |
+
" 'a photo of the clean {}.',\n",
|
881 |
+
" 'a photo of a large {}.',\n",
|
882 |
+
" 'a rendition of a {}.',\n",
|
883 |
+
" 'a photo of a nice {}.',\n",
|
884 |
+
" 'a photo of a weird {}.',\n",
|
885 |
+
" 'a blurry photo of a {}.',\n",
|
886 |
+
" 'a cartoon {}.',\n",
|
887 |
+
" 'art of a {}.',\n",
|
888 |
+
" 'a sketch of the {}.',\n",
|
889 |
+
" 'a embroidered {}.',\n",
|
890 |
+
" 'a pixelated photo of a {}.',\n",
|
891 |
+
" 'itap of the {}.',\n",
|
892 |
+
" 'a jpeg corrupted photo of the {}.',\n",
|
893 |
+
" 'a good photo of a {}.',\n",
|
894 |
+
" 'a plushie {}.',\n",
|
895 |
+
" 'a photo of the nice {}.',\n",
|
896 |
+
" 'a photo of the small {}.',\n",
|
897 |
+
" 'a photo of the weird {}.',\n",
|
898 |
+
" 'the cartoon {}.',\n",
|
899 |
+
" 'art of the {}.',\n",
|
900 |
+
" 'a drawing of the {}.',\n",
|
901 |
+
" 'a photo of the large {}.',\n",
|
902 |
+
" 'a black and white photo of a {}.',\n",
|
903 |
+
" 'the plushie {}.',\n",
|
904 |
+
" 'a dark photo of a {}.',\n",
|
905 |
+
" 'itap of a {}.',\n",
|
906 |
+
" 'graffiti of the {}.',\n",
|
907 |
+
" 'a toy {}.',\n",
|
908 |
+
" 'itap of my {}.',\n",
|
909 |
+
" 'a photo of a cool {}.',\n",
|
910 |
+
" 'a photo of a small {}.',\n",
|
911 |
+
" 'a tattoo of the {}.',\n",
|
912 |
+
"]\n",
|
913 |
+
"\n",
|
914 |
+
"print(f\"{len(imagenet_classes)} classes, {len(imagenet_templates)} templates\")"
|
915 |
+
],
|
916 |
+
"execution_count": 9,
|
917 |
+
"outputs": [
|
918 |
+
{
|
919 |
+
"output_type": "stream",
|
920 |
+
"text": [
|
921 |
+
"1000 classes, 80 templates\n"
|
922 |
+
],
|
923 |
+
"name": "stdout"
|
924 |
+
}
|
925 |
+
]
|
926 |
+
},
|
927 |
+
{
|
928 |
+
"cell_type": "markdown",
|
929 |
+
"metadata": {
|
930 |
+
"id": "aRB5OzgpHwqQ"
|
931 |
+
},
|
932 |
+
"source": [
|
933 |
+
"A similar, intuition-guided trial and error based on the ImageNet training set was used for templates. This list is pretty haphazard and was gradually made / expanded over the course of about a year of the project and was revisited / tweaked every few months. A surprising / weird thing was adding templates intended to help ImageNet-R performance (specifying different possible renditions of an object) improved standard ImageNet accuracy too.\n",
|
934 |
+
"\n",
|
935 |
+
"After the 80 templates were \"locked\" for the paper, we ran sequential forward selection over the list of 80 templates. The search terminated after ensembling 7 templates and selected them in the order below.\n",
|
936 |
+
"\n",
|
937 |
+
"1. itap of a {}.\n",
|
938 |
+
"2. a bad photo of the {}.\n",
|
939 |
+
"3. a origami {}.\n",
|
940 |
+
"4. a photo of the large {}.\n",
|
941 |
+
"5. a {} in a video game.\n",
|
942 |
+
"6. art of the {}.\n",
|
943 |
+
"7. a photo of the small {}.\n",
|
944 |
+
"\n",
|
945 |
+
"Speculating, we think it's interesting to see different scales (large and small), a difficult view (a bad photo), and \"abstract\" versions (origami, video game, art), were all selected for, but we haven't studied this in any detail. This subset performs a bit better than the full 80 ensemble reported in the paper, especially for the smaller models."
|
946 |
+
]
|
947 |
+
},
|
948 |
+
{
|
949 |
+
"cell_type": "markdown",
|
950 |
+
"metadata": {
|
951 |
+
"id": "4W8ARJVqBJXs"
|
952 |
+
},
|
953 |
+
"source": [
|
954 |
+
"# Loading the Images\n",
|
955 |
+
"\n",
|
956 |
+
"The ILSVRC2012 datasets are no longer available for download publicly. We instead download the ImageNet-V2 dataset by [Recht et al.](https://arxiv.org/abs/1902.10811).\n",
|
957 |
+
"\n",
|
958 |
+
"If you have the ImageNet dataset downloaded, you can replace the dataset with the official torchvision loader, e.g.:\n",
|
959 |
+
"\n",
|
960 |
+
"```python\n",
|
961 |
+
"images = torchvision.datasets.ImageNet(\"path/to/imagenet\", split='val', transform=preprocess)\n",
|
962 |
+
"```"
|
963 |
+
]
|
964 |
+
},
|
965 |
+
{
|
966 |
+
"cell_type": "code",
|
967 |
+
"metadata": {
|
968 |
+
"colab": {
|
969 |
+
"base_uri": "https://localhost:8080/"
|
970 |
+
},
|
971 |
+
"id": "moHR4UlHKsDc",
|
972 |
+
"outputId": "178f6d0d-9a34-4cbc-c9c1-e7ce09927980"
|
973 |
+
},
|
974 |
+
"source": [
|
975 |
+
"! pip install git+https://github.com/modestyachts/ImageNetV2_pytorch\n",
|
976 |
+
"\n",
|
977 |
+
"from imagenetv2_pytorch import ImageNetV2Dataset\n",
|
978 |
+
"\n",
|
979 |
+
"images = ImageNetV2Dataset(transform=preprocess)\n",
|
980 |
+
"loader = torch.utils.data.DataLoader(images, batch_size=32, num_workers=16)"
|
981 |
+
],
|
982 |
+
"execution_count": 10,
|
983 |
+
"outputs": [
|
984 |
+
{
|
985 |
+
"output_type": "stream",
|
986 |
+
"text": [
|
987 |
+
"Collecting git+https://github.com/modestyachts/ImageNetV2_pytorch\n",
|
988 |
+
" Cloning https://github.com/modestyachts/ImageNetV2_pytorch to /tmp/pip-req-build-2fnslbyv\n",
|
989 |
+
" Running command git clone -q https://github.com/modestyachts/ImageNetV2_pytorch /tmp/pip-req-build-2fnslbyv\n",
|
990 |
+
"Building wheels for collected packages: imagenetv2-pytorch\n",
|
991 |
+
" Building wheel for imagenetv2-pytorch (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
|
992 |
+
" Created wheel for imagenetv2-pytorch: filename=imagenetv2_pytorch-0.1-cp36-none-any.whl size=2665 sha256=0978fc64026ab86ace52a9f3ebcef53331c43288433173c450a4b5ddcc197f31\n",
|
993 |
+
" Stored in directory: /tmp/pip-ephem-wheel-cache-4eewuaap/wheels/f7/09/0d/03ded955ce95b04c9590b999ae9be076bb5d8f389650aa2147\n",
|
994 |
+
"Successfully built imagenetv2-pytorch\n",
|
995 |
+
"Installing collected packages: imagenetv2-pytorch\n",
|
996 |
+
"Successfully installed imagenetv2-pytorch-0.1\n",
|
997 |
+
"Dataset matched-frequency not found on disk, downloading....\n"
|
998 |
+
],
|
999 |
+
"name": "stdout"
|
1000 |
+
},
|
1001 |
+
{
|
1002 |
+
"output_type": "stream",
|
1003 |
+
"text": [
|
1004 |
+
"100%|██████████| 1.26G/1.26G [00:35<00:00, 35.7MiB/s]\n"
|
1005 |
+
],
|
1006 |
+
"name": "stderr"
|
1007 |
+
},
|
1008 |
+
{
|
1009 |
+
"output_type": "stream",
|
1010 |
+
"text": [
|
1011 |
+
"Extracting....\n"
|
1012 |
+
],
|
1013 |
+
"name": "stdout"
|
1014 |
+
}
|
1015 |
+
]
|
1016 |
+
},
|
1017 |
+
{
|
1018 |
+
"cell_type": "markdown",
|
1019 |
+
"metadata": {
|
1020 |
+
"id": "fz6D-F-Wbrtp"
|
1021 |
+
},
|
1022 |
+
"source": [
|
1023 |
+
"# Creating zero-shot classifier weights"
|
1024 |
+
]
|
1025 |
+
},
|
1026 |
+
{
|
1027 |
+
"cell_type": "code",
|
1028 |
+
"metadata": {
|
1029 |
+
"colab": {
|
1030 |
+
"base_uri": "https://localhost:8080/",
|
1031 |
+
"height": 66,
|
1032 |
+
"referenced_widgets": [
|
1033 |
+
"4e3a3f83649f45f8bef3434980634664",
|
1034 |
+
"f066bdb766664c788ba1e9de8d311e22",
|
1035 |
+
"4e7a7427d28a4ae684e0be4548eb9944",
|
1036 |
+
"cc9dc019c1334a46b2558ffa6c0dd6e6",
|
1037 |
+
"285c877d4f644f3a8a58c4eb5948101c",
|
1038 |
+
"075d6545e02e419ca565589eb5ffc318",
|
1039 |
+
"53f9106c80e84d5b8c3ec96162d1db98",
|
1040 |
+
"19c57d99e7c44cbda508ce558fde435d"
|
1041 |
+
]
|
1042 |
+
},
|
1043 |
+
"id": "sRqDoz1Gbsii",
|
1044 |
+
"outputId": "5ab6c001-8a5e-42c9-ab46-4477a693229c"
|
1045 |
+
},
|
1046 |
+
"source": [
|
1047 |
+
"def zeroshot_classifier(classnames, templates):\n",
|
1048 |
+
" with torch.no_grad():\n",
|
1049 |
+
" zeroshot_weights = []\n",
|
1050 |
+
" for classname in tqdm(classnames):\n",
|
1051 |
+
" texts = [template.format(classname) for template in templates] #format with class\n",
|
1052 |
+
" texts = clip.tokenize(texts).cuda() #tokenize\n",
|
1053 |
+
" class_embeddings = model.encode_text(texts) #embed with text encoder\n",
|
1054 |
+
" class_embeddings /= class_embeddings.norm(dim=-1, keepdim=True)\n",
|
1055 |
+
" class_embedding = class_embeddings.mean(dim=0)\n",
|
1056 |
+
" class_embedding /= class_embedding.norm()\n",
|
1057 |
+
" zeroshot_weights.append(class_embedding)\n",
|
1058 |
+
" zeroshot_weights = torch.stack(zeroshot_weights, dim=1).cuda()\n",
|
1059 |
+
" return zeroshot_weights\n",
|
1060 |
+
"\n",
|
1061 |
+
"\n",
|
1062 |
+
"zeroshot_weights = zeroshot_classifier(imagenet_classes, imagenet_templates)"
|
1063 |
+
],
|
1064 |
+
"execution_count": 11,
|
1065 |
+
"outputs": [
|
1066 |
+
{
|
1067 |
+
"output_type": "display_data",
|
1068 |
+
"data": {
|
1069 |
+
"application/vnd.jupyter.widget-view+json": {
|
1070 |
+
"model_id": "4e3a3f83649f45f8bef3434980634664",
|
1071 |
+
"version_minor": 0,
|
1072 |
+
"version_major": 2
|
1073 |
+
},
|
1074 |
+
"text/plain": [
|
1075 |
+
"HBox(children=(FloatProgress(value=0.0, max=1000.0), HTML(value='')))"
|
1076 |
+
]
|
1077 |
+
},
|
1078 |
+
"metadata": {
|
1079 |
+
"tags": []
|
1080 |
+
}
|
1081 |
+
},
|
1082 |
+
{
|
1083 |
+
"output_type": "stream",
|
1084 |
+
"text": [
|
1085 |
+
"\n"
|
1086 |
+
],
|
1087 |
+
"name": "stdout"
|
1088 |
+
}
|
1089 |
+
]
|
1090 |
+
},
|
1091 |
+
{
|
1092 |
+
"cell_type": "markdown",
|
1093 |
+
"metadata": {
|
1094 |
+
"id": "1fZo7hG8iJP5"
|
1095 |
+
},
|
1096 |
+
"source": [
|
1097 |
+
"# Zero-shot prediction"
|
1098 |
+
]
|
1099 |
+
},
|
1100 |
+
{
|
1101 |
+
"cell_type": "code",
|
1102 |
+
"metadata": {
|
1103 |
+
"id": "j4kPSZoShQxN"
|
1104 |
+
},
|
1105 |
+
"source": [
|
1106 |
+
"def accuracy(output, target, topk=(1,)):\n",
|
1107 |
+
" pred = output.topk(max(topk), 1, True, True)[1].t()\n",
|
1108 |
+
" correct = pred.eq(target.view(1, -1).expand_as(pred))\n",
|
1109 |
+
" return [float(correct[:k].reshape(-1).float().sum(0, keepdim=True).cpu().numpy()) for k in topk]"
|
1110 |
+
],
|
1111 |
+
"execution_count": 12,
|
1112 |
+
"outputs": []
|
1113 |
+
},
|
1114 |
+
{
|
1115 |
+
"cell_type": "code",
|
1116 |
+
"metadata": {
|
1117 |
+
"colab": {
|
1118 |
+
"base_uri": "https://localhost:8080/",
|
1119 |
+
"height": 100,
|
1120 |
+
"referenced_widgets": [
|
1121 |
+
"fbb2b937b22049f5987f39f48c652a86",
|
1122 |
+
"0a1b6b76984349ccb36ca2fc4a4a0208",
|
1123 |
+
"c136afb47aa14ac2832093ee415c6f3e",
|
1124 |
+
"467a151e73744eccb199fe72aa352e5b",
|
1125 |
+
"f6d637c3fc3c46928d023441227130e5",
|
1126 |
+
"029e6eadacb8480193aab52ff073be8f",
|
1127 |
+
"30178355f76742898d37966b3875ef0a",
|
1128 |
+
"2e62544c03d64d6d92b94fcfaca2fc90"
|
1129 |
+
]
|
1130 |
+
},
|
1131 |
+
"id": "wKJ7YsdlkDXo",
|
1132 |
+
"outputId": "90e084fd-86bc-4a52-a06e-61bff7aa86e0"
|
1133 |
+
},
|
1134 |
+
"source": [
|
1135 |
+
"with torch.no_grad():\n",
|
1136 |
+
" top1, top5, n = 0., 0., 0.\n",
|
1137 |
+
" for i, (images, target) in enumerate(tqdm(loader)):\n",
|
1138 |
+
" images = images.cuda()\n",
|
1139 |
+
" target = target.cuda()\n",
|
1140 |
+
" \n",
|
1141 |
+
" # predict\n",
|
1142 |
+
" image_features = model.encode_image(images)\n",
|
1143 |
+
" image_features /= image_features.norm(dim=-1, keepdim=True)\n",
|
1144 |
+
" logits = 100. * image_features @ zeroshot_weights\n",
|
1145 |
+
"\n",
|
1146 |
+
" # measure accuracy\n",
|
1147 |
+
" acc1, acc5 = accuracy(logits, target, topk=(1, 5))\n",
|
1148 |
+
" top1 += acc1\n",
|
1149 |
+
" top5 += acc5\n",
|
1150 |
+
" n += images.size(0)\n",
|
1151 |
+
"\n",
|
1152 |
+
"top1 = (top1 / n) * 100\n",
|
1153 |
+
"top5 = (top5 / n) * 100 \n",
|
1154 |
+
"\n",
|
1155 |
+
"print(f\"Top-1 accuracy: {top1:.2f}\")\n",
|
1156 |
+
"print(f\"Top-5 accuracy: {top5:.2f}\")"
|
1157 |
+
],
|
1158 |
+
"execution_count": 13,
|
1159 |
+
"outputs": [
|
1160 |
+
{
|
1161 |
+
"output_type": "display_data",
|
1162 |
+
"data": {
|
1163 |
+
"application/vnd.jupyter.widget-view+json": {
|
1164 |
+
"model_id": "fbb2b937b22049f5987f39f48c652a86",
|
1165 |
+
"version_minor": 0,
|
1166 |
+
"version_major": 2
|
1167 |
+
},
|
1168 |
+
"text/plain": [
|
1169 |
+
"HBox(children=(FloatProgress(value=0.0, max=313.0), HTML(value='')))"
|
1170 |
+
]
|
1171 |
+
},
|
1172 |
+
"metadata": {
|
1173 |
+
"tags": []
|
1174 |
+
}
|
1175 |
+
},
|
1176 |
+
{
|
1177 |
+
"output_type": "stream",
|
1178 |
+
"text": [
|
1179 |
+
"\n",
|
1180 |
+
"Top-1 accuracy: 55.73\n",
|
1181 |
+
"Top-5 accuracy: 83.45\n"
|
1182 |
+
],
|
1183 |
+
"name": "stdout"
|
1184 |
+
}
|
1185 |
+
]
|
1186 |
+
}
|
1187 |
+
]
|
1188 |
+
}
|
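Following up on the template-selection note in the notebook above: a minimal sketch (not part of the notebook) of ensembling only the 7 forward-selected templates, assuming `zeroshot_classifier` and `imagenet_classes` are defined as in the notebook's cells:

```python
# Hypothetical sketch: build zero-shot classifier weights from only the 7 templates
# picked by sequential forward selection, instead of the full list of 80.
selected_templates = [
    'itap of a {}.',
    'a bad photo of the {}.',
    'a origami {}.',
    'a photo of the large {}.',
    'a {} in a video game.',
    'art of the {}.',
    'a photo of the small {}.',
]

# Reuses the notebook's zeroshot_classifier(), which fills each template with the
# class name, encodes the texts with CLIP, L2-normalizes, and averages per class.
zeroshot_weights_7 = zeroshot_classifier(imagenet_classes, selected_templates)
```

According to the notebook, this 7-template subset performs slightly better than the full 80-template ensemble, especially for the smaller models.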
CLIP/requirements.txt
ADDED
@@ -0,0 +1,5 @@
1 |
+
ftfy
|
2 |
+
regex
|
3 |
+
tqdm
|
4 |
+
torch
|
5 |
+
torchvision
|
CLIP/setup.py
ADDED
@@ -0,0 +1,21 @@
1 |
+
import os
|
2 |
+
|
3 |
+
import pkg_resources
|
4 |
+
from setuptools import setup, find_packages
|
5 |
+
|
6 |
+
setup(
|
7 |
+
name="clip",
|
8 |
+
py_modules=["clip"],
|
9 |
+
version="1.0",
|
10 |
+
description="",
|
11 |
+
author="OpenAI",
|
12 |
+
packages=find_packages(exclude=["tests*"]),
|
13 |
+
install_requires=[
|
14 |
+
str(r)
|
15 |
+
for r in pkg_resources.parse_requirements(
|
16 |
+
open(os.path.join(os.path.dirname(__file__), "requirements.txt"))
|
17 |
+
)
|
18 |
+
],
|
19 |
+
include_package_data=True,
|
20 |
+
extras_require={'dev': ['pytest']},
|
21 |
+
)
|
CLIP/tests/test_consistency.py
ADDED
@@ -0,0 +1,25 @@
1 |
+
import numpy as np
|
2 |
+
import pytest
|
3 |
+
import torch
|
4 |
+
from PIL import Image
|
5 |
+
|
6 |
+
import clip
|
7 |
+
|
8 |
+
|
9 |
+
@pytest.mark.parametrize('model_name', clip.available_models())
|
10 |
+
def test_consistency(model_name):
|
11 |
+
device = "cpu"
|
12 |
+
jit_model, transform = clip.load(model_name, device=device, jit=True)
|
13 |
+
py_model, _ = clip.load(model_name, device=device, jit=False)
|
14 |
+
|
15 |
+
image = transform(Image.open("CLIP.png")).unsqueeze(0).to(device)
|
16 |
+
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)
|
17 |
+
|
18 |
+
with torch.no_grad():
|
19 |
+
logits_per_image, _ = jit_model(image, text)
|
20 |
+
jit_probs = logits_per_image.softmax(dim=-1).cpu().numpy()
|
21 |
+
|
22 |
+
logits_per_image, _ = py_model(image, text)
|
23 |
+
py_probs = logits_per_image.softmax(dim=-1).cpu().numpy()
|
24 |
+
|
25 |
+
assert np.allclose(jit_probs, py_probs, atol=0.01, rtol=0.1)
|
app.py
CHANGED
@@ -1,6 +1,7 @@
|
|
1 |
import os
|
2 |
os.system("pip install gradio==2.4.6")
|
3 |
os.system('pip freeze')
|
|
|
4 |
import torch
|
5 |
torch.hub.download_url_to_file('https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fconfigs%2Fmodel.yaml&dl=1', 'vqgan_imagenet_f16_16384.yaml')
|
6 |
torch.hub.download_url_to_file('https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fckpts%2Flast.ckpt&dl=1', 'vqgan_imagenet_f16_16384.ckpt')
|
@@ -353,6 +354,7 @@ def inference(text, seed, step_size, max_iterations, width, height, init_image,
|
|
353 |
except KeyboardInterrupt:
|
354 |
pass
|
355 |
writer = imageio.get_writer('test.mp4', fps=20)
|
|
|
356 |
for im in all_frames:
|
357 |
writer.append_data(np.array(im))
|
358 |
writer.close()
|
@@ -392,4 +394,5 @@ gr.Interface(
|
|
392 |
['a cabin in the mountains unreal engine',98,0.6, 120, 280, 280, 'cabin.jpeg', 0.0, 'cabin.jpeg',1,1.0]
|
393 |
],
|
394 |
enable_queue=True
|
395 |
-
).launch(debug=True)
|
|
|
|
1 |
import os
|
2 |
os.system("pip install gradio==2.4.6")
|
3 |
os.system('pip freeze')
|
4 |
+
|
5 |
import torch
|
6 |
torch.hub.download_url_to_file('https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fconfigs%2Fmodel.yaml&dl=1', 'vqgan_imagenet_f16_16384.yaml')
|
7 |
torch.hub.download_url_to_file('https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fckpts%2Flast.ckpt&dl=1', 'vqgan_imagenet_f16_16384.ckpt')
|
|
|
354 |
except KeyboardInterrupt:
|
355 |
pass
|
356 |
writer = imageio.get_writer('test.mp4', fps=20)
|
357 |
+
|
358 |
for im in all_frames:
|
359 |
writer.append_data(np.array(im))
|
360 |
writer.close()
|
|
|
394 |
['a cabin in the mountains unreal engine',98,0.6, 120, 280, 280, 'cabin.jpeg', 0.0, 'cabin.jpeg',1,1.0]
|
395 |
],
|
396 |
enable_queue=True
|
397 |
+
).launch(debug=True)
|
398 |
+
|
requirements.txt
CHANGED
@@ -1,5 +1,3 @@
|
|
1 |
-
#git+https://github.com/openai/CLIP
|
2 |
-
#git+https://github.com/CompVis/taming-transformers
|
3 |
ftfy
|
4 |
regex
|
5 |
tqdm
|
@@ -15,10 +13,3 @@ Pillow
|
|
15 |
numpy
|
16 |
imageio
|
17 |
nvidia_ml_py3
|
18 |
-
transformers
|
19 |
-
wget
|
20 |
-
stegano
|
21 |
-
python-xmp-toolkit
|
22 |
-
imgtag
|
23 |
-
pillow==7.1.2
|
24 |
-
imageio-ffmpeg
|
1 |
ftfy
|
2 |
regex
|
3 |
tqdm
|
|
|
13 |
numpy
|
14 |
imageio
|
15 |
nvidia_ml_py3
|
steps/temp.txt
ADDED
File without changes
|
taming-transformers/License.txt
ADDED
@@ -0,0 +1,19 @@
1 |
+
Copyright (c) 2020 Patrick Esser and Robin Rombach and Björn Ommer
|
2 |
+
|
3 |
+
Permission is hereby granted, free of charge, to any person obtaining a copy
|
4 |
+
of this software and associated documentation files (the "Software"), to deal
|
5 |
+
in the Software without restriction, including without limitation the rights
|
6 |
+
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
7 |
+
copies of the Software, and to permit persons to whom the Software is
|
8 |
+
furnished to do so, subject to the following conditions:
|
9 |
+
|
10 |
+
The above copyright notice and this permission notice shall be included in all
|
11 |
+
copies or substantial portions of the Software.
|
12 |
+
|
13 |
+
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
14 |
+
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
15 |
+
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
|
16 |
+
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
|
17 |
+
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
|
18 |
+
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE
|
19 |
+
OR OTHER DEALINGS IN THE SOFTWARE./
|
taming-transformers/README.md
ADDED
@@ -0,0 +1,377 @@
1 |
+
# Taming Transformers for High-Resolution Image Synthesis
|
2 |
+
##### CVPR 2021 (Oral)
|
3 |
+
![teaser](assets/mountain.jpeg)
|
4 |
+
|
5 |
+
[**Taming Transformers for High-Resolution Image Synthesis**](https://compvis.github.io/taming-transformers/)<br/>
|
6 |
+
[Patrick Esser](https://github.com/pesser)\*,
|
7 |
+
[Robin Rombach](https://github.com/rromb)\*,
|
8 |
+
[Björn Ommer](https://hci.iwr.uni-heidelberg.de/Staff/bommer)<br/>
|
9 |
+
\* equal contribution
|
10 |
+
|
11 |
+
**tl;dr** We combine the efficiency of convolutional approaches with the expressivity of transformers by introducing a convolutional VQGAN, which learns a codebook of context-rich visual parts, whose composition is modeled with an autoregressive transformer.
|
12 |
+
|
13 |
+
![teaser](assets/teaser.png)
|
14 |
+
[arXiv](https://arxiv.org/abs/2012.09841) | [BibTeX](#bibtex) | [Project Page](https://compvis.github.io/taming-transformers/)
|
15 |
+
|
16 |
+
|
17 |
+
### News
|
18 |
+
- Thanks to [rom1504](https://github.com/rom1504) it is now easy to [train a VQGAN on your own datasets](#training-on-custom-data).
|
19 |
+
- Included a bugfix for the quantizer. For backward compatibility it is
|
20 |
+
disabled by default (which corresponds to always training with `beta=1.0`).
|
21 |
+
Use `legacy=False` in the quantizer config to enable it.
|
22 |
+
Thanks [richcmwang](https://github.com/richcmwang) and [wcshin-git](https://github.com/wcshin-git)!
|
23 |
+
- Our paper received an update: See https://arxiv.org/abs/2012.09841v3 and the corresponding changelog.
|
24 |
+
- Added a pretrained, [1.4B transformer model](https://k00.fr/s511rwcv) trained for class-conditional ImageNet synthesis, which obtains state-of-the-art FID scores among autoregressive approaches and outperforms BigGAN.
|
25 |
+
- Added pretrained, unconditional models on [FFHQ](https://k00.fr/yndvfu95) and [CelebA-HQ](https://k00.fr/2xkmielf).
|
26 |
+
- Added accelerated sampling via caching of keys/values in the self-attention operation, used in `scripts/sample_fast.py`.
|
27 |
+
- Added a checkpoint of a [VQGAN](https://heibox.uni-heidelberg.de/d/2e5662443a6b4307b470/) trained with f8 compression and Gumbel-Quantization.
|
28 |
+
See also our updated [reconstruction notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb).
|
29 |
+
- We added a [colab notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb) which compares two VQGANs and OpenAI's [DALL-E](https://github.com/openai/DALL-E). See also [this section](#more-resources).
|
30 |
+
- We now include an overview of pretrained models in [Tab.1](#overview-of-pretrained-models). We added models for [COCO](#coco) and [ADE20k](#ade20k).
|
31 |
+
- The streamlit demo now supports image completions.
|
32 |
+
- We now include a couple of examples from the D-RIN dataset so you can run the
|
33 |
+
[D-RIN demo](#d-rin) without preparing the dataset first.
|
34 |
+
- You can now jump right into sampling with our [Colab quickstart notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/taming-transformers.ipynb).
|
35 |
+
|
36 |
+
## Requirements
|
37 |
+
A suitable [conda](https://conda.io/) environment named `taming` can be created
|
38 |
+
and activated with:
|
39 |
+
|
40 |
+
```
|
41 |
+
conda env create -f environment.yaml
|
42 |
+
conda activate taming
|
43 |
+
```
|
44 |
+
## Overview of pretrained models
|
45 |
+
The following table provides an overview of all models that are currently available.
|
46 |
+
FID scores were evaluated using [torch-fidelity](https://github.com/toshas/torch-fidelity).
|
47 |
+
For reference, we also include a link to the recently released autoencoder of the [DALL-E](https://github.com/openai/DALL-E) model.
|
48 |
+
See the corresponding [colab
|
49 |
+
notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb)
|
50 |
+
for a comparison and discussion of reconstruction capabilities.
|
51 |
+
|
52 |
+
| Dataset | FID vs train | FID vs val | Link | Samples (256x256) | Comments
|
53 |
+
| ------------- | ------------- | ------------- |------------- | ------------- |------------- |
|
54 |
+
| FFHQ (f=16) | 9.6 | -- | [ffhq_transformer](https://k00.fr/yndvfu95) | [ffhq_samples](https://k00.fr/j626x093) |
|
55 |
+
| CelebA-HQ (f=16) | 10.2 | -- | [celebahq_transformer](https://k00.fr/2xkmielf) | [celebahq_samples](https://k00.fr/j626x093) |
|
56 |
+
| ADE20K (f=16) | -- | 35.5 | [ade20k_transformer](https://k00.fr/ot46cksa) | [ade20k_samples.zip](https://heibox.uni-heidelberg.de/f/70bb78cbaf844501b8fb/) [2k] | evaluated on val split (2k images)
|
57 |
+
| COCO-Stuff (f=16) | -- | 20.4 | [coco_transformer](https://k00.fr/2zz6i2ce) | [coco_samples.zip](https://heibox.uni-heidelberg.de/f/a395a9be612f4a7a8054/) [5k] | evaluated on val split (5k images)
|
58 |
+
| ImageNet (cIN) (f=16) | 15.98/15.78/6.59/5.88/5.20 | -- | [cin_transformer](https://k00.fr/s511rwcv) | [cin_samples](https://k00.fr/j626x093) | different decoding hyperparameters |
|
59 |
+
| | | | || |
|
60 |
+
| FacesHQ (f=16) | -- | -- | [faceshq_transformer](https://k00.fr/qqfl2do8)
|
61 |
+
| S-FLCKR (f=16) | -- | -- | [sflckr](https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/)
|
62 |
+
| D-RIN (f=16) | -- | -- | [drin_transformer](https://k00.fr/39jcugc5)
|
63 |
+
| | | | | || |
|
64 |
+
| VQGAN ImageNet (f=16), 1024 | 10.54 | 7.94 | [vqgan_imagenet_f16_1024](https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/) | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs.
|
65 |
+
| VQGAN ImageNet (f=16), 16384 | 7.41 | 4.98 |[vqgan_imagenet_f16_16384](https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/) | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs.
|
66 |
+
| VQGAN OpenImages (f=8), 8192, GumbelQuantization | 3.24 | 1.49 |[vqgan_gumbel_f8](https://heibox.uni-heidelberg.de/d/2e5662443a6b4307b470/) | --- | Reconstruction-FIDs.
|
67 |
+
| | | | | || |
|
68 |
+
| DALL-E dVAE (f=8), 8192, GumbelQuantization | 33.88 | 32.01 | https://github.com/openai/DALL-E | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs.
|
69 |
+
|
70 |
+
|
71 |
+
## Running pretrained models
|
72 |
+
|
73 |
+
The commands below will start a streamlit demo which supports sampling at
|
74 |
+
different resolutions and image completions. To run a non-interactive version
|
75 |
+
of the sampling process, replace `streamlit run scripts/sample_conditional.py --`
|
76 |
+
by `python scripts/make_samples.py --outdir <path_to_write_samples_to>` and
|
77 |
+
keep the remaining command line arguments.
|
78 |
+
|
79 |
+
To sample from unconditional or class-conditional models,
|
80 |
+
run `python scripts/sample_fast.py -r <path/to/config_and_checkpoint>`.
|
81 |
+
We describe below how to use this script to sample from the ImageNet, FFHQ, and CelebA-HQ models,
|
82 |
+
respectively.
|
83 |
+
|
84 |
+
### S-FLCKR
|
85 |
+
![teaser](assets/sunset_and_ocean.jpg)
|
86 |
+
|
87 |
+
You can also [run this model in a Colab
|
88 |
+
notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/taming-transformers.ipynb),
|
89 |
+
which includes all necessary steps to start sampling.
|
90 |
+
|
91 |
+
Download the
|
92 |
+
[2020-11-09T13-31-51_sflckr](https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/)
|
93 |
+
folder and place it into `logs`. Then, run
|
94 |
+
```
|
95 |
+
streamlit run scripts/sample_conditional.py -- -r logs/2020-11-09T13-31-51_sflckr/
|
96 |
+
```
|
97 |
+
|
98 |
+
### ImageNet
|
99 |
+
![teaser](assets/imagenet.png)
|
100 |
+
|
101 |
+
Download the [2021-04-03T19-39-50_cin_transformer](https://k00.fr/s511rwcv)
|
102 |
+
folder and place it into logs. Sampling from the class-conditional ImageNet
|
103 |
+
model does not require any data preparation. To produce 50 samples for each of
|
104 |
+
the 1000 classes of ImageNet, with k=600 for top-k sampling, p=0.92 for nucleus
|
105 |
+
sampling and temperature t=1.0, run
|
106 |
+
|
107 |
+
```
|
108 |
+
python scripts/sample_fast.py -r logs/2021-04-03T19-39-50_cin_transformer/ -n 50 -k 600 -t 1.0 -p 0.92 --batch_size 25
|
109 |
+
```
|
110 |
+
|
111 |
+
To restrict the model to certain classes, provide them via the `--classes` argument, separated by
|
112 |
+
commas. For example, to sample 50 *ostriches*, *border collies* and *whiskey jugs*, run
|
113 |
+
|
114 |
+
```
|
115 |
+
python scripts/sample_fast.py -r logs/2021-04-03T19-39-50_cin_transformer/ -n 50 -k 600 -t 1.0 -p 0.92 --batch_size 25 --classes 9,232,901
|
116 |
+
```
|
117 |
+
We recommend experimenting with the autoregressive decoding parameters (top-k, top-p and temperature) for best results.
|
118 |
+
|
119 |
+
### FFHQ/CelebA-HQ
|
120 |
+
|
121 |
+
Download the [2021-04-23T18-19-01_ffhq_transformer](https://k00.fr/yndvfu95) and
|
122 |
+
[2021-04-23T18-11-19_celebahq_transformer](https://k00.fr/2xkmielf)
|
123 |
+
folders and place them into logs.
|
124 |
+
Again, sampling from these unconditional models does not require any data preparation.
|
125 |
+
To produce 50000 samples, with k=250 for top-k sampling,
|
126 |
+
p=1.0 for nucleus sampling and temperature t=1.0, run
|
127 |
+
|
128 |
+
```
|
129 |
+
python scripts/sample_fast.py -r logs/2021-04-23T18-19-01_ffhq_transformer/
|
130 |
+
```
|
131 |
+
for FFHQ and
|
132 |
+
|
133 |
+
```
|
134 |
+
python scripts/sample_fast.py -r logs/2021-04-23T18-11-19_celebahq_transformer/
|
135 |
+
```
|
136 |
+
to sample from the CelebA-HQ model.
|
137 |
+
For both models it can be advantageous to vary the top-k/top-p parameters for sampling.
|
138 |
+
|
139 |
+
### FacesHQ
|
140 |
+
![teaser](assets/faceshq.jpg)
|
141 |
+
|
142 |
+
Download [2020-11-13T21-41-45_faceshq_transformer](https://k00.fr/qqfl2do8) and
|
143 |
+
place it into `logs`. Follow the data preparation steps for
|
144 |
+
[CelebA-HQ](#celeba-hq) and [FFHQ](#ffhq). Run
|
145 |
+
```
|
146 |
+
streamlit run scripts/sample_conditional.py -- -r logs/2020-11-13T21-41-45_faceshq_transformer/
|
147 |
+
```
|
148 |
+
|
149 |
+
### D-RIN
|
150 |
+
![teaser](assets/drin.jpg)
|
151 |
+
|
152 |
+
Download [2020-11-20T12-54-32_drin_transformer](https://k00.fr/39jcugc5) and
|
153 |
+
place it into `logs`. To run the demo on a couple of example depth maps
|
154 |
+
included in the repository, run
|
155 |
+
|
156 |
+
```
|
157 |
+
streamlit run scripts/sample_conditional.py -- -r logs/2020-11-20T12-54-32_drin_transformer/ --ignore_base_data data="{target: main.DataModuleFromConfig, params: {batch_size: 1, validation: {target: taming.data.imagenet.DRINExamples}}}"
|
158 |
+
```
|
159 |
+
|
160 |
+
To run the demo on the complete validation set, first follow the data preparation steps for
|
161 |
+
[ImageNet](#imagenet) and then run
|
162 |
+
```
|
163 |
+
streamlit run scripts/sample_conditional.py -- -r logs/2020-11-20T12-54-32_drin_transformer/
|
164 |
+
```
|
165 |
+
|
166 |
+
### COCO
|
167 |
+
Download [2021-01-20T16-04-20_coco_transformer](https://k00.fr/2zz6i2ce) and
|
168 |
+
place it into `logs`. To run the demo on a couple of example segmentation maps
|
169 |
+
included in the repository, run
|
170 |
+
|
171 |
+
```
|
172 |
+
streamlit run scripts/sample_conditional.py -- -r logs/2021-01-20T16-04-20_coco_transformer/ --ignore_base_data data="{target: main.DataModuleFromConfig, params: {batch_size: 1, validation: {target: taming.data.coco.Examples}}}"
|
173 |
+
```
|
174 |
+
|
175 |
+
### ADE20k
|
176 |
+
Download [2020-11-20T21-45-44_ade20k_transformer](https://k00.fr/ot46cksa) and
|
177 |
+
place it into `logs`. To run the demo on a couple of example segmentation maps
|
178 |
+
included in the repository, run
|
179 |
+
|
180 |
+
```
|
181 |
+
streamlit run scripts/sample_conditional.py -- -r logs/2020-11-20T21-45-44_ade20k_transformer/ --ignore_base_data data="{target: main.DataModuleFromConfig, params: {batch_size: 1, validation: {target: taming.data.ade20k.Examples}}}"
|
182 |
+
```
|
183 |
+
|
184 |
+
## Training on custom data
|
185 |
+
|
186 |
+
Training on your own dataset can be beneficial to get better tokens and hence better images for your domain.
|
187 |
+
These are the steps to follow to make this work:
|
188 |
+
1. install the repo with `conda env create -f environment.yaml`, `conda activate taming` and `pip install -e .`
|
189 |
+
2. put your .jpg files in a folder `your_folder`
|
190 |
+
3. create two text files, `xx_train.txt` and `xx_test.txt`, that point to the files in your training and test set respectively (for example `find $(pwd)/your_folder -name "*.jpg" > train.txt`)
|
191 |
+
4. adapt `configs/custom_vqgan.yaml` to point to these two files (see the example snippet after this list)
|
192 |
+
5. run `python main.py --base configs/custom_vqgan.yaml -t True --gpus 0,1` to
|
193 |
+
train on two GPUs. Use `--gpus 0,` (with a trailing comma) to train on a single GPU.
|
194 |
+
|
195 |
+
## Data Preparation
|
196 |
+
|
197 |
+
### ImageNet
|
198 |
+
The code will try to download (through [Academic
|
199 |
+
Torrents](http://academictorrents.com/)) and prepare ImageNet the first time it
|
200 |
+
is used. However, since ImageNet is quite large, this requires a lot of disk
|
201 |
+
space and time. If you already have ImageNet on your disk, you can speed things
|
202 |
+
up by putting the data into
|
203 |
+
`${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/` (which defaults to
|
204 |
+
`~/.cache/autoencoders/data/ILSVRC2012_{split}/data/`), where `{split}` is one
|
205 |
+
of `train`/`validation`. It should have the following structure:
|
206 |
+
|
207 |
+
```
|
208 |
+
${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/
|
209 |
+
├── n01440764
|
210 |
+
│ ├── n01440764_10026.JPEG
|
211 |
+
│ ├── n01440764_10027.JPEG
|
212 |
+
│ ├── ...
|
213 |
+
├── n01443537
|
214 |
+
│ ├── n01443537_10007.JPEG
|
215 |
+
│ ├── n01443537_10014.JPEG
|
216 |
+
│ ├── ...
|
217 |
+
├── ...
|
218 |
+
```
|
219 |
+
|
220 |
+
If you haven't extracted the data, you can also place
|
221 |
+
`ILSVRC2012_img_train.tar`/`ILSVRC2012_img_val.tar` (or symlinks to them) into
|
222 |
+
`${XDG_CACHE}/autoencoders/data/ILSVRC2012_train/` /
|
223 |
+
`${XDG_CACHE}/autoencoders/data/ILSVRC2012_validation/`, which will then be
|
224 |
+
extracted into the above structure without downloading it again. Note that this
|
225 |
+
will only happen if neither a folder
|
226 |
+
`${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/` nor a file
|
227 |
+
`${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/.ready` exist. Remove them
|
228 |
+
if you want to force running the dataset preparation again.
|
229 |
+
|
230 |
+
You will then need to prepare the depth data using
|
231 |
+
[MiDaS](https://github.com/intel-isl/MiDaS). Create a symlink
|
232 |
+
`data/imagenet_depth` pointing to a folder with two subfolders `train` and
|
233 |
+
`val`, each mirroring the structure of the corresponding ImageNet folder
|
234 |
+
described above and containing a `png` file for each of ImageNet's `JPEG`
|
235 |
+
files. The `png` encodes `float32` depth values obtained from MiDaS as RGBA
|
236 |
+
images. We provide the script `scripts/extract_depth.py` to generate this data.
|
237 |
+
**Please note** that this script uses [MiDaS via PyTorch
|
238 |
+
Hub](https://pytorch.org/hub/intelisl_midas_v2/). When we prepared the data,
|
239 |
+
the hub provided the [MiDaS
|
240 |
+
v2.0](https://github.com/intel-isl/MiDaS/releases/tag/v2) version, but now it
|
241 |
+
provides a v2.1 version. We haven't tested our models with depth maps obtained
|
242 |
+
via v2.1 and if you want to make sure that things work as expected, you must
|
243 |
+
adjust the script to make sure it explicitly uses
|
244 |
+
[v2.0](https://github.com/intel-isl/MiDaS/releases/tag/v2)!
|
245 |
+
|
246 |
+
### CelebA-HQ
|
247 |
+
Create a symlink `data/celebahq` pointing to a folder containing the `.npy`
|
248 |
+
files of CelebA-HQ (instructions to obtain them can be found in the [PGGAN
|
249 |
+
repository](https://github.com/tkarras/progressive_growing_of_gans)).
|
250 |
+
|
251 |
+
### FFHQ
|
252 |
+
Create a symlink `data/ffhq` pointing to the `images1024x1024` folder obtained
|
253 |
+
from the [FFHQ repository](https://github.com/NVlabs/ffhq-dataset).
|
254 |
+
|
255 |
+
### S-FLCKR
|
256 |
+
Unfortunately, we are not allowed to distribute the images we collected for the
|
257 |
+
S-FLCKR dataset and can therefore only give a description of how it was produced.
|
258 |
+
There are many resources on [collecting images from the
|
259 |
+
web](https://github.com/adrianmrit/flickrdatasets) to get started.
|
260 |
+
We collected sufficiently large images from [flickr](https://www.flickr.com)
|
261 |
+
(see `data/flickr_tags.txt` for a full list of tags used to find images)
|
262 |
+
and various [subreddits](https://www.reddit.com/r/sfwpornnetwork/wiki/network)
|
263 |
+
(see `data/subreddits.txt` for all subreddits that were used).
|
264 |
+
Overall, we collected 107625 images, and split them randomly into 96861
|
265 |
+
training images and 10764 validation images. We then obtained segmentation
|
266 |
+
masks for each image using [DeepLab v2](https://arxiv.org/abs/1606.00915)
|
267 |
+
trained on [COCO-Stuff](https://arxiv.org/abs/1612.03716). We used a [PyTorch
|
268 |
+
reimplementation](https://github.com/kazuto1011/deeplab-pytorch) and include an
|
269 |
+
example script for this process in `scripts/extract_segmentation.py`.
|
270 |
+
|
271 |
+
### COCO
|
272 |
+
Create a symlink `data/coco` containing the images from the 2017 split in
|
273 |
+
`train2017` and `val2017`, and their annotations in `annotations`. Files can be
|
274 |
+
obtained from the [COCO webpage](https://cocodataset.org/). In addition, we use
|
275 |
+
the [Stuff+thing PNG-style annotations on COCO 2017
|
276 |
+
trainval](http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip)
|
277 |
+
annotations from [COCO-Stuff](https://github.com/nightrome/cocostuff), which
|
278 |
+
should be placed under `data/cocostuffthings`.
|
279 |
+
|
280 |
+
### ADE20k
|
281 |
+
Create a symlink `data/ade20k_root` containing the contents of
|
282 |
+
[ADEChallengeData2016.zip](http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip)
|
283 |
+
from the [MIT Scene Parsing Benchmark](http://sceneparsing.csail.mit.edu/).
|
284 |
+
|
285 |
+
## Training models
|
286 |
+
|
287 |
+
### FacesHQ
|
288 |
+
|
289 |
+
Train a VQGAN with
|
290 |
+
```
|
291 |
+
python main.py --base configs/faceshq_vqgan.yaml -t True --gpus 0,
|
292 |
+
```
|
293 |
+
|
294 |
+
Then, adjust the checkpoint path of the config key
|
295 |
+
`model.params.first_stage_config.params.ckpt_path` in
|
296 |
+
`configs/faceshq_transformer.yaml` (or download
|
297 |
+
[2020-11-09T13-33-36_faceshq_vqgan](https://k00.fr/uxy5usa9) and place into `logs`, which
|
298 |
+
corresponds to the preconfigured checkpoint path), then run
|
299 |
+
```
|
300 |
+
python main.py --base configs/faceshq_transformer.yaml -t True --gpus 0,
|
301 |
+
```
|
302 |
+
|
303 |
+
### D-RIN
|
304 |
+
|
305 |
+
Train a VQGAN on ImageNet with
|
306 |
+
```
|
307 |
+
python main.py --base configs/imagenet_vqgan.yaml -t True --gpus 0,
|
308 |
+
```
|
309 |
+
|
310 |
+
or download a pretrained one from [2020-09-23T17-56-33_imagenet_vqgan](https://k00.fr/u0j2dtac)
|
311 |
+
and place under `logs`. If you trained your own, adjust the path in the config
|
312 |
+
key `model.params.first_stage_config.params.ckpt_path` of
|
313 |
+
`configs/drin_transformer.yaml`.
|
314 |
+
|
315 |
+
Train a VQGAN on Depth Maps of ImageNet with
|
316 |
+
```
|
317 |
+
python main.py --base configs/imagenetdepth_vqgan.yaml -t True --gpus 0,
|
318 |
+
```
|
319 |
+
|
320 |
+
or download a pretrained one from [2020-11-03T15-34-24_imagenetdepth_vqgan](https://k00.fr/55rlxs6i)
|
321 |
+
and place under `logs`. If you trained your own, adjust the path in the config
|
322 |
+
key `model.params.cond_stage_config.params.ckpt_path` of
|
323 |
+
`configs/drin_transformer.yaml`.
|
324 |
+
|
325 |
+
To train the transformer, run
|
326 |
+
```
|
327 |
+
python main.py --base configs/drin_transformer.yaml -t True --gpus 0,
|
328 |
+
```
|
329 |
+
|
330 |
+
## More Resources
|
331 |
+
### Comparing Different First Stage Models
|
332 |
+
The reconstruction and compression capabilities of different first stage models can be analyzed in this [colab notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb).
|
333 |
+
In particular, the notebook compares two VQGANs with a downsampling factor of f=16 each and codebook sizes of 1024 and 16384,
|
334 |
+
a VQGAN with f=8 and 8192 codebook entries and the discrete autoencoder of OpenAI's [DALL-E](https://github.com/openai/DALL-E) (which has f=8 and 8192
|
335 |
+
codebook entries).
|
336 |
+
![firststages1](assets/first_stage_squirrels.png)
|
337 |
+
![firststages2](assets/first_stage_mushrooms.png)
|
338 |
+
|
339 |
+
### Other
|
340 |
+
- A [video summary](https://www.youtube.com/watch?v=o7dqGcLDf0A&feature=emb_imp_woyt) by [Two Minute Papers](https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg).
|
341 |
+
- A [video summary](https://www.youtube.com/watch?v=-wDSDtIAyWQ) by [Gradient Dude](https://www.youtube.com/c/GradientDude/about).
|
342 |
+
- A [weights and biases report summarizing the paper](https://wandb.ai/ayush-thakur/taming-transformer/reports/-Overview-Taming-Transformers-for-High-Resolution-Image-Synthesis---Vmlldzo0NjEyMTY)
|
343 |
+
by [ayulockin](https://github.com/ayulockin).
|
344 |
+
- A [video summary](https://www.youtube.com/watch?v=JfUTd8fjtX8&feature=emb_imp_woyt) by [What's AI](https://www.youtube.com/channel/UCUzGQrN-lyyc0BWTYoJM_Sg).
|
345 |
+
- Take a look at [ak9250's notebook](https://github.com/ak9250/taming-transformers/blob/master/tamingtransformerscolab.ipynb) if you want to run the streamlit demos on Colab.
|
346 |
+
|
347 |
+
### Text-to-Image Optimization via CLIP
|
348 |
+
VQGAN has been successfully used as an image generator guided by the [CLIP](https://github.com/openai/CLIP) model, both for pure image generation
|
349 |
+
from scratch and image-to-image translation. We recommend the following notebooks/videos/resources:
|
350 |
+
|
351 |
+
- [Advadnouns](https://twitter.com/advadnoun/status/1389316507134357506) Patreon and corresponding LatentVision notebooks: https://www.patreon.com/patronizeme
|
352 |
+
- The [notebook]( https://colab.research.google.com/drive/1L8oL-vLJXVcRzCFbPwOoMkPKJ8-aYdPN) of [Rivers Have Wings](https://twitter.com/RiversHaveWings).
|
353 |
+
- A [video](https://www.youtube.com/watch?v=90QDe6DQXF4&t=12s) explanation by [Dot CSV](https://www.youtube.com/channel/UCy5znSnfMsDwaLlROnZ7Qbg) (in Spanish, but English subtitles are available)
|
354 |
+
|
355 |
+
![txt2img](assets/birddrawnbyachild.png)
|
356 |
+
|
357 |
+
Text prompt: *'A bird drawn by a child'*
|
358 |
+
|
359 |
+
## Shout-outs
|
360 |
+
Thanks to everyone who makes their code and models available. In particular,
|
361 |
+
|
362 |
+
- The architecture of our VQGAN is inspired by [Denoising Diffusion Probabilistic Models](https://github.com/hojonathanho/diffusion)
|
363 |
+
- The very hackable transformer implementation [minGPT](https://github.com/karpathy/minGPT)
|
364 |
+
- The good ol' [PatchGAN](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix) and [Learned Perceptual Similarity (LPIPS)](https://github.com/richzhang/PerceptualSimilarity)
|
365 |
+
|
366 |
+
## BibTeX
|
367 |
+
|
368 |
+
```
|
369 |
+
@misc{esser2020taming,
|
370 |
+
title={Taming Transformers for High-Resolution Image Synthesis},
|
371 |
+
author={Patrick Esser and Robin Rombach and Björn Ommer},
|
372 |
+
year={2020},
|
373 |
+
eprint={2012.09841},
|
374 |
+
archivePrefix={arXiv},
|
375 |
+
primaryClass={cs.CV}
|
376 |
+
}
|
377 |
+
```
|
taming-transformers/assets/birddrawnbyachild.png
ADDED
taming-transformers/assets/drin.jpg
ADDED
taming-transformers/assets/faceshq.jpg
ADDED
taming-transformers/assets/first_stage_mushrooms.png
ADDED
taming-transformers/assets/first_stage_squirrels.png
ADDED
taming-transformers/assets/imagenet.png
ADDED
taming-transformers/assets/lake_in_the_mountains.png
ADDED
taming-transformers/assets/mountain.jpeg
ADDED
taming-transformers/assets/stormy.jpeg
ADDED
taming-transformers/assets/sunset_and_ocean.jpg
ADDED
taming-transformers/assets/teaser.png
ADDED
taming-transformers/configs/coco_cond_stage.yaml
ADDED
@@ -0,0 +1,49 @@
1 |
+
model:
|
2 |
+
base_learning_rate: 4.5e-06
|
3 |
+
target: taming.models.vqgan.VQSegmentationModel
|
4 |
+
params:
|
5 |
+
embed_dim: 256
|
6 |
+
n_embed: 1024
|
7 |
+
image_key: "segmentation"
|
8 |
+
n_labels: 183
|
9 |
+
ddconfig:
|
10 |
+
double_z: false
|
11 |
+
z_channels: 256
|
12 |
+
resolution: 256
|
13 |
+
in_channels: 183
|
14 |
+
out_ch: 183
|
15 |
+
ch: 128
|
16 |
+
ch_mult:
|
17 |
+
- 1
|
18 |
+
- 1
|
19 |
+
- 2
|
20 |
+
- 2
|
21 |
+
- 4
|
22 |
+
num_res_blocks: 2
|
23 |
+
attn_resolutions:
|
24 |
+
- 16
|
25 |
+
dropout: 0.0
|
26 |
+
|
27 |
+
lossconfig:
|
28 |
+
target: taming.modules.losses.segmentation.BCELossWithQuant
|
29 |
+
params:
|
30 |
+
codebook_weight: 1.0
|
31 |
+
|
32 |
+
data:
|
33 |
+
target: cutlit.DataModuleFromConfig
|
34 |
+
params:
|
35 |
+
batch_size: 12
|
36 |
+
train:
|
37 |
+
target: taming.data.coco.CocoImagesAndCaptionsTrain
|
38 |
+
params:
|
39 |
+
size: 296
|
40 |
+
crop_size: 256
|
41 |
+
onehot_segmentation: true
|
42 |
+
use_stuffthing: true
|
43 |
+
validation:
|
44 |
+
target: taming.data.coco.CocoImagesAndCaptionsTrain
|
45 |
+
params:
|
46 |
+
size: 256
|
47 |
+
crop_size: 256
|
48 |
+
onehot_segmentation: true
|
49 |
+
use_stuffthing: true
|
taming-transformers/configs/custom_vqgan.yaml
ADDED
@@ -0,0 +1,43 @@
1 |
+
model:
|
2 |
+
base_learning_rate: 4.5e-6
|
3 |
+
target: taming.models.vqgan.VQModel
|
4 |
+
params:
|
5 |
+
embed_dim: 256
|
6 |
+
n_embed: 1024
|
7 |
+
ddconfig:
|
8 |
+
double_z: False
|
9 |
+
z_channels: 256
|
10 |
+
resolution: 256
|
11 |
+
in_channels: 3
|
12 |
+
out_ch: 3
|
13 |
+
ch: 128
|
14 |
+
ch_mult: [ 1,1,2,2,4] # num_down = len(ch_mult)-1
|
15 |
+
num_res_blocks: 2
|
16 |
+
attn_resolutions: [16]
|
17 |
+
dropout: 0.0
|
18 |
+
|
19 |
+
lossconfig:
|
20 |
+
target: taming.modules.losses.vqperceptual.VQLPIPSWithDiscriminator
|
21 |
+
params:
|
22 |
+
disc_conditional: False
|
23 |
+
disc_in_channels: 3
|
24 |
+
disc_start: 10000
|
25 |
+
disc_weight: 0.8
|
26 |
+
codebook_weight: 1.0
|
27 |
+
|
28 |
+
data:
|
29 |
+
target: main.DataModuleFromConfig
|
30 |
+
params:
|
31 |
+
batch_size: 5
|
32 |
+
num_workers: 8
|
33 |
+
train:
|
34 |
+
target: taming.data.custom.CustomTrain
|
35 |
+
params:
|
36 |
+
training_images_list_file: some/training.txt
|
37 |
+
size: 256
|
38 |
+
validation:
|
39 |
+
target: taming.data.custom.CustomTest
|
40 |
+
params:
|
41 |
+
test_images_list_file: some/test.txt
|
42 |
+
size: 256
|
43 |
+
|
taming-transformers/configs/drin_transformer.yaml
ADDED
@@ -0,0 +1,77 @@
1 |
+
model:
|
2 |
+
base_learning_rate: 4.5e-06
|
3 |
+
target: taming.models.cond_transformer.Net2NetTransformer
|
4 |
+
params:
|
5 |
+
cond_stage_key: depth
|
6 |
+
transformer_config:
|
7 |
+
target: taming.modules.transformer.mingpt.GPT
|
8 |
+
params:
|
9 |
+
vocab_size: 1024
|
10 |
+
block_size: 512
|
11 |
+
n_layer: 24
|
12 |
+
n_head: 16
|
13 |
+
n_embd: 1024
|
14 |
+
first_stage_config:
|
15 |
+
target: taming.models.vqgan.VQModel
|
16 |
+
params:
|
17 |
+
ckpt_path: logs/2020-09-23T17-56-33_imagenet_vqgan/checkpoints/last.ckpt
|
18 |
+
embed_dim: 256
|
19 |
+
n_embed: 1024
|
20 |
+
ddconfig:
|
21 |
+
double_z: false
|
22 |
+
z_channels: 256
|
23 |
+
resolution: 256
|
24 |
+
in_channels: 3
|
25 |
+
out_ch: 3
|
26 |
+
ch: 128
|
27 |
+
ch_mult:
|
28 |
+
- 1
|
29 |
+
- 1
|
30 |
+
- 2
|
31 |
+
- 2
|
32 |
+
- 4
|
33 |
+
num_res_blocks: 2
|
34 |
+
attn_resolutions:
|
35 |
+
- 16
|
36 |
+
dropout: 0.0
|
37 |
+
lossconfig:
|
38 |
+
target: taming.modules.losses.DummyLoss
|
39 |
+
cond_stage_config:
|
40 |
+
target: taming.models.vqgan.VQModel
|
41 |
+
params:
|
42 |
+
ckpt_path: logs/2020-11-03T15-34-24_imagenetdepth_vqgan/checkpoints/last.ckpt
|
43 |
+
embed_dim: 256
|
44 |
+
n_embed: 1024
|
45 |
+
ddconfig:
|
46 |
+
double_z: false
|
47 |
+
z_channels: 256
|
48 |
+
resolution: 256
|
49 |
+
in_channels: 1
|
50 |
+
out_ch: 1
|
51 |
+
ch: 128
|
52 |
+
ch_mult:
|
53 |
+
- 1
|
54 |
+
- 1
|
55 |
+
- 2
|
56 |
+
- 2
|
57 |
+
- 4
|
58 |
+
num_res_blocks: 2
|
59 |
+
attn_resolutions:
|
60 |
+
- 16
|
61 |
+
dropout: 0.0
|
62 |
+
lossconfig:
|
63 |
+
target: taming.modules.losses.DummyLoss
|
64 |
+
|
65 |
+
data:
|
66 |
+
target: main.DataModuleFromConfig
|
67 |
+
params:
|
68 |
+
batch_size: 2
|
69 |
+
num_workers: 8
|
70 |
+
train:
|
71 |
+
target: taming.data.imagenet.RINTrainWithDepth
|
72 |
+
params:
|
73 |
+
size: 256
|
74 |
+
validation:
|
75 |
+
target: taming.data.imagenet.RINValidationWithDepth
|
76 |
+
params:
|
77 |
+
size: 256
|
taming-transformers/configs/faceshq_transformer.yaml
ADDED
@@ -0,0 +1,61 @@
1 |
+
model:
|
2 |
+
base_learning_rate: 4.5e-06
|
3 |
+
target: taming.models.cond_transformer.Net2NetTransformer
|
4 |
+
params:
|
5 |
+
cond_stage_key: coord
|
6 |
+
transformer_config:
|
7 |
+
target: taming.modules.transformer.mingpt.GPT
|
8 |
+
params:
|
9 |
+
vocab_size: 1024
|
10 |
+
block_size: 512
|
11 |
+
n_layer: 24
|
12 |
+
n_head: 16
|
13 |
+
n_embd: 1024
|
14 |
+
first_stage_config:
|
15 |
+
target: taming.models.vqgan.VQModel
|
16 |
+
params:
|
17 |
+
ckpt_path: logs/2020-11-09T13-33-36_faceshq_vqgan/checkpoints/last.ckpt
|
18 |
+
embed_dim: 256
|
19 |
+
n_embed: 1024
|
20 |
+
ddconfig:
|
21 |
+
double_z: false
|
22 |
+
z_channels: 256
|
23 |
+
resolution: 256
|
24 |
+
in_channels: 3
|
25 |
+
out_ch: 3
|
26 |
+
ch: 128
|
27 |
+
ch_mult:
|
28 |
+
- 1
|
29 |
+
- 1
|
30 |
+
- 2
|
31 |
+
- 2
|
32 |
+
- 4
|
33 |
+
num_res_blocks: 2
|
34 |
+
attn_resolutions:
|
35 |
+
- 16
|
36 |
+
dropout: 0.0
|
37 |
+
lossconfig:
|
38 |
+
target: taming.modules.losses.DummyLoss
|
39 |
+
cond_stage_config:
|
40 |
+
target: taming.modules.misc.coord.CoordStage
|
41 |
+
params:
|
42 |
+
n_embed: 1024
|
43 |
+
down_factor: 16
|
44 |
+
|
45 |
+
data:
|
46 |
+
target: main.DataModuleFromConfig
|
47 |
+
params:
|
48 |
+
batch_size: 2
|
49 |
+
num_workers: 8
|
50 |
+
train:
|
51 |
+
target: taming.data.faceshq.FacesHQTrain
|
52 |
+
params:
|
53 |
+
size: 256
|
54 |
+
crop_size: 256
|
55 |
+
coord: True
|
56 |
+
validation:
|
57 |
+
target: taming.data.faceshq.FacesHQValidation
|
58 |
+
params:
|
59 |
+
size: 256
|
60 |
+
crop_size: 256
|
61 |
+
coord: True
|
taming-transformers/configs/faceshq_vqgan.yaml
ADDED
@@ -0,0 +1,42 @@
1 |
+
model:
|
2 |
+
base_learning_rate: 4.5e-6
|
3 |
+
target: taming.models.vqgan.VQModel
|
4 |
+
params:
|
5 |
+
embed_dim: 256
|
6 |
+
n_embed: 1024
|
7 |
+
ddconfig:
|
8 |
+
double_z: False
|
9 |
+
z_channels: 256
|
10 |
+
resolution: 256
|
11 |
+
in_channels: 3
|
12 |
+
out_ch: 3
|
13 |
+
ch: 128
|
14 |
+
ch_mult: [ 1,1,2,2,4] # num_down = len(ch_mult)-1
|
15 |
+
num_res_blocks: 2
|
16 |
+
attn_resolutions: [16]
|
17 |
+
dropout: 0.0
|
18 |
+
|
19 |
+
lossconfig:
|
20 |
+
target: taming.modules.losses.vqperceptual.VQLPIPSWithDiscriminator
|
21 |
+
params:
|
22 |
+
disc_conditional: False
|
23 |
+
disc_in_channels: 3
|
24 |
+
disc_start: 30001
|
25 |
+
disc_weight: 0.8
|
26 |
+
codebook_weight: 1.0
|
27 |
+
|
28 |
+
data:
|
29 |
+
target: main.DataModuleFromConfig
|
30 |
+
params:
|
31 |
+
batch_size: 3
|
32 |
+
num_workers: 8
|
33 |
+
train:
|
34 |
+
target: taming.data.faceshq.FacesHQTrain
|
35 |
+
params:
|
36 |
+
size: 256
|
37 |
+
crop_size: 256
|
38 |
+
validation:
|
39 |
+
target: taming.data.faceshq.FacesHQValidation
|
40 |
+
params:
|
41 |
+
size: 256
|
42 |
+
crop_size: 256
|
taming-transformers/configs/imagenet_vqgan.yaml
ADDED
@@ -0,0 +1,42 @@
1 |
+
model:
|
2 |
+
base_learning_rate: 4.5e-6
|
3 |
+
target: taming.models.vqgan.VQModel
|
4 |
+
params:
|
5 |
+
embed_dim: 256
|
6 |
+
n_embed: 1024
|
7 |
+
ddconfig:
|
8 |
+
double_z: False
|
9 |
+
z_channels: 256
|
10 |
+
resolution: 256
|
11 |
+
in_channels: 3
|
12 |
+
out_ch: 3
|
13 |
+
ch: 128
|
14 |
+
ch_mult: [ 1,1,2,2,4] # num_down = len(ch_mult)-1
|
15 |
+
num_res_blocks: 2
|
16 |
+
attn_resolutions: [16]
|
17 |
+
dropout: 0.0
|
18 |
+
|
19 |
+
lossconfig:
|
20 |
+
target: taming.modules.losses.vqperceptual.VQLPIPSWithDiscriminator
|
21 |
+
params:
|
22 |
+
disc_conditional: False
|
23 |
+
disc_in_channels: 3
|
24 |
+
disc_start: 250001
|
25 |
+
disc_weight: 0.8
|
26 |
+
codebook_weight: 1.0
|
27 |
+
|
28 |
+
data:
|
29 |
+
target: main.DataModuleFromConfig
|
30 |
+
params:
|
31 |
+
batch_size: 12
|
32 |
+
num_workers: 24
|
33 |
+
train:
|
34 |
+
target: taming.data.imagenet.ImageNetTrain
|
35 |
+
params:
|
36 |
+
config:
|
37 |
+
size: 256
|
38 |
+
validation:
|
39 |
+
target: taming.data.imagenet.ImageNetValidation
|
40 |
+
params:
|
41 |
+
config:
|
42 |
+
size: 256
|
taming-transformers/configs/imagenetdepth_vqgan.yaml
ADDED
@@ -0,0 +1,41 @@
1 |
+
model:
|
2 |
+
base_learning_rate: 4.5e-6
|
3 |
+
target: taming.models.vqgan.VQModel
|
4 |
+
params:
|
5 |
+
embed_dim: 256
|
6 |
+
n_embed: 1024
|
7 |
+
image_key: depth
|
8 |
+
ddconfig:
|
9 |
+
double_z: False
|
10 |
+
z_channels: 256
|
11 |
+
resolution: 256
|
12 |
+
in_channels: 1
|
13 |
+
out_ch: 1
|
14 |
+
ch: 128
|
15 |
+
ch_mult: [ 1,1,2,2,4] # num_down = len(ch_mult)-1
|
16 |
+
num_res_blocks: 2
|
17 |
+
attn_resolutions: [16]
|
18 |
+
dropout: 0.0
|
19 |
+
|
20 |
+
lossconfig:
|
21 |
+
target: taming.modules.losses.vqperceptual.VQLPIPSWithDiscriminator
|
22 |
+
params:
|
23 |
+
disc_conditional: False
|
24 |
+
disc_in_channels: 1
|
25 |
+
disc_start: 50001
|
26 |
+
disc_weight: 0.75
|
27 |
+
codebook_weight: 1.0
|
28 |
+
|
29 |
+
data:
|
30 |
+
target: main.DataModuleFromConfig
|
31 |
+
params:
|
32 |
+
batch_size: 3
|
33 |
+
num_workers: 8
|
34 |
+
train:
|
35 |
+
target: taming.data.imagenet.ImageNetTrainWithDepth
|
36 |
+
params:
|
37 |
+
size: 256
|
38 |
+
validation:
|
39 |
+
target: taming.data.imagenet.ImageNetValidationWithDepth
|
40 |
+
params:
|
41 |
+
size: 256
|
taming-transformers/configs/sflckr_cond_stage.yaml
ADDED
@@ -0,0 +1,43 @@
1 |
+
model:
|
2 |
+
base_learning_rate: 4.5e-06
|
3 |
+
target: taming.models.vqgan.VQSegmentationModel
|
4 |
+
params:
|
5 |
+
embed_dim: 256
|
6 |
+
n_embed: 1024
|
7 |
+
image_key: "segmentation"
|
8 |
+
n_labels: 182
|
9 |
+
ddconfig:
|
10 |
+
double_z: false
|
11 |
+
z_channels: 256
|
12 |
+
resolution: 256
|
13 |
+
in_channels: 182
|
14 |
+
out_ch: 182
|
15 |
+
ch: 128
|
16 |
+
ch_mult:
|
17 |
+
- 1
|
18 |
+
- 1
|
19 |
+
- 2
|
20 |
+
- 2
|
21 |
+
- 4
|
22 |
+
num_res_blocks: 2
|
23 |
+
attn_resolutions:
|
24 |
+
- 16
|
25 |
+
dropout: 0.0
|
26 |
+
|
27 |
+
lossconfig:
|
28 |
+
target: taming.modules.losses.segmentation.BCELossWithQuant
|
29 |
+
params:
|
30 |
+
codebook_weight: 1.0
|
31 |
+
|
32 |
+
data:
|
33 |
+
target: cutlit.DataModuleFromConfig
|
34 |
+
params:
|
35 |
+
batch_size: 12
|
36 |
+
train:
|
37 |
+
target: taming.data.sflckr.Examples # adjust
|
38 |
+
params:
|
39 |
+
size: 256
|
40 |
+
validation:
|
41 |
+
target: taming.data.sflckr.Examples # adjust
|
42 |
+
params:
|
43 |
+
size: 256
|
taming-transformers/data/ade20k_examples.txt
ADDED
@@ -0,0 +1,30 @@
1 |
+
ADE_val_00000636.jpg
|
2 |
+
ADE_val_00000126.jpg
|
3 |
+
ADE_val_00001412.jpg
|
4 |
+
ADE_val_00001845.jpg
|
5 |
+
ADE_val_00001200.jpg
|
6 |
+
ADE_val_00001578.jpg
|
7 |
+
ADE_val_00000880.jpg
|
8 |
+
ADE_val_00000875.jpg
|
9 |
+
ADE_val_00000123.jpg
|
10 |
+
ADE_val_00001209.jpg
|
11 |
+
ADE_val_00000203.jpg
|
12 |
+
ADE_val_00001851.jpg
|
13 |
+
ADE_val_00001583.jpg
|
14 |
+
ADE_val_00000287.jpg
|
15 |
+
ADE_val_00001947.jpg
|
16 |
+
ADE_val_00000262.jpg
|
17 |
+
ADE_val_00000603.jpg
|
18 |
+
ADE_val_00000125.jpg
|
19 |
+
ADE_val_00001698.jpg
|
20 |
+
ADE_val_00001966.jpg
|
21 |
+
ADE_val_00000532.jpg
|
22 |
+
ADE_val_00001177.jpg
|
23 |
+
ADE_val_00000734.jpg
|
24 |
+
ADE_val_00001498.jpg
|
25 |
+
ADE_val_00001766.jpg
|
26 |
+
ADE_val_00000303.jpg
|
27 |
+
ADE_val_00000509.jpg
|
28 |
+
ADE_val_00000573.jpg
|
29 |
+
ADE_val_00000289.jpg
|
30 |
+
ADE_val_00001388.jpg
|
taming-transformers/data/ade20k_images/ADE_val_00000123.jpg
ADDED
taming-transformers/data/ade20k_images/ADE_val_00000125.jpg
ADDED
taming-transformers/data/ade20k_images/ADE_val_00000126.jpg
ADDED
taming-transformers/data/ade20k_images/ADE_val_00000203.jpg
ADDED
taming-transformers/data/ade20k_images/ADE_val_00000262.jpg
ADDED
taming-transformers/data/ade20k_images/ADE_val_00000287.jpg
ADDED
taming-transformers/data/ade20k_images/ADE_val_00000289.jpg
ADDED
taming-transformers/data/ade20k_images/ADE_val_00000303.jpg
ADDED