---
language:
- en
tags:
- zero-shot-image-classification
license: mit
datasets:
- coco2017
---
# CLIP-small
## Introduction
This is a smaller version of CLIP trained on English data only. The training script can be found [here](https://www.kaggle.com/code/sachin/tiny-en-clip/). This model is roughly 8 times smaller than the original CLIP, achieved by pairing a small text model (`microsoft/xtremedistil-l6-h256-uncased`) with a small vision model (`edgenext_small`). For an in-depth guide to training CLIP, see [this blog](https://sachinruk.github.io/blog/pytorch/pytorch%20lightning/loss%20function/gpu/2021/03/07/CLIP.html).
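For illustration, here is a sketch of how the two backbones named above can be loaded standalone with `transformers` and `timm`. This is not the packaged model: the repository's `models.py` is assumed to wire these backbones together (e.g. with projection layers into a shared embedding space), so treat this only as a view of the building blocks.

```python
import timm
from transformers import AutoModel, AutoTokenizer

# The two small backbones used by CLIP-small, loaded standalone.
# The packaged model adds the pieces (such as projection layers)
# that map both outputs into a shared embedding space.
tokenizer = AutoTokenizer.from_pretrained("microsoft/xtremedistil-l6-h256-uncased")
text_backbone = AutoModel.from_pretrained("microsoft/xtremedistil-l6-h256-uncased")
vision_backbone = timm.create_model("edgenext_small", pretrained=True, num_classes=0)

print(text_backbone.config.hidden_size)  # width of the text features (256)
print(vision_backbone.num_features)      # width of the vision features
```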
## Usage
For now, the recommended way to use this model is to clone the repository:
```bash
git lfs install
git clone https://huggingface.co/sachin/CLIP-small
cd CLIP-small
```
Once you are in the folder, you can load the model components as follows:
```python
import models

# Returns the text encoder, its tokenizer, the vision encoder,
# and the image preprocessing transform.
text_encoder, tokenizer, vision_encoder, transform = models.get_model()
```
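As a minimal, hypothetical sketch of zero-shot classification with these components: the exact call signatures depend on `models.py`, and this assumes both encoders map their inputs directly to embeddings in a shared space, as in standard CLIP. The image file name and prompts are placeholders.

```python
import torch
import torch.nn.functional as F
from PIL import Image

# Placeholder image and candidate captions.
image = transform(Image.open("cat.jpg")).unsqueeze(0)  # shape (1, C, H, W)
tokens = tokenizer(["a photo of a cat", "a photo of a dog"],
                   padding=True, return_tensors="pt")

with torch.no_grad():
    # Assumption: each encoder returns one embedding per input;
    # check models.py for the actual output format.
    image_emb = vision_encoder(image)
    text_emb = text_encoder(**tokens)

# Cosine similarity between the image and each caption,
# turned into a distribution over the candidate labels.
image_emb = F.normalize(image_emb, dim=-1)
text_emb = F.normalize(text_emb, dim=-1)
probs = (image_emb @ text_emb.T).softmax(dim=-1)
print(probs)
```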