Tiny CLIP

Introduction

This is a smaller version of CLIP trained for English only. The training script can be found here. This model is roughly 8 times smaller than CLIP, which was achieved by pairing a small text model (microsoft/xtremedistil-l6-h256-uncased) with a small vision model (edgenext_small). For an in-depth guide to training CLIP, see this blog.

Usage

For now, the recommended way to use this model is to clone the repository directly:

git lfs install 
git clone https://huggingface.co/sachin/tiny_clip
cd tiny_clip

Once you are in the folder, you can do the following:

import models

# Returns the text encoder, its tokenizer, the vision encoder,
# and the image preprocessing transform.
text_encoder, tokenizer, vision_encoder, transform = models.get_model()
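With the four components returned by get_model(), images and texts can be scored against each other CLIP-style: encode both, normalize the embeddings, and take dot products as cosine similarities. The sketch below illustrates the scoring step only; the random tensors stand in for the outputs of the encoders (the 256-dimensional embedding size is an assumption based on the xtremedistil-l6-h256 text model), so substitute real encoder outputs in practice.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-ins for encoder outputs (assumed 256-dim shared embedding space):
#   image_features ~ vision_encoder(transform(image).unsqueeze(0))
#   text_features  ~ text_encoder(**tokenizer(texts, return_tensors="pt"))
image_features = torch.randn(1, 256)
text_features = torch.randn(3, 256)  # e.g. three candidate captions

# Normalize so the dot product equals cosine similarity, as in CLIP.
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)

similarity = image_features @ text_features.T  # shape (1, 3)
probs = similarity.softmax(dim=-1)             # one score per caption
```

The caption with the highest probability is the model's best match for the image.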