---
license: apache-2.0
---
# mlx-community/clip-vit-base-patch32

This model was converted to MLX format from [`openai/clip-vit-base-patch32`](https://huggingface.co/openai/clip-vit-base-patch32).
Refer to the [original model card](https://huggingface.co/openai/clip-vit-base-patch32) for more details on the model.

## Use with mlx
```bash
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/clip
pip install -r requirements.txt
```
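The example below loads weights from a local `mlx_model` directory. A minimal sketch of one way to get them there, assuming you fetch this repository's converted weights with `huggingface_hub` (any download method works; the `mlx_model` directory name just matches the snippet that follows):

```python
# Sketch: download the converted weights into a local "mlx_model" directory.
# Using huggingface_hub here is an assumption, not the only option.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="mlx-community/clip-vit-base-patch32",
    local_dir="mlx_model",
)
```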
```python
from PIL import Image
import clip

# Load the converted model, tokenizer, and image processor
# from the local "mlx_model" directory.
model, tokenizer, img_processor = clip.load("mlx_model")

# Tokenize the text prompts and preprocess the images.
inputs = {
    "input_ids": tokenizer(["a photo of a cat", "a photo of a dog"]),
    "pixel_values": img_processor(
        [Image.open("assets/cat.jpeg"), Image.open("assets/dog.jpeg")]
    ),
}
output = model(**inputs)

# Get text and image embeddings:
text_embeds = output.text_embeds
image_embeds = output.image_embeds
```
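To score image-text matches, you can compare the embeddings with cosine similarity. A minimal sketch, assuming the embeddings follow the usual CLIP convention (normalize, then take dot products); the explicit normalization is a safe assumption and can be skipped if the model already returns unit-norm embeddings:

```python
import mlx.core as mx

# Normalize to unit length (assumption: embeddings may not be pre-normalized).
text_embeds = text_embeds / mx.linalg.norm(text_embeds, axis=-1, keepdims=True)
image_embeds = image_embeds / mx.linalg.norm(image_embeds, axis=-1, keepdims=True)

# Cosine similarity: rows index images, columns index text prompts.
similarity = image_embeds @ text_embeds.T
print(similarity)  # higher score = better image-text match
```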