merve (HF staff) committed
Commit 77d6c57
1 Parent(s): 6c3bdcd

Update README.md

Files changed (1):
  1. README.md +29 -0
---
license: apache-2.0
---

# mlx-community/clip-vit-base-patch32

This model was converted to MLX format from [`openai/clip-vit-base-patch32`](https://huggingface.co/openai/clip-vit-base-patch32).
Refer to the [original model card](https://huggingface.co/openai/clip-vit-base-patch32) for more details on the model.

## Use with MLX

```bash
# Clone the MLX examples repo (assumed home of the clip example)
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/clip
pip install -r requirements.txt
```

```python
from PIL import Image
import clip

model, tokenizer, img_processor = clip.load("mlx_model")
inputs = {
    "input_ids": tokenizer(["a photo of a cat", "a photo of a dog"]),
    "pixel_values": img_processor(
        [Image.open("assets/cat.jpeg"), Image.open("assets/dog.jpeg")]
    ),
}
output = model(**inputs)

# Get text and image embeddings:
text_embeds = output.text_embeds
image_embeds = output.image_embeds
```
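A common next step with CLIP embeddings is scoring each image against each caption by cosine similarity. The sketch below shows that computation with NumPy stand-ins for `text_embeds` and `image_embeds` (the shapes and the 512-dim size are illustrative assumptions, not taken from this model's output):

```python
import numpy as np

# Stand-ins for output.text_embeds / output.image_embeds
# (assumed shapes: 2 captions and 2 images, 512-dim embeddings).
rng = np.random.default_rng(0)
text_embeds = rng.standard_normal((2, 512))
image_embeds = rng.standard_normal((2, 512))

# L2-normalize each row, so cosine similarity reduces to a dot product.
text_embeds = text_embeds / np.linalg.norm(text_embeds, axis=-1, keepdims=True)
image_embeds = image_embeds / np.linalg.norm(image_embeds, axis=-1, keepdims=True)

# similarity[i, j] = cosine similarity between image i and caption j.
similarity = image_embeds @ text_embeds.T

# Index of the best-matching caption for each image.
best_caption = similarity.argmax(axis=-1)
```

With real model outputs you would replace the random arrays with the `text_embeds` and `image_embeds` from the example above; the normalization and matrix product are unchanged.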