SeyedAli committed on
Commit 3a781a9
1 Parent(s): 1e59846

Update README.md

Files changed (1)
  1. README.md +19 -0
README.md CHANGED
@@ -93,3 +93,22 @@ The following hyperparameters were used during training:
 - Pytorch 2.1.2+cu121
 - Datasets 2.10.1
 - Tokenizers 0.15.0
+
+ ### How to use
+ ```python
+ # Both encoders produce embedding vectors with 768 dimensions.
+ import PIL.Image
+ from transformers import CLIPVisionModel, RobertaModel, AutoTokenizer, CLIPFeatureExtractor
+ # download the pre-trained models
+ vision_encoder = CLIPVisionModel.from_pretrained('SeyedAli/persian-clip')
+ preprocessor = CLIPFeatureExtractor.from_pretrained('SeyedAli/persian-clip')
+ text_encoder = RobertaModel.from_pretrained('SeyedAli/persian-clip')
+ tokenizer = AutoTokenizer.from_pretrained('SeyedAli/persian-clip')
+ # define the input text and input image
+ text = 'something'
+ image = PIL.Image.open('my_favorite_image.jpg')
+ # compute the embeddings
+ text_embedding = text_encoder(**tokenizer(text, return_tensors='pt')).pooler_output
+ image_embedding = vision_encoder(**preprocessor(image, return_tensors='pt')).pooler_output
+ ```
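
A natural follow-up, sketched here as an assumption rather than part of the commit: since both encoders map into the same 768-dimensional space, the pooled embeddings can be compared with cosine similarity to score how well a caption matches an image. This assumes the snippet above has already run and populated `text_embedding` and `image_embedding`.

```python
import torch.nn.functional as F

# Cosine similarity between the pooled embeddings (each of shape [1, 768]);
# a higher score indicates a better text-image match.
similarity = F.cosine_similarity(text_embedding, image_embedding)
print(similarity.item())
```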