pszemraj committed
Commit a7be8e9
1 Parent(s): 886514c

Update README.md

Files changed (1)
  1. README.md +28 -3
README.md CHANGED
@@ -5,8 +5,33 @@ language:
 library_name: transformers
 inference: False
 ---
- ## Sharded BLIP-2 Model Card
+ ## Sharded BLIP-2 Model Card - flan-t5-xl
 
- This is a sharded version of the [BLIP-2 Model Card](https://huggingface.co/models/Salesforce/blip2-flan-t5-xl) which leverages [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) for image-to-text tasks such as image captioning and visual question answering.
+ This is a sharded version of [blip2-flan-t5-xl](https://huggingface.co/Salesforce/blip2-flan-t5-xl), which leverages [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) for image-to-text tasks such as image captioning and visual question answering.
 
- Refer to the [original model card](https://huggingface.co/models/Salesforce/blip2-flan-t5-xl) for more details about the model description, intended uses, and limitations, as well as instructions for how to use the model on CPU and GPU in different precisions.
+ Refer to the [original model card](https://huggingface.co/Salesforce/blip2-flan-t5-xl) for more details about the model description, intended uses, and limitations, as well as instructions for how to use the model on CPU and GPU in different precisions.
+
+ ## Usage
+
+ Refer to the original model card for details or see [this blog post](https://huggingface.co/blog/blip-2#using-blip-2-with-hugging-face-transformers). Here is how you can use it on CPU:
+
+ ```python
+ import requests
+ from PIL import Image
+ from transformers import Blip2Processor, Blip2ForConditionalGeneration
+
+ model_name = "Salesforce/blip2-flan-t5-xl"
+ processor = Blip2Processor.from_pretrained(model_name)
+ model = Blip2ForConditionalGeneration.from_pretrained(model_name)
+
+ img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
+ raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
+
+ question = "how many dogs are in the picture?"
+ inputs = processor(raw_image, question, return_tensors="pt")
+
+ out = model.generate(**inputs)
+ print(processor.decode(out[0], skip_special_tokens=True))
+ ```
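
The added example above runs on CPU in full precision. Since the README points to the original model card for GPU usage in other precisions, here is a minimal sketch of GPU, half-precision inference following the same pattern; it assumes a CUDA device is available, `accelerate` is installed, and it reuses the upstream `Salesforce/blip2-flan-t5-xl` repo ID from the snippet (substitute the sharded repo ID as appropriate):

```python
# Sketch: BLIP-2 visual question answering on GPU in float16.
# Assumes a CUDA device and the `accelerate` package for device_map="auto".
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

model_name = "Salesforce/blip2-flan-t5-xl"  # substitute the sharded repo ID if preferred
processor = Blip2Processor.from_pretrained(model_name)
model = Blip2ForConditionalGeneration.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # load the weights in half precision
    device_map="auto",          # let accelerate place the layers on the available GPU
)

img_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

question = "how many dogs are in the picture?"
# move inputs to the GPU and cast the image tensors to float16 to match the weights
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```

For tighter memory budgets, the linked blog post also demonstrates 8-bit loading (`load_in_8bit=True`, which requires `bitsandbytes`) in place of the float16 arguments.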