prasanna2003 committed 61d9a5b (Parent: acc2136)
Create README.md

README.md ADDED
---
datasets:
- MMInstruction/M3IT
pipeline_tag: image-to-text
---

This model is fine-tuned on an instruction dataset (MMInstruction/M3IT) from the `Salesforce/blip-image-captioning-base` model.

## Usage:

```python
from transformers import BlipProcessor, BlipForConditionalGeneration
from PIL import Image

# Load the processor from the base model; the fine-tuned checkpoint expects
# an EOS token, so add one if the base tokenizer does not define it.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
if processor.tokenizer.eos_token is None:
    processor.tokenizer.eos_token = '<|eos|>'
model = BlipForConditionalGeneration.from_pretrained("prasanna2003/Instruct-blip-v2")

image = Image.open('file_name.jpg').convert('RGB')

# Prompt format used during instruction fine-tuning.
prompt = """Instruction: Answer the following input according to the image.
Input: Describe this image.
output: """

inputs = processor(image, prompt, return_tensors="pt")

output = model.generate(**inputs, max_length=100)
print(processor.decode(output[0], skip_special_tokens=True))
```
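
Inference runs on the CPU by default. A minimal sketch for running the same pipeline on a GPU, assuming a CUDA device is available (`image`, `prompt`, `processor`, and `model` as above):

```python
import torch

# Move the model and the processed inputs to the same device before generating.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

inputs = processor(image, prompt, return_tensors="pt").to(device)
output = model.generate(**inputs, max_length=100)
print(processor.decode(output[0], skip_special_tokens=True))
```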