---
## Sharded BLIP-2 Model Card - flan-t5-xl

This is a sharded version of [blip2-flan-t5-xl](https://huggingface.co/Salesforce/blip2-flan-t5-xl), which leverages [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) for image-to-text tasks such as image captioning and visual question answering.

- this model repo is sharded so it can be easily loaded on low-RAM Colab runtimes :)
- Refer to the [original model card](https://huggingface.co/Salesforce/blip2-flan-t5-xl) for more details about the model description, intended uses, and limitations, as well as instructions for how to use the model on CPU and GPU in different precisions.

## Usage

Refer to the original model card for details or see [this blog post](https://huggingface.co/blog/blip-2#using-blip-2-with-hugging-face-transformers). Here is how you can use it on CPU:

### Install

Requires the current `main` of transformers (_at the time of writing_):

```bash
pip install accelerate git+https://github.com/huggingface/transformers.git -U -q
```
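After installing, you can confirm that the build you got actually includes the BLIP-2 classes (they are only present in sufficiently recent versions of transformers). A quick sanity check:

```python
import transformers

# print the installed version; BLIP-2 support landed in transformers 4.27+
print(transformers.__version__)

# this import raises ImportError on releases that predate BLIP-2
from transformers import Blip2ForConditionalGeneration
```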

### Use

_This is for CPU; check out the original model card/blog for `fp16` and `int8` usage._
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# sharded checkpoint of Salesforce/blip2-flan-t5-xl
model_name = "ethzanalytics/blip2-flan-t5-xl-sharded"
processor = Blip2Processor.from_pretrained(model_name)
model = Blip2ForConditionalGeneration.from_pretrained(model_name)
```
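Once the processor and model are loaded, the same pair handles both captioning and visual question answering. A minimal sketch (the demo image URL and question are illustrative; any RGB image works):

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

model_name = "ethzanalytics/blip2-flan-t5-xl-sharded"
processor = Blip2Processor.from_pretrained(model_name)
model = Blip2ForConditionalGeneration.from_pretrained(model_name)

# fetch a demo image (any RGB image works here)
img_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

# image captioning: pass only the image
inputs = processor(images=raw_image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))

# visual question answering: pass a text prompt alongside the image
question = "how many dogs are in the picture?"
inputs = processor(images=raw_image, text=question, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))
```

On CPU this runs in full precision and is slow but fits in modest RAM thanks to the sharded checkpoint; see the original card for `fp16`/`int8` variants.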