---
license: cc-by-nc-4.0
datasets:
- visheratin/laion-coco-nllb
---
Example code to run the model:
```python
from transformers import AutoTokenizer, CLIPProcessor
import requests
from PIL import Image

from modeling_nllb_clip import NLLBCLIPModel  # local file from the repo

# The image processor comes from the base CLIP checkpoint;
# the tokenizer comes from the NLLB text encoder.
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
image_processor = processor.image_processor
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")

# Load a sample image.
image_path = "https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/butterfly.jpg"
image = Image.open(requests.get(image_path, stream=True).raw)
image_inputs = image_processor(images=image, return_tensors="pt")

# Tokenize the candidate captions.
text_inputs = tokenizer(
    ["cat", "dog", "butterfly"],
    padding="longest",
    return_tensors="pt",
)

hf_model = NLLBCLIPModel.from_pretrained("visheratin/nllb-clip-base")
outputs = hf_model(
    input_ids=text_inputs.input_ids,
    attention_mask=text_inputs.attention_mask,
    pixel_values=image_inputs.pixel_values,
)
```
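
To interpret the result, a minimal sketch, assuming the custom model returns CLIP-style outputs with a `logits_per_image` field (check `modeling_nllb_clip.py` in the repo to confirm):

```python
import torch

# Assumption: outputs mirrors transformers' CLIPOutput and exposes
# logits_per_image with shape (num_images, num_texts).
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(["cat", "dog", "butterfly"], probs[0].tolist())))
```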