---
license: cc-by-nc-4.0
datasets:
  - visheratin/laion-coco-nllb
---

The code to run the model:

```python
import requests
from PIL import Image
from transformers import AutoTokenizer, CLIPProcessor

from modeling_nllb_clip import NLLBCLIPModel  # local file from the repo

# Use the CLIP image processor for images and the NLLB tokenizer for text.
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
processor = processor.image_processor
tokenizer = AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M"
)

# Download a sample image and prepare both modalities as PyTorch tensors.
image_path = "https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/butterfly.jpg"
image = Image.open(requests.get(image_path, stream=True).raw)
image_inputs = processor(images=image, return_tensors="pt")
text_inputs = tokenizer(
    ["cat", "dog", "butterfly"],
    padding="longest",
    return_tensors="pt",
)

hf_model = NLLBCLIPModel.from_pretrained("visheratin/nllb-clip-base")

outputs = hf_model(
    input_ids=text_inputs.input_ids,
    attention_mask=text_inputs.attention_mask,
    pixel_values=image_inputs.pixel_values,
)
```
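To turn the outputs into per-label scores, you can softmax the image-to-text logits. A minimal sketch, assuming `NLLBCLIPModel` mirrors the standard `transformers` CLIP output and exposes a `logits_per_image` field (check `modeling_nllb_clip.py` if it is named differently):

```python
# Assumption: the model returns CLIP-style outputs with a
# `logits_per_image` tensor of shape (num_images, num_texts).
probs = outputs.logits_per_image.softmax(dim=-1)
for label, prob in zip(["cat", "dog", "butterfly"], probs[0].tolist()):
    print(f"{label}: {prob:.3f}")
```

Because the text tower is NLLB, the candidate labels do not have to be in English; the NLLB tokenizer accepts a `src_lang` argument (e.g. `AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", src_lang="fra_Latn")`) to set the language code prepended to the text inputs.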