---
license: cc-by-nc-4.0
datasets:
- visheratin/laion-coco-nllb
---

## Model Summary

NLLB-CLIP is a model that combines a text encoder from the [NLLB model](https://huggingface.co/facebook/nllb-200-distilled-600M) with an image encoder from the standard [CLIP](https://huggingface.co/openai/clip-vit-base-patch32). This extends the model's capabilities to all 201 languages of the Flores-200 benchmark. NLLB-CLIP achieves state-of-the-art results on the [Crossmodal-3600](https://google.github.io/crossmodal-3600/) dataset, driven by strong performance on low-resource languages. You can find more details about the model in the [paper](https://arxiv.org/abs/2309.01859).

## How to use

The model [repo](https://huggingface.co/visheratin/nllb-clip-base/tree/main) contains the model code files that let you use NLLB-CLIP like any other model from the hub. The interface is also compatible with CLIP models. Example code is below:

```python
from transformers import AutoTokenizer, CLIPProcessor
import requests
from PIL import Image

from modeling_nllb_clip import NLLBCLIPModel  # local file from the repo

# The image processor comes from standard CLIP; the tokenizer comes from NLLB.
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
processor = processor.image_processor
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")

# Download a sample image and prepare the image and text inputs.
image_path = "https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/butterfly.jpg"
image = Image.open(requests.get(image_path, stream=True).raw)
image_inputs = processor(images=image, return_tensors="pt")
text_inputs = tokenizer(
    ["cat", "dog", "butterfly"],
    padding="longest",
    return_tensors="pt",
)

hf_model = NLLBCLIPModel.from_pretrained("visheratin/nllb-clip-base")

outputs = hf_model(
    input_ids=text_inputs.input_ids,
    attention_mask=text_inputs.attention_mask,
    pixel_values=image_inputs.pixel_values,
)
```
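Since the interface is CLIP-compatible, the model output should expose similarity logits in the usual CLIP layout (one row per image, one column per candidate caption), which can be turned into zero-shot classification probabilities with a softmax. The sketch below illustrates that step with a stand-in tensor; the actual attribute name on `outputs` (e.g. `logits_per_image`) is an assumption based on CLIP's conventions, so check the model code in the repo.

```python
import torch

# Stand-in for `outputs.logits_per_image` from the example above
# (assumed CLIP-style attribute): one row per image, one column
# per candidate caption ("cat", "dog", "butterfly").
logits_per_image = torch.tensor([[0.5, 0.1, 3.0]])

# Softmax over the caption dimension converts similarity logits
# into probabilities that sum to 1 per image.
probs = logits_per_image.softmax(dim=-1)

# Pick the caption with the highest probability.
labels = ["cat", "dog", "butterfly"]
best = labels[probs.argmax(dim=-1).item()]
```

For the butterfly image in the example, the "butterfly" caption would be expected to receive the highest probability.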