---
library_name: transformers
license: cc-by-nc-4.0
tags:
- xlm-roberta
- eva02
- clip
- feature-extraction
- sentence-similarity
- retrieval
- multimodal
- multi-modal
- crossmodal
- cross-modal
- mteb
- clip-benchmark
- vidore
- transformers
- sentence-transformers
- onnx
- safetensors
- transformers.js
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
inference: false
---
The embedding set trained by Jina AI.

# Jina CLIP v2: Multilingual Multimodal Embeddings for Texts and Images

## Quick Start

Blog | Azure | AWS SageMaker | Google Cloud Platform | API
## Intended Usage & Model Info
`jina-clip-v2` is a state-of-the-art multilingual and multimodal (text-image) embedding model. It is a successor to the `jina-clip-v1` model and brings new features and capabilities, such as:
- support for multiple languages - the text tower is trained on 89 languages, with a tuning focus on Arabic, Bengali, Chinese, Danish, Dutch, English, Finnish, French, Georgian, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Latvian, Norwegian, Polish, Portuguese, Romanian, Russian, Slovak, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu, and Vietnamese.
- embedding truncation on both image and text vectors - both towers are trained using Matryoshka Representation Learning, which allows slicing the output vectors and thereby reducing computation and storage costs (see the sketch after this list).
- visual document retrieval performance gains - with an image resolution of 512 (compared to 224 on `jina-clip-v1`), the image tower can now capture finer visual details. This, along with a more diverse training set, enables the model to perform much better on visual document retrieval tasks. As a result, `jina-clip-v2` can be used as an image encoder in vLLM retriever architectures.
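Because both towers are trained with Matryoshka Representation Learning, truncation amounts to keeping the leading components of each vector. Below is a minimal sketch of the idea in plain numpy, assuming the embeddings come back as numpy arrays and that sliced vectors are re-normalized before computing cosine similarity; the built-in `truncate_dim` argument shown in the Usage section is the supported way to do this.

```python
# Minimal sketch of Matryoshka-style truncation (not the model's internal code):
# keep the first `dim` components and re-normalize so cosine similarity still works.
import numpy as np

def truncate_embeddings(embeddings: np.ndarray, dim: int) -> np.ndarray:
    truncated = embeddings[..., :dim]
    norms = np.linalg.norm(truncated, axis=-1, keepdims=True)
    return truncated / norms

# e.g. shrink 1024-dim vectors to 256 dims, cutting storage roughly 4x
# small_vectors = truncate_embeddings(full_vectors, 256)
```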
Like its predecessor, `jina-clip-v2` bridges the gap between text-to-text and cross-modal retrieval. Via a single vector space, `jina-clip-v2` offers state-of-the-art performance on both tasks.
This dual capability makes it an excellent tool for multimodal retrieval-augmented generation (MuRAG) applications, enabling seamless text-to-text and text-to-image searches within a single model.
## Data & Parameters
Check out our paper. Updated technical report for v2 coming soon!
## Usage
- The easiest way to start using `jina-clip-v2` is via Jina AI's Embeddings API (see the request sketch below).
- Alternatively, you can use the model directly via the `transformers` or `sentence-transformers` packages.
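For illustration, here is a rough sketch of calling the Embeddings API with `requests`. The endpoint, payload shape, and response format are assumptions based on the OpenAI-compatible v1 embeddings endpoint; consult the API documentation for the authoritative schema, and set the `JINA_API_KEY` environment variable to your own key.

```python
# Hedged sketch: endpoint, payload, and response shape are assumptions, not an official example.
import os
import requests

response = requests.post(
    "https://api.jina.ai/v1/embeddings",
    headers={"Authorization": f"Bearer {os.environ['JINA_API_KEY']}"},
    json={
        "model": "jina-clip-v2",
        "input": [
            {"text": "A chef cooking pasta in the kitchen"},  # text input
            {"image": "https://i.ibb.co/bRGGJxD/DALL-E-2024-11-20-13-44-46-A-highly-realistic-8-K-photographic-image-of-a-chef-cooking-pasta-in-a-mo.webp"},  # image URL input
        ],
    },
)
response.raise_for_status()
embeddings = [item["embedding"] for item in response.json()["data"]]
```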
```python
# !pip install transformers einops timm pillow

from transformers import AutoModel

# Initialize the model
model = AutoModel.from_pretrained("jinaai/jina-clip-v2", trust_remote_code=True)

# Corpus
sentences = [
    "طاهٍ يطبخ المعكرونة في المطبخ",  # Arabic
    "厨师在厨房煮意大利面",  # Chinese
    "Un chef qui cuisine des pâtes dans la cuisine",  # French
    "Ein Koch, der in der Küche Pasta kocht",  # German
    "Ένας σεφ μαγειρεύει ζυμαρικά στην κουζίνα",  # Greek
    "एक शेफ रसोई में पास्ता पका रहा है",  # Hindi
    "Uno chef che cucina la pasta in cucina",  # Italian
    "シェフがキッチンでパスタを作っている",  # Japanese
    "셰프가 주방에서 파스타를 요리하고 있다",  # Korean
]

# Public image URLs or PIL images
image_urls = ["https://i.ibb.co/bRGGJxD/DALL-E-2024-11-20-13-44-46-A-highly-realistic-8-K-photographic-image-of-a-chef-cooking-pasta-in-a-mo.webp"]

# Choose a matryoshka dimension; set to None to get the full 1024-dim vectors
truncate_dim = 512

# Encode text and images
text_embeddings = model.encode_text(sentences, truncate_dim=truncate_dim)
image_embeddings = model.encode_image(
    image_urls, truncate_dim=truncate_dim
)  # also accepts PIL.Image objects, local filenames, dataURIs

# Encode query text
query = "A chef cooking pasta in the kitchen"  # English
query_embeddings = model.encode_text(
    query, task="retrieval.query", truncate_dim=truncate_dim
)

# text to image
print("En -> Img: " + str(query_embeddings @ image_embeddings[0].T))
# text to text
print("En -> Ar: " + str(query_embeddings @ text_embeddings[0].T))
print("En -> Zh: " + str(query_embeddings @ text_embeddings[1].T))
print("En -> Fr: " + str(query_embeddings @ text_embeddings[2].T))
print("En -> De: " + str(query_embeddings @ text_embeddings[3].T))
print("En -> Gr: " + str(query_embeddings @ text_embeddings[4].T))
print("En -> Hi: " + str(query_embeddings @ text_embeddings[5].T))
print("En -> It: " + str(query_embeddings @ text_embeddings[6].T))
print("En -> Jp: " + str(query_embeddings @ text_embeddings[7].T))
print("En -> Ko: " + str(query_embeddings @ text_embeddings[8].T))
```
or via `sentence-transformers`:
```python
# !pip install sentence-transformers einops timm pillow

from sentence_transformers import SentenceTransformer

# Initialize the model
truncate_dim = 512
model = SentenceTransformer(
    "jinaai/jina-clip-v2", trust_remote_code=True, truncate_dim=truncate_dim
)

# Corpus
sentences = [
    "طاهٍ يطبخ المعكرونة في المطبخ",  # Arabic
    "厨师在厨房煮意大利面",  # Chinese
    "Un chef qui cuisine des pâtes dans la cuisine",  # French
    "Ein Koch, der in der Küche Pasta kocht",  # German
    "Ένας σεφ μαγειρεύει ζυμαρικά στην κουζίνα",  # Greek
    "एक शेफ रसोई में पास्ता पका रहा है",  # Hindi
    "Uno chef che cucina la pasta in cucina",  # Italian
    "シェフがキッチンでパスタを作っている",  # Japanese
    "셰프가 주방에서 파스타를 요리하고 있다",  # Korean
]

# Public image URLs or PIL images
image_urls = ["https://i.ibb.co/bRGGJxD/DALL-E-2024-11-20-13-44-46-A-highly-realistic-8-K-photographic-image-of-a-chef-cooking-pasta-in-a-mo.webp"]

text_embeddings = model.encode(sentences)
image_embeddings = model.encode(image_urls)

query = "A chef cooking pasta in the kitchen"  # English
query_embeddings = model.encode(query)
```
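To compare the query against the corpus and the image, you can compute cosine similarities with `sentence_transformers.util`; a short follow-up sketch reusing the variables from the example above:

```python
from sentence_transformers import util

# text-to-text: English query vs. the multilingual corpus
print(util.cos_sim(query_embeddings, text_embeddings))

# text-to-image: English query vs. the encoded image
print(util.cos_sim(query_embeddings, image_embeddings))
```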
## Contact
Join our Discord community and chat with other community members about ideas.
## Citation
If you find `jina-clip-v2` useful in your research, please cite the following paper:
```bibtex
@misc{2405.20204,
    Author = {Andreas Koukounas and Georgios Mastrapas and Michael Günther and Bo Wang and Scott Martens and Isabelle Mohr and Saba Sturua and Mohammad Kalim Akram and Joan Fontanals Martínez and Saahil Ognawala and Susana Guzman and Maximilian Werk and Nan Wang and Han Xiao},
    Title = {Jina CLIP: Your CLIP Model Is Also Your Text Retriever},
    Year = {2024},
    Eprint = {arXiv:2405.20204},
}
```