---
license: cc-by-nc-4.0
tags:
- feature-extraction
- sentence-similarity
- mteb
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
inference: false
library_name: transformers
---


The embedding set trained by Jina AI.

# jina-embeddings-v3: A Multilingual Multi-Task Embedding Model

## Quick Start

The easiest way to start using `jina-embeddings-v3` is through Jina AI's [Embedding API](https://jina.ai/embeddings/).

## Intended Usage & Model Info

`jina-embeddings-v3` is a **multilingual multi-task text embedding model** designed for a variety of NLP applications. Based on the [XLM-RoBERTa architecture](https://huggingface.co/jinaai/xlm-roberta-flash-implementation), this model supports [Rotary Position Embeddings (RoPE)](https://arxiv.org/abs/2104.09864) to handle long sequences up to **8192 tokens**. Additionally, it features [LoRA](https://arxiv.org/abs/2106.09685) adapters to generate task-specific embeddings efficiently.

### Key Features:

- **Extended Sequence Length:** Supports up to 8192 tokens with RoPE.
- **Task-Specific Embedding:** Customize embeddings through the `task_type` argument with the following options:
  - `retrieval.query`: Used for query embeddings in asymmetric retrieval tasks
  - `retrieval.passage`: Used for passage embeddings in asymmetric retrieval tasks
  - `separation`: Used for embeddings in clustering and re-ranking applications
  - `classification`: Used for embeddings in classification tasks
  - `text-matching`: Used for embeddings in tasks that quantify similarity between two texts, such as STS or symmetric retrieval tasks
- **Matryoshka Embeddings:** Supports flexible embedding sizes (`32, 64, 128, 256, 512, 768, 1024`), allowing you to truncate embeddings to fit your application (see the retrieval sketch under Usage below).

### Model Lineage:

`jina-embeddings-v3` builds upon the [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) model, which was originally trained on 100 languages. We extended its capabilities with an extra pretraining phase on the [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset, then contrastively fine-tuned it on 30 languages for enhanced performance on embedding tasks in both monolingual and cross-lingual setups.

### Supported Languages:

While the base model supports 100 languages, we've focused our tuning efforts on the following 30 languages: **Arabic, Bengali, Chinese, Danish, Dutch, English, Finnish, French, Georgian, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Latvian, Norwegian, Polish, Portuguese, Romanian, Russian, Slovak, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu,** and **Vietnamese.**

## Data & Parameters

The data and training details are described in the technical report (coming soon).

## Usage

**Apply mean pooling when integrating the model.**
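
If you only need vectors and don't want to host the model yourself, the [Embedding API](https://jina.ai/embeddings/) from the Quick Start handles tokenization and pooling server-side. The sketch below assumes an OpenAI-compatible JSON schema; the endpoint, header, and field names are assumptions, so verify them against the API documentation:

```python
import requests

# Hypothetical request shape -- the endpoint and JSON fields below are
# assumptions; consult https://jina.ai/embeddings/ for the authoritative schema.
response = requests.post(
    "https://api.jina.ai/v1/embeddings",                 # assumed endpoint
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},  # replace with your key
    json={
        "model": "jina-embeddings-v3",
        "input": ["How is the weather today?"],
    },
)
print(response.json())
```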

### Why Use Mean Pooling?

Mean pooling takes all token embeddings from the model's output and averages them at the sentence or paragraph level. This approach has been shown to produce high-quality sentence embeddings. We provide an `encode` function that handles this for you automatically. However, if you're working with the model directly, outside of the `encode` function, you'll need to apply mean pooling manually. Here's how you can do it:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel


def mean_pooling(model_output, attention_mask):
    # Average the token embeddings, ignoring padding positions
    # indicated by the attention mask.
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


sentences = ['How is the weather today?', 'What is the current weather like today?']

tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v3')
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v3', trust_remote_code=True)

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

# Pool token embeddings into sentence embeddings and L2-normalize them
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
```
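
The manual pooling above is what the `encode` method does for you; `encode` also exposes the task-specific adapters and Matryoshka truncation listed under Key Features. Here is a minimal asymmetric-retrieval sketch: the `task_type` values and the `truncate_dim` argument follow the usage shown in this card, while the sentences and the cosine-similarity helper are illustrative:

```python
import numpy as np
from transformers import AutoModel

model = AutoModel.from_pretrained('jinaai/jina-embeddings-v3', trust_remote_code=True)

# Encode a query and candidate passages with their respective retrieval
# adapters, truncating the Matryoshka embeddings to 256 dimensions.
query_embs = model.encode(
    ['What is the weather like today?'],
    task_type='retrieval.query',
    truncate_dim=256,
)
passage_embs = model.encode(
    [
        'The forecast predicts sunshine with a light breeze.',
        'Organic skincare products suit sensitive skin.',
    ],
    task_type='retrieval.passage',
    truncate_dim=256,
)

# Truncation changes vector norms, so normalize before taking dot
# products to obtain cosine similarities.
def cos_sim(a, b):
    return float((a / np.linalg.norm(a)) @ (b / np.linalg.norm(b)))

for emb in passage_embs:
    print(cos_sim(query_embs[0], emb))
```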

1. The easiest way to start using `jina-embeddings-v3` is to use Jina AI's [Embeddings API](https://jina.ai/embeddings/).
2. Alternatively, you can use `jina-embeddings-v3` directly via the `transformers` package.

```python
!pip install transformers einops flash_attn

from transformers import AutoModel

# Initialize the model
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v3', trust_remote_code=True)

# Two topics (skincare, makeup) in English, German, Spanish, Chinese, and Japanese
sentences = [
    "Organic skincare for sensitive skin with aloe vera and chamomile.",
    "New makeup trends focus on bold colors and innovative techniques",
    "Bio-Hautpflege für empfindliche Haut mit Aloe Vera und Kamille",
    "Neue Make-up-Trends setzen auf kräftige Farben und innovative Techniken",
    "Cuidado de la piel orgánico para piel sensible con aloe vera y manzanilla",
    "Las nuevas tendencias de maquillaje se centran en colores vivos y técnicas innovadoras",
    "针对敏感肌专门设计的天然有机护肤产品",
    "新的化妆趋势注重鲜艳的颜色和创新的技巧",
    "敏感肌のために特別に設計された天然有機スキンケア製品",
    "新しいメイクのトレンドは鮮やかな色と革新的な技術に焦点を当てています",
]

# Encode sentences with the symmetric text-matching adapter
embeddings = model.encode(sentences, truncate_dim=1024, task_type='text-matching')

# Compute the similarity between the two English sentences
print(embeddings[0] @ embeddings[1].T)
```

## Performance

Benchmark results will be published alongside the technical report (coming soon).

## Contact

Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.

## Citation

If you find `jina-embeddings-v3` useful in your research, please cite our technical report (citation coming soon):

```bibtex
```