modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1-901k chars) |
---|---|---|---|---|---|---|---|---|---|
tohoku-nlp/bert-base-japanese-whole-word-masking | tohoku-nlp | "2024-02-22T00:57:37Z" | 283,403 | 55 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東北大学で[MASK]の研究をしています。
---
# BERT base Japanese (IPA dictionary, whole word masking enabled)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by the WordPiece subword tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The code for pretraining is available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v1.0).
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The model is trained on Japanese Wikipedia as of September 1, 2019.
To generate the training corpus, [WikiExtractor](https://github.com/attardi/wikiextractor) is used to extract plain texts from a dump file of Wikipedia articles.
The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.
## Tokenization
The texts are first tokenized by [MeCab](https://taku910.github.io/mecab/) morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32000.
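As a quick illustration of this two-stage tokenization, here is a minimal sketch, assuming the `transformers` package together with `fugashi` and `ipadic` for MeCab support:
```python
# Minimal sketch; assumes: pip install transformers fugashi ipadic
from transformers import AutoTokenizer, pipeline

model_name = "tohoku-nlp/bert-base-japanese-whole-word-masking"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# MeCab (IPA dictionary) word segmentation followed by WordPiece subwords
print(tokenizer.tokenize("東北大学で自然言語処理の研究をしています。"))

# Fill-mask usage, matching the widget example above
fill_mask = pipeline("fill-mask", model=model_name)
print(fill_mask("東北大学で[MASK]の研究をしています。")[0]["token_str"])
```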
## Training
The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For the training of the MLM (masked language modeling) objective, we introduced the **Whole Word Masking** in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
For training the models, we used Cloud TPUs provided by the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
T-Systems-onsite/cross-en-de-roberta-sentence-transformer | T-Systems-onsite | "2024-04-07T21:18:12Z" | 283,157 | 54 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"search",
"roberta",
"xlm-r-distilroberta-base-paraphrase-v1",
"paraphrase",
"de",
"en",
"multilingual",
"dataset:stsb_multi_mt",
"arxiv:1908.10084",
"license:mit",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2022-03-02T23:29:05Z" | ---
language:
- de
- en
- multilingual
license: mit
tags:
- sentence_embedding
- search
- pytorch
- xlm-roberta
- roberta
- xlm-r-distilroberta-base-paraphrase-v1
- paraphrase
datasets:
- stsb_multi_mt
metrics:
- Spearman’s rank correlation
- cosine similarity
---
# Cross English & German RoBERTa for Sentence Embeddings
This model is intended to [compute sentence (text) embeddings](https://www.sbert.net/examples/applications/computing-embeddings/README.html) for English and German text. These embeddings can then be compared with [cosine-similarity](https://en.wikipedia.org/wiki/Cosine_similarity) to find sentences with a similar semantic meaning. For example this can be useful for [semantic textual similarity](https://www.sbert.net/docs/usage/semantic_textual_similarity.html), [semantic search](https://www.sbert.net/docs/usage/semantic_search.html), or [paraphrase mining](https://www.sbert.net/docs/usage/paraphrase_mining.html). To do this you have to use the [Sentence Transformers Python framework](https://github.com/UKPLab/sentence-transformers).
The speciality of this model is that it also works cross-lingually. Regardless of the language, sentences with the same meaning are mapped to very similar vectors. This means that you can, for example, enter a search query in German and find results matching its semantics in both German and English. Using an XLM model and _multilingual finetuning with language-crossing_, we reach a performance that even exceeds the best current dedicated English large model (see the Evaluation section below).
> Sentence-BERT (SBERT) is a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT.
Source: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
This model was fine-tuned by [Philip May](https://may.la/) and open-sourced by [T-Systems-onsite](https://www.t-systems-onsite.de/). Special thanks to [Nils Reimers](https://www.nils-reimers.de/) for his awesome open-source work, the Sentence Transformers library, the models, and his help on GitHub.
## How to use
To use this model, install the `sentence-transformers` package (see here: <https://github.com/UKPLab/sentence-transformers>).
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')
```
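As a small follow-up sketch of comparing embeddings with cosine similarity (assuming a recent `sentence-transformers` version that provides `util.cos_sim`; the sentences are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')

sentences = [
    "This is an example sentence.",
    "Das ist ein Beispielsatz.",      # German translation of the first sentence
    "The weather is nice today.",
]
embeddings = model.encode(sentences)

# Cross-lingual similarity: the English/German translation pair should score highest
print(util.cos_sim(embeddings[0], embeddings[1]))
print(util.cos_sim(embeddings[0], embeddings[2]))
```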
For details of usage and examples see here:
- [Computing Sentence Embeddings](https://www.sbert.net/docs/usage/computing_sentence_embeddings.html)
- [Semantic Textual Similarity](https://www.sbert.net/docs/usage/semantic_textual_similarity.html)
- [Paraphrase Mining](https://www.sbert.net/docs/usage/paraphrase_mining.html)
- [Semantic Search](https://www.sbert.net/docs/usage/semantic_search.html)
- [Cross-Encoders](https://www.sbert.net/docs/usage/cross-encoder.html)
- [Examples on GitHub](https://github.com/UKPLab/sentence-transformers/tree/master/examples)
## Training
The base model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). This model has been further trained by [Nils Reimers](https://www.nils-reimers.de/) on a large scale paraphrase dataset for 50+ languages. [Nils Reimers](https://www.nils-reimers.de/) about this [on GitHub](https://github.com/UKPLab/sentence-transformers/issues/509#issuecomment-712243280):
>A paper is upcoming for the paraphrase models.
>
>These models were trained on various datasets with Millions of examples for paraphrases, mainly derived from Wikipedia edit logs, paraphrases mined from Wikipedia and SimpleWiki, paraphrases from news reports, AllNLI-entailment pairs with in-batch-negative loss etc.
>
>In internal tests, they perform much better than the NLI+STSb models as they have seen more and broader types of training data. NLI+STSb has the issue that they are rather narrow in their domain and do not contain any domain-specific words / sentences (like from chemistry, computer science, math etc.). The paraphrase models have seen plenty of sentences from various domains.
>
>More details with the setup, all the datasets, and a wider evaluation will follow soon.
The resulting model called `xlm-r-distilroberta-base-paraphrase-v1` has been released here: <https://github.com/UKPLab/sentence-transformers/releases/tag/v0.3.8>
Building on this cross-language model, we fine-tuned it for English and German on the [STSbenchmark](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) dataset. For German we used our [German STSbenchmark dataset](https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark), which was translated with [deepl.com](https://www.deepl.com/translator). In addition to the German and English training samples, we generated samples crossing English and German. We call this _multilingual finetuning with language-crossing_. It doubled the training data size, and tests show that it further improves performance.
We ran an automatic hyperparameter search with 33 trials using [Optuna](https://github.com/optuna/optuna). Using 10-fold cross-validation on the deepl.com test and dev datasets, we found the following best hyperparameters:
- batch_size = 8
- num_epochs = 2
- lr = 1.026343323298136e-05
- eps = 4.462251033010287e-06
- weight_decay = 0.04794438776350409
- warmup_steps_proportion = 0.1609010732760181
The final model was trained with these hyperparameters on the combination of the train and dev datasets from English, German, and their crossings. The test set was held out for evaluation.
# Evaluation
The evaluation was done on English, German, and both languages crossed, using the STSbenchmark test data. The evaluation code is available on [Colab](https://colab.research.google.com/drive/1gtGnKq_dYU_sDYqMohTYVMVpxMJjyH0M?usp=sharing). As the evaluation metric we use Spearman's rank correlation between the cosine similarity of the sentence embeddings and the STSbenchmark labels.
| Model Name | Spearman<br/>German | Spearman<br/>English | Spearman<br/>EN-DE & DE-EN<br/>(cross) |
|---------------------------------------------------------------|-------------------|--------------------|------------------|
| xlm-r-distilroberta-base-paraphrase-v1 | 0.8079 | 0.8350 | 0.7983 |
| [xlm-r-100langs-bert-base-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens) | 0.7877 | 0.8465 | 0.7908 |
| xlm-r-bert-base-nli-stsb-mean-tokens | 0.7877 | 0.8465 | 0.7908 |
| [roberta-large-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/roberta-large-nli-stsb-mean-tokens) | 0.6371 | 0.8639 | 0.4109 |
| [T-Systems-onsite/<br/>german-roberta-sentence-transformer-v2](https://huggingface.co/T-Systems-onsite/german-roberta-sentence-transformer-v2) | 0.8529 | 0.8634 | 0.8415 |
| [paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) | 0.8355 | **0.8682** | 0.8309 |
| **T-Systems-onsite/<br/>cross-en-de-roberta-sentence-transformer** | **0.8550** | 0.8660 | **0.8525** |
## License
Copyright (c) 2020 [Philip May](https://philipmay.org), T-Systems on site services GmbH
Licensed under the MIT License (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License by reviewing the file [LICENSE](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer/blob/main/LICENSE) in the repository.
|
latent-consistency/lcm-lora-sdv1-5 | latent-consistency | "2023-11-16T16:01:30Z" | 282,334 | 433 | diffusers | [
"diffusers",
"lora",
"text-to-image",
"arxiv:2311.05556",
"base_model:runwayml/stable-diffusion-v1-5",
"license:openrail++",
"region:us"
] | text-to-image | "2023-11-07T11:20:24Z" | ---
library_name: diffusers
base_model: runwayml/stable-diffusion-v1-5
tags:
- lora
- text-to-image
license: openrail++
inference: false
---
# Latent Consistency Model (LCM) LoRA: SDv1-5
Latent Consistency Model (LCM) LoRA was proposed in [LCM-LoRA: A universal Stable-Diffusion Acceleration Module](https://arxiv.org/abs/2311.05556)
by *Simian Luo, Yiqin Tan, Suraj Patil, Daniel Gu et al.*
It is a distilled consistency adapter for [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) that allows
the number of inference steps to be reduced to only **2 - 8 steps**.
| Model | Params / M |
|----------------------------------------------------------------------------|------------|
| [**lcm-lora-sdv1-5**](https://huggingface.co/latent-consistency/lcm-lora-sdv1-5) | **67.5** |
| [lcm-lora-ssd-1b](https://huggingface.co/latent-consistency/lcm-lora-ssd-1b) | 105 |
| [lcm-lora-sdxl](https://huggingface.co/latent-consistency/lcm-lora-sdxl) | 197 |
## Usage
LCM-LoRA is supported in the 🤗 Hugging Face Diffusers library from version v0.23.0 onwards. To run the model, first
install the latest version of the Diffusers library as well as `peft`, `accelerate` and `transformers`:
```bash
pip install --upgrade pip
pip install --upgrade diffusers transformers accelerate peft
```
***Note: For detailed usage examples, we recommend checking out our official [LCM-LoRA docs](https://huggingface.co/docs/diffusers/main/en/using-diffusers/inference_with_lcm_lora).***
### Text-to-Image
The adapter can be loaded with SDv1-5 or its derivatives. Here we use [`Lykon/dreamshaper-7`](https://huggingface.co/Lykon/dreamshaper-7). Next, the scheduler needs to be changed to [`LCMScheduler`](https://huggingface.co/docs/diffusers/v0.22.3/en/api/schedulers/lcm#diffusers.LCMScheduler), and we can reduce the number of inference steps to just 2 to 8.
Please make sure to either disable `guidance_scale` or use values between 1.0 and 2.0.
```python
import torch
from diffusers import LCMScheduler, AutoPipelineForText2Image
model_id = "Lykon/dreamshaper-7"
adapter_id = "latent-consistency/lcm-lora-sdv1-5"
pipe = AutoPipelineForText2Image.from_pretrained(model_id, torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")
# load and fuse lcm lora
pipe.load_lora_weights(adapter_id)
pipe.fuse_lora()
prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
# disable guidance_scale by passing 0
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0).images[0]
```
![](./image.png)
### Image-to-Image
LCM-LoRA can be applied to image-to-image tasks too. Let's look at how we can perform image-to-image generation with LCMs. For this example we'll use the [dreamshaper-7](https://huggingface.co/Lykon/dreamshaper-7) model and the LCM-LoRA for `stable-diffusion-v1-5`.
```python
import torch
from diffusers import AutoPipelineForImage2Image, LCMScheduler
from diffusers.utils import make_image_grid, load_image
pipe = AutoPipelineForImage2Image.from_pretrained(
"Lykon/dreamshaper-7",
torch_dtype=torch.float16,
variant="fp16",
).to("cuda")
# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# load LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipe.fuse_lora()
# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)
prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k"
# pass prompt and image to pipeline
generator = torch.manual_seed(0)
image = pipe(
prompt,
image=init_image,
num_inference_steps=4,
guidance_scale=1,
strength=0.6,
generator=generator
).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```
![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_sdv1-5_i2i.png)
### Inpainting
LCM-LoRA can be used for inpainting as well.
```python
import torch
from diffusers import AutoPipelineForInpainting, LCMScheduler
from diffusers.utils import load_image, make_image_grid
pipe = AutoPipelineForInpainting.from_pretrained(
"runwayml/stable-diffusion-inpainting",
torch_dtype=torch.float16,
variant="fp16",
).to("cuda")
# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# load LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipe.fuse_lora()
# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")
# generator = torch.Generator("cuda").manual_seed(92)
prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
generator = torch.manual_seed(0)
image = pipe(
prompt=prompt,
image=init_image,
mask_image=mask_image,
generator=generator,
num_inference_steps=4,
guidance_scale=4,
).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```
![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_sdv1-5_inpainting.png)
### ControlNet
For this example, we'll use the SD-v1-5 model and the LCM-LoRA for SD-v1-5 with canny ControlNet.
```python
import torch
import cv2
import numpy as np
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler
from diffusers.utils import load_image, make_image_grid
image = load_image(
"https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
).resize((512, 512))
image = np.array(image)
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
controlnet=controlnet,
torch_dtype=torch.float16,
safety_checker=None,
variant="fp16"
).to("cuda")
# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# load LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
generator = torch.manual_seed(0)
image = pipe(
"the mona lisa",
image=canny_image,
num_inference_steps=4,
guidance_scale=1.5,
controlnet_conditioning_scale=0.8,
cross_attention_kwargs={"scale": 1},
generator=generator,
).images[0]
make_image_grid([canny_image, image], rows=1, cols=2)
```
![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_sdv1-5_controlnet.png)
## Speed Benchmark
TODO
## Training
TODO |
prithivida/parrot_adequacy_model | prithivida | "2022-05-27T02:47:22Z" | 279,313 | 7 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-05-27T02:04:37Z" | ---
license: apache-2.0
---
# Parrot

THIS IS AN ANCILLARY MODEL FOR PARROT PARAPHRASER

## 1. What is Parrot?

Parrot is a paraphrase-based utterance augmentation framework purpose-built to accelerate training NLU models. A paraphrase framework is more than just a paraphrasing model. Please refer to the GitHub page or the model card of [prithivida/parrot_paraphraser_on_T5](https://huggingface.co/prithivida/parrot_paraphraser_on_T5). |
sentence-transformers/LaBSE | sentence-transformers | "2024-06-03T09:38:00Z" | 278,638 | 185 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"sentence-similarity",
"multilingual",
"af",
"sq",
"am",
"ar",
"hy",
"as",
"az",
"eu",
"be",
"bn",
"bs",
"bg",
"my",
"ca",
"ceb",
"zh",
"co",
"hr",
"cs",
"da",
"nl",
"en",
"eo",
"et",
"fi",
"fr",
"fy",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"ha",
"haw",
"he",
"hi",
"hmn",
"hu",
"is",
"ig",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"km",
"rw",
"ko",
"ku",
"ky",
"lo",
"la",
"lv",
"lt",
"lb",
"mk",
"mg",
"ms",
"ml",
"mt",
"mi",
"mr",
"mn",
"ne",
"no",
"ny",
"or",
"fa",
"pl",
"pt",
"pa",
"ro",
"ru",
"sm",
"gd",
"sr",
"st",
"sn",
"si",
"sk",
"sl",
"so",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"th",
"bo",
"tr",
"tk",
"ug",
"uk",
"ur",
"uz",
"vi",
"cy",
"wo",
"xh",
"yi",
"yo",
"zu",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
language:
- multilingual
- af
- sq
- am
- ar
- hy
- as
- az
- eu
- be
- bn
- bs
- bg
- my
- ca
- ceb
- zh
- co
- hr
- cs
- da
- nl
- en
- eo
- et
- fi
- fr
- fy
- gl
- ka
- de
- el
- gu
- ht
- ha
- haw
- he
- hi
- hmn
- hu
- is
- ig
- id
- ga
- it
- ja
- jv
- kn
- kk
- km
- rw
- ko
- ku
- ky
- lo
- la
- lv
- lt
- lb
- mk
- mg
- ms
- ml
- mt
- mi
- mr
- mn
- ne
- no
- ny
- or
- fa
- pl
- pt
- pa
- ro
- ru
- sm
- gd
- sr
- st
- sn
- si
- sk
- sl
- so
- es
- su
- sw
- sv
- tl
- tg
- ta
- tt
- te
- th
- bo
- tr
- tk
- ug
- uk
- ur
- uz
- vi
- cy
- wo
- xh
- yi
- yo
- zu
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
library_name: sentence-transformers
license: apache-2.0
---
# LaBSE
This is a port of the [LaBSE](https://tfhub.dev/google/LaBSE/1) model to PyTorch. It can be used to map 109 languages to a shared vector space.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/LaBSE')
embeddings = model.encode(sentences)
print(embeddings)
```
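As a small cross-lingual sketch (the sentence pairs are illustrative; since the model ends with a `Normalize()` module, shown in the architecture below, the dot product of two embeddings equals their cosine similarity):
```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/LaBSE')

english = ["Hello, how are you?", "The cat sits on the mat."]
german = ["Hallo, wie geht es dir?", "Die Katze sitzt auf der Matte."]

en_emb = model.encode(english)
de_emb = model.encode(german)

# Embeddings are L2-normalized, so the dot product is the cosine similarity;
# the diagonal entries (translation pairs) should dominate.
print(np.matmul(en_emb, de_emb.T))
```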
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/LaBSE)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
## Citing & Authors
Have a look at [LaBSE](https://tfhub.dev/google/LaBSE/1) for the respective publication that describes LaBSE.
|
snrspeaks/KeyPhraseTransformer | snrspeaks | "2022-03-25T13:05:44Z" | 276,406 | 8 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-03-25T12:23:25Z" | ---
license: mit
---
|
amazon/chronos-t5-small | amazon | "2024-05-13T21:08:16Z" | 275,117 | 16 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"time series",
"forecasting",
"pretrained models",
"foundation models",
"time series foundation models",
"time-series",
"time-series-forecasting",
"arxiv:2403.07815",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | time-series-forecasting | "2024-02-21T10:06:21Z" | ---
license: apache-2.0
pipeline_tag: time-series-forecasting
tags:
- time series
- forecasting
- pretrained models
- foundation models
- time series foundation models
- time-series
---
# Chronos-T5 (Small)
Chronos is a family of **pretrained time series forecasting models** based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes.
For details on Chronos models, training data and procedures, and experimental results, please refer to the paper [Chronos: Learning the Language of Time Series](https://arxiv.org/abs/2403.07815).
<p align="center">
<img src="figures/main-figure.png" width="100%">
<br />
<span>
Fig. 1: High-level depiction of Chronos. (<b>Left</b>) The input time series is scaled and quantized to obtain a sequence of tokens. (<b>Center</b>) The tokens are fed into a language model which may either be an encoder-decoder or a decoder-only model. The model is trained using the cross-entropy loss. (<b>Right</b>) During inference, we autoregressively sample tokens from the model and map them back to numerical values. Multiple trajectories are sampled to obtain a predictive distribution.
</span>
</p>
---
## Architecture
The models in this repository are based on the [T5 architecture](https://arxiv.org/abs/1910.10683). The only difference is in the vocabulary size: Chronos-T5 models use 4096 different tokens, compared to 32128 of the original T5 models, resulting in fewer parameters.
| Model | Parameters | Based on |
| ---------------------------------------------------------------------- | ---------- | ---------------------------------------------------------------------- |
| [**chronos-t5-tiny**](https://huggingface.co/amazon/chronos-t5-tiny) | 8M | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) |
| [**chronos-t5-mini**](https://huggingface.co/amazon/chronos-t5-mini) | 20M | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini) |
| [**chronos-t5-small**](https://huggingface.co/amazon/chronos-t5-small) | 46M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small) |
| [**chronos-t5-base**](https://huggingface.co/amazon/chronos-t5-base) | 200M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base) |
| [**chronos-t5-large**](https://huggingface.co/amazon/chronos-t5-large) | 710M | [t5-efficient-large](https://huggingface.co/google/t5-efficient-large) |
## Usage
To perform inference with Chronos models, install the package in the GitHub [companion repo](https://github.com/amazon-science/chronos-forecasting) by running:
```
pip install git+https://github.com/amazon-science/chronos-forecasting.git
```
A minimal example showing how to perform inference using Chronos models:
```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch
from chronos import ChronosPipeline
pipeline = ChronosPipeline.from_pretrained(
"amazon/chronos-t5-small",
device_map="cuda",
torch_dtype=torch.bfloat16,
)
df = pd.read_csv("https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv")
# context must be either a 1D tensor, a list of 1D tensors,
# or a left-padded 2D tensor with batch as the first dimension
context = torch.tensor(df["#Passengers"])
prediction_length = 12
forecast = pipeline.predict(context, prediction_length) # shape [num_series, num_samples, prediction_length]
# visualize the forecast
forecast_index = range(len(df), len(df) + prediction_length)
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
plt.figure(figsize=(8, 4))
plt.plot(df["#Passengers"], color="royalblue", label="historical data")
plt.plot(forecast_index, median, color="tomato", label="median forecast")
plt.fill_between(forecast_index, low, high, color="tomato", alpha=0.3, label="80% prediction interval")
plt.legend()
plt.grid()
plt.show()
```
## Citation
If you find Chronos models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):
```
@article{ansari2024chronos,
author = {Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
title = {Chronos: Learning the Language of Time Series},
journal = {arXiv preprint arXiv:2403.07815},
year = {2024}
}
```
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This project is licensed under the Apache-2.0 License.
|
SanctumAI/Meta-Llama-3-8B-Instruct-GGUF | SanctumAI | "2024-05-29T00:29:38Z" | 273,062 | 24 | transformers | [
"transformers",
"gguf",
"llama",
"facebook",
"meta",
"pytorch",
"llama-3",
"text-generation",
"en",
"license:other",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-22T15:21:47Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
license_link: LICENSE
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6516c820cb45675045da65db/KCM8BE64_gafrfai3SOik.png)
*This model was quantized by [SanctumAI](https://sanctum.ai). To leave feedback, join our community in [Discord](https://discord.gg/7ZNE78HJKh).*
# Meta Llama 3 8B Instruct GGUF
**Model creator:** [meta-llama](https://huggingface.co/meta-llama)<br>
**Original model**: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)<br>
## Model Summary:
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
## Prompt Template:
If you're using the Sanctum app, simply use the `Llama 3` model preset.
Prompt template:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
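As a hedged sketch of running one of the quantized files listed below locally with the `llama-cpp-python` package (the file name, context size, and messages are examples only; recent llama.cpp builds can usually pick up the Llama 3 chat template shown above from the GGUF metadata):
```python
# Minimal sketch; assumes: pip install llama-cpp-python
# and that a quant file from the table below has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="meta-llama-3-8b-instruct.Q4_K_M.gguf",  # example local path
    n_ctx=8192,
)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a GGUF file is in one sentence."},
    ],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```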
## Hardware Requirements Estimate
| Name | Quant method | Size | Memory (RAM, vRAM) required |
| ---- | ---- | ---- | ---- |
| [meta-llama-3-8b-instruct.Q2_K.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q2_K.gguf) | Q2_K | 3.18 GB | 7.20 GB |
| [meta-llama-3-8b-instruct.Q3_K_S.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q3_K_S.gguf) | Q3_K_S | 3.67 GB | 7.65 GB |
| [meta-llama-3-8b-instruct.Q3_K_M.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q3_K_M.gguf) | Q3_K_M | 4.02 GB | 7.98 GB |
| [meta-llama-3-8b-instruct.Q3_K_L.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q3_K_L.gguf) | Q3_K_L | 4.32 GB | 8.27 GB |
| [meta-llama-3-8b-instruct.Q4_0.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q4_0.gguf) | Q4_0 | 4.66 GB | 8.58 GB |
| [meta-llama-3-8b-instruct.Q4_K_S.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q4_K_S.gguf) | Q4_K_S | 4.69 GB | 8.61 GB |
| [meta-llama-3-8b-instruct.Q4_K_M.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q4_K_M.gguf) | Q4_K_M | 4.92 GB | 8.82 GB |
| [meta-llama-3-8b-instruct.Q4_K.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q4_K.gguf) | Q4_K | 4.92 GB | 8.82 GB |
| [meta-llama-3-8b-instruct.Q4_1.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q4_1.gguf) | Q4_1 | 5.13 GB | 9.02 GB |
| [meta-llama-3-8b-instruct.Q5_0.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q5_0.gguf) | Q5_0 | 5.60 GB | 9.46 GB |
| [meta-llama-3-8b-instruct.Q5_K_S.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q5_K_S.gguf) | Q5_K_S | 5.60 GB | 9.46 GB |
| [meta-llama-3-8b-instruct.Q5_K_M.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q5_K_M.gguf) | Q5_K_M | 5.73 GB | 9.58 GB |
| [meta-llama-3-8b-instruct.Q5_K.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q5_K.gguf) | Q5_K | 5.73 GB | 9.58 GB |
| [meta-llama-3-8b-instruct.Q5_1.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q5_1.gguf) | Q5_1 | 6.07 GB | 9.89 GB |
| [meta-llama-3-8b-instruct.Q6_K.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q6_K.gguf) | Q6_K | 6.60 GB | 10.38 GB |
| [meta-llama-3-8b-instruct.Q8_0.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.Q8_0.gguf) | Q8_0 | 8.54 GB | 12.19 GB |
| [meta-llama-3-8b-instruct.f16.gguf](https://huggingface.co/SanctumAI/Meta-Llama-3-8B-Instruct-GGUF/blob/main/meta-llama-3-8b-instruct.f16.gguf) | f16 | 16.07 GB | 19.21 GB |
## Disclaimer
Sanctum is not the creator, originator, or owner of any Model featured in the Models section of the Sanctum application. Each Model is created and provided by third parties. Sanctum does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Model listed there. You understand that supported Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Model is the sole responsibility of the person or entity who originated such Model. Sanctum may not monitor or control the Models supported and cannot, and does not, take responsibility for any such Model. Sanctum disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Models. Sanctum further disclaims any warranty that the Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Models, your downloading of any Model, or use of any other Model provided by or through Sanctum.
|
microsoft/speecht5_hifigan | microsoft | "2023-02-02T13:08:06Z" | 271,265 | 14 | transformers | [
"transformers",
"pytorch",
"hifigan",
"audio",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2023-02-02T13:06:10Z" | ---
license: mit
tags:
- audio
---
# SpeechT5 HiFi-GAN Vocoder
This is the HiFi-GAN vocoder for use with the SpeechT5 text-to-speech and voice conversion models.
SpeechT5 was first released in [this repository](https://github.com/microsoft/SpeechT5/), [original weights](https://huggingface.co/mechanicalsea/speecht5-tts). The license used is [MIT](https://github.com/microsoft/SpeechT5/blob/main/LICENSE).
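A minimal usage sketch with the 🤗 Transformers SpeechT5 text-to-speech checkpoint (the speaker embedding taken from the `Matthijs/cmu-arctic-xvectors` dataset is an illustrative choice; any 512-dimensional x-vector works):
```python
# Minimal sketch; assumes transformers, datasets and soundfile are installed
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test.", return_tensors="pt")

# Example x-vector speaker embedding
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)

# The HiFi-GAN vocoder converts the predicted spectrogram into a 16 kHz waveform
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```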
Disclaimer: The team releasing SpeechT5 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Citation
**BibTeX:**
```bibtex
@inproceedings{ao-etal-2022-speecht5,
title = {{S}peech{T}5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing},
author = {Ao, Junyi and Wang, Rui and Zhou, Long and Wang, Chengyi and Ren, Shuo and Wu, Yu and Liu, Shujie and Ko, Tom and Li, Qing and Zhang, Yu and Wei, Zhihua and Qian, Yao and Li, Jinyu and Wei, Furu},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
month = {May},
year = {2022},
pages={5723--5738},
}
```
|
google/gemma-7b-it | google | "2024-06-27T14:09:41Z" | 270,986 | 1,101 | transformers | [
"transformers",
"safetensors",
"gguf",
"gemma",
"text-generation",
"conversational",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-13T01:07:30Z" | ---
library_name: transformers
license: gemma
tags: []
widget:
- messages:
- role: user
content: How does the brain work?
inference:
parameters:
max_new_tokens: 200
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-it-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-7b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning the model
You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-7b-it`.
In that repository, we provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset
#### Running the model on a CPU
As explained below, we recommend `torch.bfloat16` as the default dtype. You can use [a different precision](#precisions) if necessary.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-7b-it",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-7b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-7b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-7b-it",
device_map="auto"
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-7b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **45.0** | **56.9** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
| ------------------------------ | ------------- | ----------- | --------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, with input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open model
alternatives.
|
Helsinki-NLP/opus-mt-mul-en | Helsinki-NLP | "2023-08-16T12:01:25Z" | 269,809 | 57 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ca",
"es",
"os",
"eo",
"ro",
"fy",
"cy",
"is",
"lb",
"su",
"an",
"sq",
"fr",
"ht",
"rm",
"cv",
"ig",
"am",
"eu",
"tr",
"ps",
"af",
"ny",
"ch",
"uk",
"sl",
"lt",
"tk",
"sg",
"ar",
"lg",
"bg",
"be",
"ka",
"gd",
"ja",
"si",
"br",
"mh",
"km",
"th",
"ty",
"rw",
"te",
"mk",
"or",
"wo",
"kl",
"mr",
"ru",
"yo",
"hu",
"fo",
"zh",
"ti",
"co",
"ee",
"oc",
"sn",
"mt",
"ts",
"pl",
"gl",
"nb",
"bn",
"tt",
"bo",
"lo",
"id",
"gn",
"nv",
"hy",
"kn",
"to",
"io",
"so",
"vi",
"da",
"fj",
"gv",
"sm",
"nl",
"mi",
"pt",
"hi",
"se",
"as",
"ta",
"et",
"kw",
"ga",
"sv",
"ln",
"na",
"mn",
"gu",
"wa",
"lv",
"jv",
"el",
"my",
"ba",
"it",
"hr",
"ur",
"ce",
"nn",
"fi",
"mg",
"rn",
"xh",
"ab",
"de",
"cs",
"he",
"zu",
"yi",
"ml",
"mul",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
language:
- ca
- es
- os
- eo
- ro
- fy
- cy
- is
- lb
- su
- an
- sq
- fr
- ht
- rm
- cv
- ig
- am
- eu
- tr
- ps
- af
- ny
- ch
- uk
- sl
- lt
- tk
- sg
- ar
- lg
- bg
- be
- ka
- gd
- ja
- si
- br
- mh
- km
- th
- ty
- rw
- te
- mk
- or
- wo
- kl
- mr
- ru
- yo
- hu
- fo
- zh
- ti
- co
- ee
- oc
- sn
- mt
- ts
- pl
- gl
- nb
- bn
- tt
- bo
- lo
- id
- gn
- nv
- hy
- kn
- to
- io
- so
- vi
- da
- fj
- gv
- sm
- nl
- mi
- pt
- hi
- se
- as
- ta
- et
- kw
- ga
- sv
- ln
- na
- mn
- gu
- wa
- lv
- jv
- el
- my
- ba
- it
- hr
- ur
- ce
- nn
- fi
- mg
- rn
- xh
- ab
- de
- cs
- he
- zu
- yi
- ml
- mul
- en
tags:
- translation
license: apache-2.0
---
### mul-eng
* source group: Multiple languages
* target group: English
* OPUS readme: [mul-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md)
* model: transformer
* source language(s): abk acm ady afb afh_Latn afr akl_Latn aln amh ang_Latn apc ara arg arq ary arz asm ast avk_Latn awa aze_Latn bak bam_Latn bel bel_Latn ben bho bod bos_Latn bre brx brx_Latn bul bul_Latn cat ceb ces cha che chr chv cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant cor cos crh crh_Latn csb_Latn cym dan deu dsb dtp dws_Latn egl ell enm_Latn epo est eus ewe ext fao fij fin fkv_Latn fra frm_Latn frr fry fuc fuv gan gcf_Latn gil gla gle glg glv gom gos got_Goth grc_Grek grn gsw guj hat hau_Latn haw heb hif_Latn hil hin hnj_Latn hoc hoc_Latn hrv hsb hun hye iba ibo ido ido_Latn ike_Latn ile_Latn ilo ina_Latn ind isl ita izh jav jav_Java jbo jbo_Cyrl jbo_Latn jdt_Cyrl jpn kab kal kan kat kaz_Cyrl kaz_Latn kek_Latn kha khm khm_Latn kin kir_Cyrl kjh kpv krl ksh kum kur_Arab kur_Latn lad lad_Latn lao lat_Latn lav ldn_Latn lfn_Cyrl lfn_Latn lij lin lit liv_Latn lkt lld_Latn lmo ltg ltz lug lzh lzh_Hans mad mah mai mal mar max_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob_Hebr nog non_Latn nov_Latn npi nya oci ori orv_Cyrl oss ota_Arab ota_Latn pag pan_Guru pap pau pdc pes pes_Latn pes_Thaa pms pnb pol por ppl_Latn prg_Latn pus quc qya qya_Latn rap rif_Latn roh rom ron rue run rus sag sah san_Deva scn sco sgs shs_Latn shy_Latn sin sjn_Latn slv sma sme smo sna snd_Arab som spa sqi srp_Cyrl srp_Latn stq sun swe swg swh tah tam tat tat_Arab tat_Latn tel tet tgk_Cyrl tha tir tlh_Latn tly_Latn tmw_Latn toi_Latn ton tpw_Latn tso tuk tuk_Latn tur tvl tyv tzl tzl_Latn udm uig_Arab uig_Cyrl ukr umb urd uzb_Cyrl uzb_Latn vec vie vie_Hani vol_Latn vro war wln wol wuu xal xho yid yor yue yue_Hans yue_Hant zho zho_Hans zho_Hant zlm_Latn zsm_Latn zul zza
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.eval.txt)
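Since the target side is English only, no target-language token is needed at inference time. Below is a minimal usage sketch with 🤗 Transformers; the example sentences and generation defaults are illustrative:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-mul-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# source sentences may be in any of the supported languages; the target is always English
src_texts = ["Je t'aime.", "Ich liebe dich."]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```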
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-hineng.hin.eng | 8.5 | 0.341 |
| newsdev2015-enfi-fineng.fin.eng | 16.8 | 0.441 |
| newsdev2016-enro-roneng.ron.eng | 31.3 | 0.580 |
| newsdev2016-entr-tureng.tur.eng | 16.4 | 0.422 |
| newsdev2017-enlv-laveng.lav.eng | 21.3 | 0.502 |
| newsdev2017-enzh-zhoeng.zho.eng | 12.7 | 0.409 |
| newsdev2018-enet-esteng.est.eng | 19.8 | 0.467 |
| newsdev2019-engu-gujeng.guj.eng | 13.3 | 0.385 |
| newsdev2019-enlt-liteng.lit.eng | 19.9 | 0.482 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 26.7 | 0.520 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 29.8 | 0.541 |
| newssyscomb2009-ceseng.ces.eng | 21.1 | 0.487 |
| newssyscomb2009-deueng.deu.eng | 22.6 | 0.499 |
| newssyscomb2009-fraeng.fra.eng | 25.8 | 0.530 |
| newssyscomb2009-huneng.hun.eng | 15.1 | 0.430 |
| newssyscomb2009-itaeng.ita.eng | 29.4 | 0.555 |
| newssyscomb2009-spaeng.spa.eng | 26.1 | 0.534 |
| news-test2008-deueng.deu.eng | 21.6 | 0.491 |
| news-test2008-fraeng.fra.eng | 22.3 | 0.502 |
| news-test2008-spaeng.spa.eng | 23.6 | 0.514 |
| newstest2009-ceseng.ces.eng | 19.8 | 0.480 |
| newstest2009-deueng.deu.eng | 20.9 | 0.487 |
| newstest2009-fraeng.fra.eng | 25.0 | 0.523 |
| newstest2009-huneng.hun.eng | 14.7 | 0.425 |
| newstest2009-itaeng.ita.eng | 27.6 | 0.542 |
| newstest2009-spaeng.spa.eng | 25.7 | 0.530 |
| newstest2010-ceseng.ces.eng | 20.6 | 0.491 |
| newstest2010-deueng.deu.eng | 23.4 | 0.517 |
| newstest2010-fraeng.fra.eng | 26.1 | 0.537 |
| newstest2010-spaeng.spa.eng | 29.1 | 0.561 |
| newstest2011-ceseng.ces.eng | 21.0 | 0.489 |
| newstest2011-deueng.deu.eng | 21.3 | 0.494 |
| newstest2011-fraeng.fra.eng | 26.8 | 0.546 |
| newstest2011-spaeng.spa.eng | 28.2 | 0.549 |
| newstest2012-ceseng.ces.eng | 20.5 | 0.485 |
| newstest2012-deueng.deu.eng | 22.3 | 0.503 |
| newstest2012-fraeng.fra.eng | 27.5 | 0.545 |
| newstest2012-ruseng.rus.eng | 26.6 | 0.532 |
| newstest2012-spaeng.spa.eng | 30.3 | 0.567 |
| newstest2013-ceseng.ces.eng | 22.5 | 0.498 |
| newstest2013-deueng.deu.eng | 25.0 | 0.518 |
| newstest2013-fraeng.fra.eng | 27.4 | 0.537 |
| newstest2013-ruseng.rus.eng | 21.6 | 0.484 |
| newstest2013-spaeng.spa.eng | 28.4 | 0.555 |
| newstest2014-csen-ceseng.ces.eng | 24.0 | 0.517 |
| newstest2014-deen-deueng.deu.eng | 24.1 | 0.511 |
| newstest2014-fren-fraeng.fra.eng | 29.1 | 0.563 |
| newstest2014-hien-hineng.hin.eng | 14.0 | 0.414 |
| newstest2014-ruen-ruseng.rus.eng | 24.0 | 0.521 |
| newstest2015-encs-ceseng.ces.eng | 21.9 | 0.481 |
| newstest2015-ende-deueng.deu.eng | 25.5 | 0.519 |
| newstest2015-enfi-fineng.fin.eng | 17.4 | 0.441 |
| newstest2015-enru-ruseng.rus.eng | 22.4 | 0.494 |
| newstest2016-encs-ceseng.ces.eng | 23.0 | 0.500 |
| newstest2016-ende-deueng.deu.eng | 30.1 | 0.560 |
| newstest2016-enfi-fineng.fin.eng | 18.5 | 0.461 |
| newstest2016-enro-roneng.ron.eng | 29.6 | 0.562 |
| newstest2016-enru-ruseng.rus.eng | 22.0 | 0.495 |
| newstest2016-entr-tureng.tur.eng | 14.8 | 0.415 |
| newstest2017-encs-ceseng.ces.eng | 20.2 | 0.475 |
| newstest2017-ende-deueng.deu.eng | 26.0 | 0.523 |
| newstest2017-enfi-fineng.fin.eng | 19.6 | 0.465 |
| newstest2017-enlv-laveng.lav.eng | 16.2 | 0.454 |
| newstest2017-enru-ruseng.rus.eng | 24.2 | 0.510 |
| newstest2017-entr-tureng.tur.eng | 15.0 | 0.412 |
| newstest2017-enzh-zhoeng.zho.eng | 13.7 | 0.412 |
| newstest2018-encs-ceseng.ces.eng | 21.2 | 0.486 |
| newstest2018-ende-deueng.deu.eng | 31.5 | 0.564 |
| newstest2018-enet-esteng.est.eng | 19.7 | 0.473 |
| newstest2018-enfi-fineng.fin.eng | 15.1 | 0.418 |
| newstest2018-enru-ruseng.rus.eng | 21.3 | 0.490 |
| newstest2018-entr-tureng.tur.eng | 15.4 | 0.421 |
| newstest2018-enzh-zhoeng.zho.eng | 12.9 | 0.408 |
| newstest2019-deen-deueng.deu.eng | 27.0 | 0.529 |
| newstest2019-fien-fineng.fin.eng | 17.2 | 0.438 |
| newstest2019-guen-gujeng.guj.eng | 9.0 | 0.342 |
| newstest2019-lten-liteng.lit.eng | 22.6 | 0.512 |
| newstest2019-ruen-ruseng.rus.eng | 24.1 | 0.503 |
| newstest2019-zhen-zhoeng.zho.eng | 13.9 | 0.427 |
| newstestB2016-enfi-fineng.fin.eng | 15.2 | 0.428 |
| newstestB2017-enfi-fineng.fin.eng | 16.8 | 0.442 |
| newstestB2017-fien-fineng.fin.eng | 16.8 | 0.442 |
| Tatoeba-test.abk-eng.abk.eng | 2.4 | 0.190 |
| Tatoeba-test.ady-eng.ady.eng | 1.1 | 0.111 |
| Tatoeba-test.afh-eng.afh.eng | 1.7 | 0.108 |
| Tatoeba-test.afr-eng.afr.eng | 53.0 | 0.672 |
| Tatoeba-test.akl-eng.akl.eng | 5.9 | 0.239 |
| Tatoeba-test.amh-eng.amh.eng | 25.6 | 0.464 |
| Tatoeba-test.ang-eng.ang.eng | 11.7 | 0.289 |
| Tatoeba-test.ara-eng.ara.eng | 26.4 | 0.443 |
| Tatoeba-test.arg-eng.arg.eng | 35.9 | 0.473 |
| Tatoeba-test.asm-eng.asm.eng | 19.8 | 0.365 |
| Tatoeba-test.ast-eng.ast.eng | 31.8 | 0.467 |
| Tatoeba-test.avk-eng.avk.eng | 0.4 | 0.119 |
| Tatoeba-test.awa-eng.awa.eng | 9.7 | 0.271 |
| Tatoeba-test.aze-eng.aze.eng | 37.0 | 0.542 |
| Tatoeba-test.bak-eng.bak.eng | 13.9 | 0.395 |
| Tatoeba-test.bam-eng.bam.eng | 2.2 | 0.094 |
| Tatoeba-test.bel-eng.bel.eng | 36.8 | 0.549 |
| Tatoeba-test.ben-eng.ben.eng | 39.7 | 0.546 |
| Tatoeba-test.bho-eng.bho.eng | 33.6 | 0.540 |
| Tatoeba-test.bod-eng.bod.eng | 1.1 | 0.147 |
| Tatoeba-test.bre-eng.bre.eng | 14.2 | 0.303 |
| Tatoeba-test.brx-eng.brx.eng | 1.7 | 0.130 |
| Tatoeba-test.bul-eng.bul.eng | 46.0 | 0.621 |
| Tatoeba-test.cat-eng.cat.eng | 46.6 | 0.636 |
| Tatoeba-test.ceb-eng.ceb.eng | 17.4 | 0.347 |
| Tatoeba-test.ces-eng.ces.eng | 41.3 | 0.586 |
| Tatoeba-test.cha-eng.cha.eng | 7.9 | 0.232 |
| Tatoeba-test.che-eng.che.eng | 0.7 | 0.104 |
| Tatoeba-test.chm-eng.chm.eng | 7.3 | 0.261 |
| Tatoeba-test.chr-eng.chr.eng | 8.8 | 0.244 |
| Tatoeba-test.chv-eng.chv.eng | 11.0 | 0.319 |
| Tatoeba-test.cor-eng.cor.eng | 5.4 | 0.204 |
| Tatoeba-test.cos-eng.cos.eng | 58.2 | 0.643 |
| Tatoeba-test.crh-eng.crh.eng | 26.3 | 0.399 |
| Tatoeba-test.csb-eng.csb.eng | 18.8 | 0.389 |
| Tatoeba-test.cym-eng.cym.eng | 23.4 | 0.407 |
| Tatoeba-test.dan-eng.dan.eng | 50.5 | 0.659 |
| Tatoeba-test.deu-eng.deu.eng | 39.6 | 0.579 |
| Tatoeba-test.dsb-eng.dsb.eng | 24.3 | 0.449 |
| Tatoeba-test.dtp-eng.dtp.eng | 1.0 | 0.149 |
| Tatoeba-test.dws-eng.dws.eng | 1.6 | 0.061 |
| Tatoeba-test.egl-eng.egl.eng | 7.6 | 0.236 |
| Tatoeba-test.ell-eng.ell.eng | 55.4 | 0.682 |
| Tatoeba-test.enm-eng.enm.eng | 28.0 | 0.489 |
| Tatoeba-test.epo-eng.epo.eng | 41.8 | 0.591 |
| Tatoeba-test.est-eng.est.eng | 41.5 | 0.581 |
| Tatoeba-test.eus-eng.eus.eng | 37.8 | 0.557 |
| Tatoeba-test.ewe-eng.ewe.eng | 10.7 | 0.262 |
| Tatoeba-test.ext-eng.ext.eng | 25.5 | 0.405 |
| Tatoeba-test.fao-eng.fao.eng | 28.7 | 0.469 |
| Tatoeba-test.fas-eng.fas.eng | 7.5 | 0.281 |
| Tatoeba-test.fij-eng.fij.eng | 24.2 | 0.320 |
| Tatoeba-test.fin-eng.fin.eng | 35.8 | 0.534 |
| Tatoeba-test.fkv-eng.fkv.eng | 15.5 | 0.434 |
| Tatoeba-test.fra-eng.fra.eng | 45.1 | 0.618 |
| Tatoeba-test.frm-eng.frm.eng | 29.6 | 0.427 |
| Tatoeba-test.frr-eng.frr.eng | 5.5 | 0.138 |
| Tatoeba-test.fry-eng.fry.eng | 25.3 | 0.455 |
| Tatoeba-test.ful-eng.ful.eng | 1.1 | 0.127 |
| Tatoeba-test.gcf-eng.gcf.eng | 16.0 | 0.315 |
| Tatoeba-test.gil-eng.gil.eng | 46.7 | 0.587 |
| Tatoeba-test.gla-eng.gla.eng | 20.2 | 0.358 |
| Tatoeba-test.gle-eng.gle.eng | 43.9 | 0.592 |
| Tatoeba-test.glg-eng.glg.eng | 45.1 | 0.623 |
| Tatoeba-test.glv-eng.glv.eng | 3.3 | 0.119 |
| Tatoeba-test.gos-eng.gos.eng | 20.1 | 0.364 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.041 |
| Tatoeba-test.grc-eng.grc.eng | 2.1 | 0.137 |
| Tatoeba-test.grn-eng.grn.eng | 1.7 | 0.152 |
| Tatoeba-test.gsw-eng.gsw.eng | 18.2 | 0.334 |
| Tatoeba-test.guj-eng.guj.eng | 21.7 | 0.373 |
| Tatoeba-test.hat-eng.hat.eng | 34.5 | 0.502 |
| Tatoeba-test.hau-eng.hau.eng | 10.5 | 0.295 |
| Tatoeba-test.haw-eng.haw.eng | 2.8 | 0.160 |
| Tatoeba-test.hbs-eng.hbs.eng | 46.7 | 0.623 |
| Tatoeba-test.heb-eng.heb.eng | 33.0 | 0.492 |
| Tatoeba-test.hif-eng.hif.eng | 17.0 | 0.391 |
| Tatoeba-test.hil-eng.hil.eng | 16.0 | 0.339 |
| Tatoeba-test.hin-eng.hin.eng | 36.4 | 0.533 |
| Tatoeba-test.hmn-eng.hmn.eng | 0.4 | 0.131 |
| Tatoeba-test.hoc-eng.hoc.eng | 0.7 | 0.132 |
| Tatoeba-test.hsb-eng.hsb.eng | 41.9 | 0.551 |
| Tatoeba-test.hun-eng.hun.eng | 33.2 | 0.510 |
| Tatoeba-test.hye-eng.hye.eng | 32.2 | 0.487 |
| Tatoeba-test.iba-eng.iba.eng | 9.4 | 0.278 |
| Tatoeba-test.ibo-eng.ibo.eng | 5.8 | 0.200 |
| Tatoeba-test.ido-eng.ido.eng | 31.7 | 0.503 |
| Tatoeba-test.iku-eng.iku.eng | 9.1 | 0.164 |
| Tatoeba-test.ile-eng.ile.eng | 42.2 | 0.595 |
| Tatoeba-test.ilo-eng.ilo.eng | 29.7 | 0.485 |
| Tatoeba-test.ina-eng.ina.eng | 42.1 | 0.607 |
| Tatoeba-test.isl-eng.isl.eng | 35.7 | 0.527 |
| Tatoeba-test.ita-eng.ita.eng | 54.8 | 0.686 |
| Tatoeba-test.izh-eng.izh.eng | 28.3 | 0.526 |
| Tatoeba-test.jav-eng.jav.eng | 10.0 | 0.282 |
| Tatoeba-test.jbo-eng.jbo.eng | 0.3 | 0.115 |
| Tatoeba-test.jdt-eng.jdt.eng | 5.3 | 0.140 |
| Tatoeba-test.jpn-eng.jpn.eng | 18.8 | 0.387 |
| Tatoeba-test.kab-eng.kab.eng | 3.9 | 0.205 |
| Tatoeba-test.kal-eng.kal.eng | 16.9 | 0.329 |
| Tatoeba-test.kan-eng.kan.eng | 16.2 | 0.374 |
| Tatoeba-test.kat-eng.kat.eng | 31.1 | 0.493 |
| Tatoeba-test.kaz-eng.kaz.eng | 24.5 | 0.437 |
| Tatoeba-test.kek-eng.kek.eng | 7.4 | 0.192 |
| Tatoeba-test.kha-eng.kha.eng | 1.0 | 0.154 |
| Tatoeba-test.khm-eng.khm.eng | 12.2 | 0.290 |
| Tatoeba-test.kin-eng.kin.eng | 22.5 | 0.355 |
| Tatoeba-test.kir-eng.kir.eng | 27.2 | 0.470 |
| Tatoeba-test.kjh-eng.kjh.eng | 2.1 | 0.129 |
| Tatoeba-test.kok-eng.kok.eng | 4.5 | 0.259 |
| Tatoeba-test.kom-eng.kom.eng | 1.4 | 0.099 |
| Tatoeba-test.krl-eng.krl.eng | 26.1 | 0.387 |
| Tatoeba-test.ksh-eng.ksh.eng | 5.5 | 0.256 |
| Tatoeba-test.kum-eng.kum.eng | 9.3 | 0.288 |
| Tatoeba-test.kur-eng.kur.eng | 9.6 | 0.208 |
| Tatoeba-test.lad-eng.lad.eng | 30.1 | 0.475 |
| Tatoeba-test.lah-eng.lah.eng | 11.6 | 0.284 |
| Tatoeba-test.lao-eng.lao.eng | 4.5 | 0.214 |
| Tatoeba-test.lat-eng.lat.eng | 21.5 | 0.402 |
| Tatoeba-test.lav-eng.lav.eng | 40.2 | 0.577 |
| Tatoeba-test.ldn-eng.ldn.eng | 0.8 | 0.115 |
| Tatoeba-test.lfn-eng.lfn.eng | 23.0 | 0.433 |
| Tatoeba-test.lij-eng.lij.eng | 9.3 | 0.287 |
| Tatoeba-test.lin-eng.lin.eng | 2.4 | 0.196 |
| Tatoeba-test.lit-eng.lit.eng | 44.0 | 0.597 |
| Tatoeba-test.liv-eng.liv.eng | 1.6 | 0.115 |
| Tatoeba-test.lkt-eng.lkt.eng | 2.0 | 0.113 |
| Tatoeba-test.lld-eng.lld.eng | 18.3 | 0.312 |
| Tatoeba-test.lmo-eng.lmo.eng | 25.4 | 0.395 |
| Tatoeba-test.ltz-eng.ltz.eng | 35.9 | 0.509 |
| Tatoeba-test.lug-eng.lug.eng | 5.1 | 0.357 |
| Tatoeba-test.mad-eng.mad.eng | 2.8 | 0.123 |
| Tatoeba-test.mah-eng.mah.eng | 5.7 | 0.175 |
| Tatoeba-test.mai-eng.mai.eng | 56.3 | 0.703 |
| Tatoeba-test.mal-eng.mal.eng | 37.5 | 0.534 |
| Tatoeba-test.mar-eng.mar.eng | 22.8 | 0.470 |
| Tatoeba-test.mdf-eng.mdf.eng | 2.0 | 0.110 |
| Tatoeba-test.mfe-eng.mfe.eng | 59.2 | 0.764 |
| Tatoeba-test.mic-eng.mic.eng | 9.0 | 0.199 |
| Tatoeba-test.mkd-eng.mkd.eng | 44.3 | 0.593 |
| Tatoeba-test.mlg-eng.mlg.eng | 31.9 | 0.424 |
| Tatoeba-test.mlt-eng.mlt.eng | 38.6 | 0.540 |
| Tatoeba-test.mnw-eng.mnw.eng | 2.5 | 0.101 |
| Tatoeba-test.moh-eng.moh.eng | 0.3 | 0.110 |
| Tatoeba-test.mon-eng.mon.eng | 13.5 | 0.334 |
| Tatoeba-test.mri-eng.mri.eng | 8.5 | 0.260 |
| Tatoeba-test.msa-eng.msa.eng | 33.9 | 0.520 |
| Tatoeba-test.multi.eng | 34.7 | 0.518 |
| Tatoeba-test.mwl-eng.mwl.eng | 37.4 | 0.630 |
| Tatoeba-test.mya-eng.mya.eng | 15.5 | 0.335 |
| Tatoeba-test.myv-eng.myv.eng | 0.8 | 0.118 |
| Tatoeba-test.nau-eng.nau.eng | 9.0 | 0.186 |
| Tatoeba-test.nav-eng.nav.eng | 1.3 | 0.144 |
| Tatoeba-test.nds-eng.nds.eng | 30.7 | 0.495 |
| Tatoeba-test.nep-eng.nep.eng | 3.5 | 0.168 |
| Tatoeba-test.niu-eng.niu.eng | 42.7 | 0.492 |
| Tatoeba-test.nld-eng.nld.eng | 47.9 | 0.640 |
| Tatoeba-test.nog-eng.nog.eng | 12.7 | 0.284 |
| Tatoeba-test.non-eng.non.eng | 43.8 | 0.586 |
| Tatoeba-test.nor-eng.nor.eng | 45.5 | 0.619 |
| Tatoeba-test.nov-eng.nov.eng | 26.9 | 0.472 |
| Tatoeba-test.nya-eng.nya.eng | 33.2 | 0.456 |
| Tatoeba-test.oci-eng.oci.eng | 17.9 | 0.370 |
| Tatoeba-test.ori-eng.ori.eng | 14.6 | 0.305 |
| Tatoeba-test.orv-eng.orv.eng | 11.0 | 0.283 |
| Tatoeba-test.oss-eng.oss.eng | 4.1 | 0.211 |
| Tatoeba-test.ota-eng.ota.eng | 4.1 | 0.216 |
| Tatoeba-test.pag-eng.pag.eng | 24.3 | 0.468 |
| Tatoeba-test.pan-eng.pan.eng | 16.4 | 0.358 |
| Tatoeba-test.pap-eng.pap.eng | 53.2 | 0.628 |
| Tatoeba-test.pau-eng.pau.eng | 3.7 | 0.173 |
| Tatoeba-test.pdc-eng.pdc.eng | 45.3 | 0.569 |
| Tatoeba-test.pms-eng.pms.eng | 14.0 | 0.345 |
| Tatoeba-test.pol-eng.pol.eng | 41.7 | 0.588 |
| Tatoeba-test.por-eng.por.eng | 51.4 | 0.669 |
| Tatoeba-test.ppl-eng.ppl.eng | 0.4 | 0.134 |
| Tatoeba-test.prg-eng.prg.eng | 4.1 | 0.198 |
| Tatoeba-test.pus-eng.pus.eng | 6.7 | 0.233 |
| Tatoeba-test.quc-eng.quc.eng | 3.5 | 0.091 |
| Tatoeba-test.qya-eng.qya.eng | 0.2 | 0.090 |
| Tatoeba-test.rap-eng.rap.eng | 17.5 | 0.230 |
| Tatoeba-test.rif-eng.rif.eng | 4.2 | 0.164 |
| Tatoeba-test.roh-eng.roh.eng | 24.6 | 0.464 |
| Tatoeba-test.rom-eng.rom.eng | 3.4 | 0.212 |
| Tatoeba-test.ron-eng.ron.eng | 45.2 | 0.620 |
| Tatoeba-test.rue-eng.rue.eng | 21.4 | 0.390 |
| Tatoeba-test.run-eng.run.eng | 24.5 | 0.392 |
| Tatoeba-test.rus-eng.rus.eng | 42.7 | 0.591 |
| Tatoeba-test.sag-eng.sag.eng | 3.4 | 0.187 |
| Tatoeba-test.sah-eng.sah.eng | 5.0 | 0.177 |
| Tatoeba-test.san-eng.san.eng | 2.0 | 0.172 |
| Tatoeba-test.scn-eng.scn.eng | 35.8 | 0.410 |
| Tatoeba-test.sco-eng.sco.eng | 34.6 | 0.520 |
| Tatoeba-test.sgs-eng.sgs.eng | 21.8 | 0.299 |
| Tatoeba-test.shs-eng.shs.eng | 1.8 | 0.122 |
| Tatoeba-test.shy-eng.shy.eng | 1.4 | 0.104 |
| Tatoeba-test.sin-eng.sin.eng | 20.6 | 0.429 |
| Tatoeba-test.sjn-eng.sjn.eng | 1.2 | 0.095 |
| Tatoeba-test.slv-eng.slv.eng | 37.0 | 0.545 |
| Tatoeba-test.sma-eng.sma.eng | 4.4 | 0.147 |
| Tatoeba-test.sme-eng.sme.eng | 8.9 | 0.229 |
| Tatoeba-test.smo-eng.smo.eng | 37.7 | 0.483 |
| Tatoeba-test.sna-eng.sna.eng | 18.0 | 0.359 |
| Tatoeba-test.snd-eng.snd.eng | 28.1 | 0.444 |
| Tatoeba-test.som-eng.som.eng | 23.6 | 0.472 |
| Tatoeba-test.spa-eng.spa.eng | 47.9 | 0.645 |
| Tatoeba-test.sqi-eng.sqi.eng | 46.9 | 0.634 |
| Tatoeba-test.stq-eng.stq.eng | 8.1 | 0.379 |
| Tatoeba-test.sun-eng.sun.eng | 23.8 | 0.369 |
| Tatoeba-test.swa-eng.swa.eng | 6.5 | 0.193 |
| Tatoeba-test.swe-eng.swe.eng | 51.4 | 0.655 |
| Tatoeba-test.swg-eng.swg.eng | 18.5 | 0.342 |
| Tatoeba-test.tah-eng.tah.eng | 25.6 | 0.249 |
| Tatoeba-test.tam-eng.tam.eng | 29.1 | 0.437 |
| Tatoeba-test.tat-eng.tat.eng | 12.9 | 0.327 |
| Tatoeba-test.tel-eng.tel.eng | 21.2 | 0.386 |
| Tatoeba-test.tet-eng.tet.eng | 9.2 | 0.215 |
| Tatoeba-test.tgk-eng.tgk.eng | 12.7 | 0.374 |
| Tatoeba-test.tha-eng.tha.eng | 36.3 | 0.531 |
| Tatoeba-test.tir-eng.tir.eng | 9.1 | 0.267 |
| Tatoeba-test.tlh-eng.tlh.eng | 0.2 | 0.084 |
| Tatoeba-test.tly-eng.tly.eng | 2.1 | 0.128 |
| Tatoeba-test.toi-eng.toi.eng | 5.3 | 0.150 |
| Tatoeba-test.ton-eng.ton.eng | 39.5 | 0.473 |
| Tatoeba-test.tpw-eng.tpw.eng | 1.5 | 0.160 |
| Tatoeba-test.tso-eng.tso.eng | 44.7 | 0.526 |
| Tatoeba-test.tuk-eng.tuk.eng | 18.6 | 0.401 |
| Tatoeba-test.tur-eng.tur.eng | 40.5 | 0.573 |
| Tatoeba-test.tvl-eng.tvl.eng | 55.0 | 0.593 |
| Tatoeba-test.tyv-eng.tyv.eng | 19.1 | 0.477 |
| Tatoeba-test.tzl-eng.tzl.eng | 17.7 | 0.333 |
| Tatoeba-test.udm-eng.udm.eng | 3.4 | 0.217 |
| Tatoeba-test.uig-eng.uig.eng | 11.4 | 0.289 |
| Tatoeba-test.ukr-eng.ukr.eng | 43.1 | 0.595 |
| Tatoeba-test.umb-eng.umb.eng | 9.2 | 0.260 |
| Tatoeba-test.urd-eng.urd.eng | 23.2 | 0.426 |
| Tatoeba-test.uzb-eng.uzb.eng | 19.0 | 0.342 |
| Tatoeba-test.vec-eng.vec.eng | 41.1 | 0.409 |
| Tatoeba-test.vie-eng.vie.eng | 30.6 | 0.481 |
| Tatoeba-test.vol-eng.vol.eng | 1.8 | 0.143 |
| Tatoeba-test.war-eng.war.eng | 15.9 | 0.352 |
| Tatoeba-test.wln-eng.wln.eng | 12.6 | 0.291 |
| Tatoeba-test.wol-eng.wol.eng | 4.4 | 0.138 |
| Tatoeba-test.xal-eng.xal.eng | 0.9 | 0.153 |
| Tatoeba-test.xho-eng.xho.eng | 35.4 | 0.513 |
| Tatoeba-test.yid-eng.yid.eng | 19.4 | 0.387 |
| Tatoeba-test.yor-eng.yor.eng | 19.3 | 0.327 |
| Tatoeba-test.zho-eng.zho.eng | 25.8 | 0.448 |
| Tatoeba-test.zul-eng.zul.eng | 40.9 | 0.567 |
| Tatoeba-test.zza-eng.zza.eng | 1.6 | 0.125 |
### System Info:
- hf_name: mul-eng
- source_languages: mul
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul', 'en']
- src_constituents: {'sjn_Latn', 'cat', 'nan', 'spa', 'ile_Latn', 'pap', 'mwl', 'uzb_Latn', 'mww', 'hil', 'lij', 'avk_Latn', 'lad_Latn', 'lat_Latn', 'bos_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi_Latn', 'awa', 'swg', 'zsm_Latn', 'zho_Hant', 'gcf_Latn', 'uzb_Cyrl', 'isl', 'lfn_Latn', 'shs_Latn', 'nov_Latn', 'bho', 'ltz', 'lzh', 'kur_Latn', 'sun', 'arg', 'pes_Thaa', 'sqi', 'uig_Arab', 'csb_Latn', 'fra', 'hat', 'liv_Latn', 'non_Latn', 'sco', 'cmn_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul_Latn', 'amh', 'lfn_Cyrl', 'eus', 'fkv_Latn', 'tur', 'pus', 'afr', 'brx_Latn', 'nya', 'acm', 'ota_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho_Hans', 'tmw_Latn', 'kjh', 'ota_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh_Hans', 'ara', 'tly_Latn', 'lug', 'brx', 'bul', 'bel', 'vol_Latn', 'kat', 'gan', 'got_Goth', 'vro', 'ext', 'afh_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif_Latn', 'cjy_Hant', 'bre', 'ceb', 'mah', 'nob_Hebr', 'crh_Latn', 'prg_Latn', 'khm', 'ang_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie_Hani', 'arz', 'yue', 'kha', 'san_Deva', 'jbo_Latn', 'gos', 'hau_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig_Cyrl', 'fao', 'mnw', 'zho', 'orv_Cyrl', 'iba', 'bel_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc_Grek', 'tpw_Latn', 'oci', 'mfe', 'sna', 'kir_Cyrl', 'tat_Latn', 'gom', 'ido_Latn', 'sgs', 'pau', 'tgk_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp_Latn', 'wuu', 'dtp', 'jbo_Cyrl', 'tet', 'bod', 'yue_Hans', 'zlm_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif_Latn', 'vie', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina_Latn', 'cjy_Hans', 'jdt_Cyrl', 'gsw', 'glv', 'khm_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd_Arab', 'arq', 'mri', 'kur_Arab', 'por', 'hin', 'shy_Latn', 'sme', 'rap', 'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue_Hant', 'kpv', 'tam', 'est', 'frm_Latn', 'hoc_Latn', 'bam_Latn', 'kek_Latn', 'ksh', 'tlh_Latn', 'ltg', 'pan_Guru', 'hnj_Latn', 'cor', 'gle', 'swe', 'lin', 'qya_Latn', 'kum', 'mad', 'cmn_Hant', 'fuv', 'nau', 'mon', 'akl_Latn', 'guj', 'kaz_Latn', 'wln', 'tuk_Latn', 'jav_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws_Latn', 'urd', 'stq', 'tat_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld_Latn', 'tzl_Latn', 'mdf', 'ike_Latn', 'ces', 'ldn_Latn', 'egl', 'heb', 'vec', 'zul', 'max_Latn', 'pes_Latn', 'yid', 'mal', 'nds'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt
- src_alpha3: mul
- tgt_alpha3: eng
- short_pair: mul-en
- chrF2_score: 0.518
- bleu: 34.7
- brevity_penalty: 1.0
- ref_len: 72346.0
- src_name: Multiple languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: mul
- tgt_alpha2: en
- prefer_old: False
- long_pair: mul-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
flair/ner-german-large | flair | "2022-08-28T09:08:06Z" | 269,793 | 37 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"dataset:conll2003",
"arxiv:2011.06993",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
datasets:
- conll2003
widget:
- text: "George Washington ging nach Washington"
---
## German NER in Flair (large model)
This is the large 4-class NER model for German that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **92.31** (CoNLL-03 German revised)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf).
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-german-large")
# make example sentence
sentence = Sentence("George Washington ging nach Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (1.0)]
Span [5]: "Washington" [− Labels: LOC (1.0)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
import torch
# 1. get the corpus
from flair.datasets import CONLL_03_GERMAN
corpus = CONLL_03_GERMAN()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize fine-tuneable transformer embeddings WITH document context
from flair.embeddings import TransformerWordEmbeddings
embeddings = TransformerWordEmbeddings(
model='xlm-roberta-large',
layers="-1",
subtoken_pooling="first",
fine_tune=True,
use_context=True,
)
# 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection)
from flair.models import SequenceTagger
tagger = SequenceTagger(
hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type='ner',
use_crf=False,
use_rnn=False,
reproject_embeddings=False,
)
# 6. initialize trainer with AdamW optimizer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)
# 7. run training with XLM parameters (20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train('resources/taggers/ner-german-large',
learning_rate=5.0e-6,
mini_batch_size=4,
mini_batch_chunk_size=1,
max_epochs=20,
scheduler=OneCycleLR,
embeddings_storage_mode='none',
weight_decay=0.,
)
```
---
### Cite
Please cite the following paper when using this model.
```
@misc{schweter2020flert,
title={FLERT: Document-Level Features for Named Entity Recognition},
author={Stefan Schweter and Alan Akbik},
year={2020},
eprint={2011.06993},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
facebook/vit-mae-base | facebook | "2024-03-13T07:48:29Z" | 265,510 | 27 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"vit_mae",
"pretraining",
"vision",
"dataset:imagenet-1k",
"arxiv:2111.06377",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (base-sized model) pre-trained with MAE
Vision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in [this repository](https://github.com/facebookresearch/mae).
Disclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.
During pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
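As a rough sketch of that downstream setup (the linear head and its 10-class output below are purely illustrative and not part of the released checkpoint; note that `ViTMAEModel` still applies the random 75% masking by default, so the encoder output only covers the CLS token plus the visible patches):
```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
encoder = ViTMAEModel.from_pretrained("facebook/vit-mae-base")

inputs = processor(images=image, return_tensors="pt")
features = encoder(**inputs).last_hidden_state      # (1, num_visible_patches + 1, hidden_size)

# hypothetical 10-class linear head placed on top of the pre-trained encoder features
classifier = torch.nn.Linear(encoder.config.hidden_size, 10)
logits = classifier(features[:, 0])                  # CLS token as the image representation
```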
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/vit-mae) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, ViTMAEForPreTraining
from PIL import Image
import requests

# load an example image from the COCO validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/vit-mae-base')
model = ViTMAEForPreTraining.from_pretrained('facebook/vit-mae-base')

# preprocess the image and run the masked-autoencoding forward pass
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
loss = outputs.loss                  # reconstruction loss on the masked patches
mask = outputs.mask                  # which patches were masked (1) vs. kept (0)
ids_restore = outputs.ids_restore    # indices to restore the original patch order
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-06377,
author = {Kaiming He and
Xinlei Chen and
Saining Xie and
Yanghao Li and
Piotr Doll{\'{a}}r and
Ross B. Girshick},
title = {Masked Autoencoders Are Scalable Vision Learners},
journal = {CoRR},
volume = {abs/2111.06377},
year = {2021},
url = {https://arxiv.org/abs/2111.06377},
eprinttype = {arXiv},
eprint = {2111.06377},
timestamp = {Tue, 16 Nov 2021 12:12:31 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-06377.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
textattack/bert-base-uncased-MNLI | textattack | "2021-05-20T07:31:58Z" | 263,510 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | Entry not found |
sshleifer/tiny-marian-en-de | sshleifer | "2020-06-25T02:27:15Z" | 262,689 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | Entry not found |
Salesforce/instructblip-vicuna-7b | Salesforce | "2024-04-12T11:23:54Z" | 262,065 | 74 | transformers | [
"transformers",
"pytorch",
"safetensors",
"instructblip",
"text2text-generation",
"vision",
"image-captioning",
"image-to-text",
"en",
"arxiv:2305.06500",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-to-text | "2023-05-22T19:28:03Z" | ---
language: en
license: other
tags:
- vision
- image-captioning
pipeline_tag: image-to-text
---
# InstructBLIP model
InstructBLIP model using [Vicuna-7b](https://github.com/lm-sys/FastChat#model-weights) as its language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al.
Disclaimer: The team releasing InstructBLIP did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
InstructBLIP is a visual instruction tuned version of [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2). Refer to the paper for details.
![InstructBLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg)
## Intended uses & limitations
Usage is as follows:
```python
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration
import torch
from PIL import Image
import requests
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b")
processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
prompt = "What is unusual about this image?"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)
outputs = model.generate(
**inputs,
do_sample=False,
num_beams=5,
max_length=256,
min_length=1,
top_p=0.9,
repetition_penalty=1.5,
length_penalty=1.0,
temperature=1,
)
generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip()
print(generated_text)
```
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/instructblip). |
michellejieli/emotion_text_classifier | michellejieli | "2023-05-03T00:39:47Z" | 261,565 | 56 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"distilroberta",
"sentiment",
"emotion",
"twitter",
"reddit",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-10-22T22:44:07Z" | ---
language: "en"
tags:
- distilroberta
- sentiment
- emotion
- twitter
- reddit
widget:
- text: "Oh my God, he's lost it. He's totally lost it."
- text: "What?"
- text: "Wow, congratulations! So excited for you!"
---
# Fine-tuned DistilRoBERTa-base for Emotion Classification 🤬🤢😀😐😭😲
# Model Description
DistilRoBERTa-base is a transformer model that performs sentiment analysis. I fine-tuned the model on transcripts from the Friends show with the goal of classifying emotions from text data, specifically dialogue from Netflix shows or movies. The model predicts the 6 Ekman emotions plus a neutral class: anger, disgust, fear, joy, sadness, surprise, and neutral.
The model is a fine-tuned version of [Emotion English DistilRoBERTa-base](https://huggingface.co/j-hartmann/emotion-english-distilroberta-base/), which in turn builds on [DistilRoBERTa-base](https://huggingface.co/distilroberta-base). That starting checkpoint was initially trained on the datasets summarized in the following table from [Emotion English DistilRoBERTa-base](https://huggingface.co/j-hartmann/emotion-english-distilroberta-base/):
|Name|anger|disgust|fear|joy|neutral|sadness|surprise|
|---|---|---|---|---|---|---|---|
|Crowdflower (2016)|Yes|-|-|Yes|Yes|Yes|Yes|
|Emotion Dataset, Elvis et al. (2018)|Yes|-|Yes|Yes|-|Yes|Yes|
|GoEmotions, Demszky et al. (2020)|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|ISEAR, Vikash (2018)|Yes|Yes|Yes|Yes|-|Yes|-|
|MELD, Poria et al. (2019)|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|SemEval-2018, EI-reg, Mohammad et al. (2018) |Yes|-|Yes|Yes|-|Yes|-|
It was fine-tuned on:
|Name|anger|disgust|fear|joy|neutral|sadness|surprise|
|---|---|---|---|---|---|---|---|
|Emotion Lines (Friends)|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
# How to Use
```python
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="michellejieli/emotion_text_classifier")
classifier("I love this!")
```
```python
Output:
[{'label': 'joy', 'score': 0.9887555241584778}]
```
# Contact
Please reach out to [michelleli1999@gmail.com](mailto:michelleli1999@gmail.com) if you have any questions or feedback.
# Reference
```
Jochen Hartmann, "Emotion English DistilRoBERTa-base". https://huggingface.co/j-hartmann/emotion-english-distilroberta-base/, 2022.
Ashritha R Murthy and K M Anil Kumar 2021 IOP Conf. Ser.: Mater. Sci. Eng. 1110 012009
``` |
MaziyarPanahi/Codestral-22B-v0.1-GGUF | MaziyarPanahi | "2024-06-01T18:42:52Z" | 259,704 | 8 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"llama-3",
"llama",
"base_model:bullerwins/Codestral-22B-v0.1-hf",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-29T18:43:32Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- llama-3
- llama
- text-generation
model_name: Codestral-22B-v0.1-hf-GGUF
base_model: bullerwins/Codestral-22B-v0.1-hf
inference: false
model_creator: bullerwins
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Codestral-22B-v0.1-hf-GGUF](https://huggingface.co/MaziyarPanahi/Codestral-22B-v0.1-hf-GGUF)
- Model creator: [bullerwins](https://huggingface.co/bullerwins)
- Original model: [bullerwins/Codestral-22B-v0.1-hf](https://huggingface.co/bullerwins/Codestral-22B-v0.1-hf)
## Description
[MaziyarPanahi/Codestral-22B-v0.1-hf-GGUF](https://huggingface.co/MaziyarPanahi/Codestral-22B-v0.1-hf-GGUF) contains GGUF format model files for [bullerwins/Codestral-22B-v0.1-hf](https://huggingface.co/bullerwins/Codestral-22B-v0.1-hf).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
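For example, a minimal local-inference sketch with llama-cpp-python (the GGUF filename, context size, and generation settings below are placeholders; use whichever quantization file you actually downloaded from this repository):
```python
from llama_cpp import Llama

# path to the quantized GGUF file downloaded from this repository (filename is illustrative)
llm = Llama(
    model_path="./Codestral-22B-v0.1.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set to 0 for CPU-only
)

output = llm(
    "Write a Python function that checks whether a number is prime.",
    max_tokens=256,
    temperature=0.2,
)
print(output["choices"][0]["text"])
```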
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
bigscience/bloomz-560m | bigscience | "2023-05-27T17:27:11Z" | 257,136 | 95 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zu",
"dataset:bigscience/xP3",
"arxiv:2211.01786",
"license:bigscience-bloom-rail-1.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-10-08T16:14:42Z" | ---
datasets:
- bigscience/xP3
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
widget:
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?"
example_title: "zh-en sentiment"
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?"
example_title: "zh-zh sentiment"
- text: "Suggest at least five related search terms to \"Mạng neural nhân tạo\"."
example_title: "vi-en query"
- text: "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»."
example_title: "fr-fr query"
- text: "Explain in a sentence in Telugu what is backpropagation in neural networks."
example_title: "te-en qa"
- text: "Why is the sky blue?"
example_title: "en-en qa"
- text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):"
example_title: "es-en fable"
- text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". Fable (in Hindi):"
example_title: "hi-en fable"
model-index:
- name: bloomz-560m
results:
- task:
type: Coreference resolution
dataset:
type: winogrande
name: Winogrande XL (xl)
config: xl
split: validation
revision: a80f460359d1e9a67c006011c94de42a8759430c
metrics:
- type: Accuracy
value: 52.41
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (en)
config: en
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 51.01
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (fr)
config: fr
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 51.81
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (jp)
config: jp
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 52.03
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (pt)
config: pt
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 53.99
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (ru)
config: ru
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 53.97
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (zh)
config: zh
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 54.76
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r1)
config: r1
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 33.4
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r2)
config: r2
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 33.4
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r3)
config: r3
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 33.5
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (cb)
config: cb
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 53.57
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (rte)
config: rte
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 67.15
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ar)
config: ar
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 44.46
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (bg)
config: bg
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 39.76
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (de)
config: de
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 39.36
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (el)
config: el
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 40.96
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (en)
config: en
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 46.43
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (es)
config: es
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 44.98
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (fr)
config: fr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 45.54
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (hi)
config: hi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 41.81
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ru)
config: ru
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 39.64
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (sw)
config: sw
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 38.35
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (th)
config: th
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 35.5
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (tr)
config: tr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 37.31
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ur)
config: ur
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 38.96
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (vi)
config: vi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 44.74
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (zh)
config: zh
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 44.66
- task:
type: Program synthesis
dataset:
type: openai_humaneval
name: HumanEval
config: None
split: test
revision: e8dc562f5de170c54b5481011dd9f4fa04845771
metrics:
- type: Pass@1
value: 2.18
- type: Pass@10
value: 4.11
- type: Pass@100
value: 9.00
- task:
type: Sentence completion
dataset:
type: story_cloze
name: StoryCloze (2016)
config: "2016"
split: validation
revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
metrics:
- type: Accuracy
value: 60.29
- task:
type: Sentence completion
dataset:
type: super_glue
name: SuperGLUE (copa)
config: copa
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 52.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (et)
config: et
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 53.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ht)
config: ht
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 49.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (id)
config: id
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 57.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (it)
config: it
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 52.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (qu)
config: qu
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 55.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (sw)
config: sw
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ta)
config: ta
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 58.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (th)
config: th
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 58.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (tr)
config: tr
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (vi)
config: vi
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (zh)
config: zh
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61.0
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ar)
config: ar
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 54.4
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (es)
config: es
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 56.45
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (eu)
config: eu
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 50.56
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (hi)
config: hi
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 55.79
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (id)
config: id
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 57.84
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (my)
config: my
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 47.05
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ru)
config: ru
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 53.14
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (sw)
config: sw
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 51.36
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (te)
config: te
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 54.86
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (zh)
config: zh
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 56.52
---
![xmtf](https://github.com/bigscience-workshop/xmtf/blob/master/xmtf_banner.png?raw=true)
# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
7. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages.
- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
- **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**
<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>
# Use
## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评? (A Chinese prompt: "A legendary beginning, an undying myth; this is not merely a film, but a label marking the entry into a new era, forever recorded in history. Do you think the stance of this sentence is praise, neutral, or criticism?")
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
**Feel free to share your generations in the Community tab!**
## How to use
### CPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-560m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-560m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-560m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
<!-- Necessary for whitespace -->
###
# Limitations
**Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops, to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, then tell the model so explicitly, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
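As a rough illustration of this effect, here is a minimal sketch reusing the CPU setup from the usage section above; the generations are not from the paper and will vary with the checkpoint and decoding settings.
```python
# Minimal sketch: compare an ambiguous prompt with one that clearly marks the end of the input.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-560m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompts = [
    "Translate to English: Je t'aime",                # ambiguous: the model may continue the French sentence
    "Translate to English: Je t'aime. Translation:",  # clear: the model knows the input has ended
]
for prompt in prompts:
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs, max_new_tokens=20)
    # Print only the newly generated tokens for easier comparison
    print(repr(prompt), "->", tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```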
# Training
## Model
- **Architecture:** Same as [bloom-560m](https://huggingface.co/bigscience/bloom-560m), also refer to the `config.json` file
- **Finetuning steps:** 1750
- **Finetuning tokens:** 3.67 billion
- **Finetuning layout:** 1x pipeline parallel, 1x tensor parallel, 1x data parallel
- **Precision:** float16
## Hardware
- **CPUs:** AMD CPUs with 512GB memory per node
- **GPUs:** 64 A100 80GB GPUs with 8 GPUs per node (8 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links
- **Communication:** NCCL-communications network with a fully dedicated subnet
## Software
- **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed)
- **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# Evaluation
We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
# Citation
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
``` |
Qwen/Qwen2-0.5B-Instruct-GGUF | Qwen | "2024-06-18T03:25:38Z" | 256,888 | 33 | null | [
"gguf",
"instruct",
"chat",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-06-06T08:05:51Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- instruct
- chat
license: apache-2.0
---
# Qwen2-0.5B-Instruct-GGUF
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model (57B-A14B). This repo contains the instruction-tuned 0.5B Qwen2 model.
Compared with the state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
In this repo, we provide quantized models in the GGUF format, including `q2_k`, `q3_k_m`, `q4_0`, `q4_k_m`, `q5_0`, `q5_k_m`, `q6_k` and `q8_0`.
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer that is adaptive to multiple natural languages and code.
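As a hedged illustration, these architectural choices can be inspected through the `transformers` config of the base (non-GGUF) checkpoint; the field names below assume the standard `Qwen2Config` and are not shipped in this GGUF repo.
```python
# Sketch: inspect architecture fields of the base Qwen2 checkpoint (requires access to the Hub).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
print(config.hidden_act)           # "silu" -- the gated, SwiGLU-style MLP activation
print(config.num_attention_heads)  # number of query heads
print(config.num_key_value_heads)  # fewer KV heads than query heads -> group query attention
print(config.vocab_size)           # large vocabulary shared across natural languages and code
```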
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We follow the latest version of llama.cpp.
In the following demonstration, we assume that you are running commands under the repository `llama.cpp`.
## How to use
Cloning the whole repo can be inefficient; instead, you can manually download just the GGUF file you need, or use `huggingface-cli` (`pip install huggingface_hub`) as shown below:
```shell
huggingface-cli download Qwen/Qwen2-0.5B-Instruct-GGUF qwen2-0_5b-instruct-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
```
To run Qwen2, you can use `llama-cli` (the previous `main`) or `llama-server` (the previous `server`).
We recommend using `llama-server`, as it is simple and compatible with the OpenAI API. For example:
```bash
./llama-server -m qwen2-0_5b-instruct-q5_k_m.gguf -ngl 24 -fa
```
(Note: `-ngl 24` refers to offloading 24 layers to GPUs, and `-fa` refers to the use of flash attention.)
Then it is easy to access the deployed service with the OpenAI API:
```python
import openai
client = openai.OpenAI(
base_url="http://localhost:8080/v1", # "http://<Your api-server IP>:port"
api_key = "sk-no-key-required"
)
completion = client.chat.completions.create(
model="qwen",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "tell me something about michael jordan"}
]
)
print(completion.choices[0].message.content)
```
If you choose to use `llama-cli`, note that the `-cml` flag for applying the ChatML template has been removed. Instead, use `--in-prefix` and `--in-suffix` to wrap the chat turns:
```bash
./llama-cli -m qwen2-0_5b-instruct-q5_k_m.gguf \
-n 512 -co -i -if -f prompts/chat-with-qwen.txt \
--in-prefix "<|im_start|>user\n" \
--in-suffix "<|im_end|>\n<|im_start|>assistant\n" \
-ngl 24 -fa
```
## Evaluation
We implement perplexity evaluation using wikitext following the practice of `llama.cpp` with `./llama-perplexity` (the previous `./perplexity`).
In the table below, we report the PPL of the GGUF models across different sizes and quantization levels.
|Size | fp16 | q8_0 | q6_k | q5_k_m | q5_0 | q4_k_m | q4_0 | q3_k_m | q2_k | iq1_m |
|--------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
|0.5B | 15.11 | 15.13 | 15.14 | 15.24 | 15.40 | 15.36 | 16.28 | 15.70 | 16.74 | - |
|1.5B | 10.43 | 10.43 | 10.45 | 10.50 | 10.56 | 10.61 | 10.79 | 11.08 | 13.04 | - |
|7B | 7.93 | 7.94 | 7.96 | 7.97 | 7.98 | 8.02 | 8.19 | 8.20 | 10.58 | - |
|57B-A14B| 6.81 | 6.81 | 6.83 | 6.84 | 6.89 | 6.99 | 7.02 | 7.43 | - | - |
|72B | 5.58 | 5.58 | 5.59 | 5.59 | 5.60 | 5.61 | 5.66 | 5.68 | 5.91 | 6.75 |
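As a rough sketch, the numbers above could be reproduced along the following lines; the wikitext file path below is an assumption and should be adjusted to wherever you downloaded the test split.
```python
# Sketch: run llama.cpp's perplexity tool on a GGUF file over wikitext (paths are assumptions).
import subprocess

subprocess.run(
    [
        "./llama-perplexity",
        "-m", "qwen2-0_5b-instruct-q5_k_m.gguf",
        "-f", "wikitext-2-raw/wiki.test.raw",  # hypothetical local path to the wikitext-2 test file
    ],
    check=True,
)
```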
## Citation
If you find our work helpful, feel free to cite it.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
``` |
thenlper/gte-large | thenlper | "2024-02-05T07:16:01Z" | 256,397 | 229 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"mteb",
"sentence-similarity",
"Sentence Transformers",
"en",
"arxiv:2308.03281",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-07-27T09:55:39Z" | ---
tags:
- mteb
- sentence-similarity
- sentence-transformers
- Sentence Transformers
model-index:
- name: gte-large
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 72.62686567164178
- type: ap
value: 34.46944126809772
- type: f1
value: 66.23684353950857
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.51805
- type: ap
value: 89.49842783330848
- type: f1
value: 92.51112169431808
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.074
- type: f1
value: 48.44785682572955
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.077
- type: map_at_10
value: 48.153
- type: map_at_100
value: 48.963
- type: map_at_1000
value: 48.966
- type: map_at_3
value: 43.184
- type: map_at_5
value: 46.072
- type: mrr_at_1
value: 33.073
- type: mrr_at_10
value: 48.54
- type: mrr_at_100
value: 49.335
- type: mrr_at_1000
value: 49.338
- type: mrr_at_3
value: 43.563
- type: mrr_at_5
value: 46.383
- type: ndcg_at_1
value: 32.077
- type: ndcg_at_10
value: 57.158
- type: ndcg_at_100
value: 60.324999999999996
- type: ndcg_at_1000
value: 60.402
- type: ndcg_at_3
value: 46.934
- type: ndcg_at_5
value: 52.158
- type: precision_at_1
value: 32.077
- type: precision_at_10
value: 8.591999999999999
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.275000000000002
- type: precision_at_5
value: 14.111
- type: recall_at_1
value: 32.077
- type: recall_at_10
value: 85.917
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 57.824
- type: recall_at_5
value: 70.555
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.619246083417295
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.3574067664688
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 63.06359661829253
- type: mrr
value: 76.15596007562766
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 90.25407547368691
- type: cos_sim_spearman
value: 88.65081514968477
- type: euclidean_pearson
value: 88.14857116664494
- type: euclidean_spearman
value: 88.50683596540692
- type: manhattan_pearson
value: 87.9654797992225
- type: manhattan_spearman
value: 88.21164851646908
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.05844155844157
- type: f1
value: 86.01555597681825
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.10510519739522
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.84689960264385
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.800000000000004
- type: map_at_10
value: 44.857
- type: map_at_100
value: 46.512
- type: map_at_1000
value: 46.635
- type: map_at_3
value: 41.062
- type: map_at_5
value: 43.126
- type: mrr_at_1
value: 39.628
- type: mrr_at_10
value: 50.879
- type: mrr_at_100
value: 51.605000000000004
- type: mrr_at_1000
value: 51.641000000000005
- type: mrr_at_3
value: 48.14
- type: mrr_at_5
value: 49.835
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 51.819
- type: ndcg_at_100
value: 57.318999999999996
- type: ndcg_at_1000
value: 58.955999999999996
- type: ndcg_at_3
value: 46.409
- type: ndcg_at_5
value: 48.825
- type: precision_at_1
value: 39.628
- type: precision_at_10
value: 10.072000000000001
- type: precision_at_100
value: 1.625
- type: precision_at_1000
value: 0.21
- type: precision_at_3
value: 22.556
- type: precision_at_5
value: 16.309
- type: recall_at_1
value: 32.800000000000004
- type: recall_at_10
value: 65.078
- type: recall_at_100
value: 87.491
- type: recall_at_1000
value: 97.514
- type: recall_at_3
value: 49.561
- type: recall_at_5
value: 56.135999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.614
- type: map_at_10
value: 43.578
- type: map_at_100
value: 44.897
- type: map_at_1000
value: 45.023
- type: map_at_3
value: 40.282000000000004
- type: map_at_5
value: 42.117
- type: mrr_at_1
value: 40.510000000000005
- type: mrr_at_10
value: 49.428
- type: mrr_at_100
value: 50.068999999999996
- type: mrr_at_1000
value: 50.111000000000004
- type: mrr_at_3
value: 47.176
- type: mrr_at_5
value: 48.583999999999996
- type: ndcg_at_1
value: 40.510000000000005
- type: ndcg_at_10
value: 49.478
- type: ndcg_at_100
value: 53.852
- type: ndcg_at_1000
value: 55.782
- type: ndcg_at_3
value: 45.091
- type: ndcg_at_5
value: 47.19
- type: precision_at_1
value: 40.510000000000005
- type: precision_at_10
value: 9.363000000000001
- type: precision_at_100
value: 1.51
- type: precision_at_1000
value: 0.196
- type: precision_at_3
value: 21.741
- type: precision_at_5
value: 15.465000000000002
- type: recall_at_1
value: 32.614
- type: recall_at_10
value: 59.782000000000004
- type: recall_at_100
value: 78.012
- type: recall_at_1000
value: 90.319
- type: recall_at_3
value: 46.825
- type: recall_at_5
value: 52.688
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.266000000000005
- type: map_at_10
value: 53.756
- type: map_at_100
value: 54.809
- type: map_at_1000
value: 54.855
- type: map_at_3
value: 50.073
- type: map_at_5
value: 52.293
- type: mrr_at_1
value: 46.332
- type: mrr_at_10
value: 57.116
- type: mrr_at_100
value: 57.767
- type: mrr_at_1000
value: 57.791000000000004
- type: mrr_at_3
value: 54.461999999999996
- type: mrr_at_5
value: 56.092
- type: ndcg_at_1
value: 46.332
- type: ndcg_at_10
value: 60.092
- type: ndcg_at_100
value: 64.034
- type: ndcg_at_1000
value: 64.937
- type: ndcg_at_3
value: 54.071000000000005
- type: ndcg_at_5
value: 57.254000000000005
- type: precision_at_1
value: 46.332
- type: precision_at_10
value: 9.799
- type: precision_at_100
value: 1.278
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.368000000000002
- type: precision_at_5
value: 16.89
- type: recall_at_1
value: 40.266000000000005
- type: recall_at_10
value: 75.41499999999999
- type: recall_at_100
value: 92.01700000000001
- type: recall_at_1000
value: 98.379
- type: recall_at_3
value: 59.476
- type: recall_at_5
value: 67.297
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.589
- type: map_at_10
value: 37.755
- type: map_at_100
value: 38.881
- type: map_at_1000
value: 38.954
- type: map_at_3
value: 34.759
- type: map_at_5
value: 36.544
- type: mrr_at_1
value: 30.734
- type: mrr_at_10
value: 39.742
- type: mrr_at_100
value: 40.774
- type: mrr_at_1000
value: 40.824
- type: mrr_at_3
value: 37.137
- type: mrr_at_5
value: 38.719
- type: ndcg_at_1
value: 30.734
- type: ndcg_at_10
value: 42.978
- type: ndcg_at_100
value: 48.309000000000005
- type: ndcg_at_1000
value: 50.068
- type: ndcg_at_3
value: 37.361
- type: ndcg_at_5
value: 40.268
- type: precision_at_1
value: 30.734
- type: precision_at_10
value: 6.565
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 15.744
- type: precision_at_5
value: 11.096
- type: recall_at_1
value: 28.589
- type: recall_at_10
value: 57.126999999999995
- type: recall_at_100
value: 81.051
- type: recall_at_1000
value: 94.027
- type: recall_at_3
value: 42.045
- type: recall_at_5
value: 49.019
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.5
- type: map_at_10
value: 27.950999999999997
- type: map_at_100
value: 29.186
- type: map_at_1000
value: 29.298000000000002
- type: map_at_3
value: 25.141000000000002
- type: map_at_5
value: 26.848
- type: mrr_at_1
value: 22.637
- type: mrr_at_10
value: 32.572
- type: mrr_at_100
value: 33.472
- type: mrr_at_1000
value: 33.533
- type: mrr_at_3
value: 29.747
- type: mrr_at_5
value: 31.482
- type: ndcg_at_1
value: 22.637
- type: ndcg_at_10
value: 33.73
- type: ndcg_at_100
value: 39.568
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.505999999999997
- type: ndcg_at_5
value: 31.255
- type: precision_at_1
value: 22.637
- type: precision_at_10
value: 6.281000000000001
- type: precision_at_100
value: 1.073
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 13.847000000000001
- type: precision_at_5
value: 10.224
- type: recall_at_1
value: 18.5
- type: recall_at_10
value: 46.744
- type: recall_at_100
value: 72.072
- type: recall_at_1000
value: 91.03999999999999
- type: recall_at_3
value: 32.551
- type: recall_at_5
value: 39.533
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.602
- type: map_at_10
value: 42.18
- type: map_at_100
value: 43.6
- type: map_at_1000
value: 43.704
- type: map_at_3
value: 38.413000000000004
- type: map_at_5
value: 40.626
- type: mrr_at_1
value: 37.344
- type: mrr_at_10
value: 47.638000000000005
- type: mrr_at_100
value: 48.485
- type: mrr_at_1000
value: 48.52
- type: mrr_at_3
value: 44.867000000000004
- type: mrr_at_5
value: 46.566
- type: ndcg_at_1
value: 37.344
- type: ndcg_at_10
value: 48.632
- type: ndcg_at_100
value: 54.215
- type: ndcg_at_1000
value: 55.981
- type: ndcg_at_3
value: 42.681999999999995
- type: ndcg_at_5
value: 45.732
- type: precision_at_1
value: 37.344
- type: precision_at_10
value: 8.932
- type: precision_at_100
value: 1.376
- type: precision_at_1000
value: 0.17099999999999999
- type: precision_at_3
value: 20.276
- type: precision_at_5
value: 14.726
- type: recall_at_1
value: 30.602
- type: recall_at_10
value: 62.273
- type: recall_at_100
value: 85.12100000000001
- type: recall_at_1000
value: 96.439
- type: recall_at_3
value: 45.848
- type: recall_at_5
value: 53.615
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.952
- type: map_at_10
value: 35.177
- type: map_at_100
value: 36.59
- type: map_at_1000
value: 36.703
- type: map_at_3
value: 31.261
- type: map_at_5
value: 33.222
- type: mrr_at_1
value: 29.337999999999997
- type: mrr_at_10
value: 40.152
- type: mrr_at_100
value: 40.963
- type: mrr_at_1000
value: 41.016999999999996
- type: mrr_at_3
value: 36.91
- type: mrr_at_5
value: 38.685
- type: ndcg_at_1
value: 29.337999999999997
- type: ndcg_at_10
value: 41.994
- type: ndcg_at_100
value: 47.587
- type: ndcg_at_1000
value: 49.791000000000004
- type: ndcg_at_3
value: 35.27
- type: ndcg_at_5
value: 38.042
- type: precision_at_1
value: 29.337999999999997
- type: precision_at_10
value: 8.276
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 17.161
- type: precision_at_5
value: 12.671
- type: recall_at_1
value: 23.952
- type: recall_at_10
value: 57.267
- type: recall_at_100
value: 80.886
- type: recall_at_1000
value: 95.611
- type: recall_at_3
value: 38.622
- type: recall_at_5
value: 45.811
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.092083333333335
- type: map_at_10
value: 37.2925
- type: map_at_100
value: 38.57041666666666
- type: map_at_1000
value: 38.68141666666667
- type: map_at_3
value: 34.080000000000005
- type: map_at_5
value: 35.89958333333333
- type: mrr_at_1
value: 31.94758333333333
- type: mrr_at_10
value: 41.51049999999999
- type: mrr_at_100
value: 42.36099999999999
- type: mrr_at_1000
value: 42.4125
- type: mrr_at_3
value: 38.849583333333335
- type: mrr_at_5
value: 40.448249999999994
- type: ndcg_at_1
value: 31.94758333333333
- type: ndcg_at_10
value: 43.17633333333333
- type: ndcg_at_100
value: 48.45241666666668
- type: ndcg_at_1000
value: 50.513999999999996
- type: ndcg_at_3
value: 37.75216666666667
- type: ndcg_at_5
value: 40.393833333333326
- type: precision_at_1
value: 31.94758333333333
- type: precision_at_10
value: 7.688916666666666
- type: precision_at_100
value: 1.2250833333333333
- type: precision_at_1000
value: 0.1595
- type: precision_at_3
value: 17.465999999999998
- type: precision_at_5
value: 12.548083333333333
- type: recall_at_1
value: 27.092083333333335
- type: recall_at_10
value: 56.286583333333326
- type: recall_at_100
value: 79.09033333333333
- type: recall_at_1000
value: 93.27483333333335
- type: recall_at_3
value: 41.35325
- type: recall_at_5
value: 48.072750000000006
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.825
- type: map_at_10
value: 33.723
- type: map_at_100
value: 34.74
- type: map_at_1000
value: 34.824
- type: map_at_3
value: 31.369000000000003
- type: map_at_5
value: 32.533
- type: mrr_at_1
value: 29.293999999999997
- type: mrr_at_10
value: 36.84
- type: mrr_at_100
value: 37.681
- type: mrr_at_1000
value: 37.742
- type: mrr_at_3
value: 34.79
- type: mrr_at_5
value: 35.872
- type: ndcg_at_1
value: 29.293999999999997
- type: ndcg_at_10
value: 38.385999999999996
- type: ndcg_at_100
value: 43.327
- type: ndcg_at_1000
value: 45.53
- type: ndcg_at_3
value: 33.985
- type: ndcg_at_5
value: 35.817
- type: precision_at_1
value: 29.293999999999997
- type: precision_at_10
value: 6.12
- type: precision_at_100
value: 0.9329999999999999
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 14.621999999999998
- type: precision_at_5
value: 10.030999999999999
- type: recall_at_1
value: 25.825
- type: recall_at_10
value: 49.647000000000006
- type: recall_at_100
value: 72.32300000000001
- type: recall_at_1000
value: 88.62400000000001
- type: recall_at_3
value: 37.366
- type: recall_at_5
value: 41.957
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.139
- type: map_at_10
value: 26.107000000000003
- type: map_at_100
value: 27.406999999999996
- type: map_at_1000
value: 27.535999999999998
- type: map_at_3
value: 23.445
- type: map_at_5
value: 24.916
- type: mrr_at_1
value: 21.817
- type: mrr_at_10
value: 29.99
- type: mrr_at_100
value: 31.052000000000003
- type: mrr_at_1000
value: 31.128
- type: mrr_at_3
value: 27.627000000000002
- type: mrr_at_5
value: 29.005
- type: ndcg_at_1
value: 21.817
- type: ndcg_at_10
value: 31.135
- type: ndcg_at_100
value: 37.108000000000004
- type: ndcg_at_1000
value: 39.965
- type: ndcg_at_3
value: 26.439
- type: ndcg_at_5
value: 28.655
- type: precision_at_1
value: 21.817
- type: precision_at_10
value: 5.757000000000001
- type: precision_at_100
value: 1.036
- type: precision_at_1000
value: 0.147
- type: precision_at_3
value: 12.537
- type: precision_at_5
value: 9.229
- type: recall_at_1
value: 18.139
- type: recall_at_10
value: 42.272999999999996
- type: recall_at_100
value: 68.657
- type: recall_at_1000
value: 88.93799999999999
- type: recall_at_3
value: 29.266
- type: recall_at_5
value: 34.892
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.755000000000003
- type: map_at_10
value: 37.384
- type: map_at_100
value: 38.56
- type: map_at_1000
value: 38.655
- type: map_at_3
value: 34.214
- type: map_at_5
value: 35.96
- type: mrr_at_1
value: 32.369
- type: mrr_at_10
value: 41.625
- type: mrr_at_100
value: 42.449
- type: mrr_at_1000
value: 42.502
- type: mrr_at_3
value: 38.899
- type: mrr_at_5
value: 40.489999999999995
- type: ndcg_at_1
value: 32.369
- type: ndcg_at_10
value: 43.287
- type: ndcg_at_100
value: 48.504999999999995
- type: ndcg_at_1000
value: 50.552
- type: ndcg_at_3
value: 37.549
- type: ndcg_at_5
value: 40.204
- type: precision_at_1
value: 32.369
- type: precision_at_10
value: 7.425
- type: precision_at_100
value: 1.134
- type: precision_at_1000
value: 0.14200000000000002
- type: precision_at_3
value: 17.102
- type: precision_at_5
value: 12.107999999999999
- type: recall_at_1
value: 27.755000000000003
- type: recall_at_10
value: 57.071000000000005
- type: recall_at_100
value: 79.456
- type: recall_at_1000
value: 93.54299999999999
- type: recall_at_3
value: 41.298
- type: recall_at_5
value: 48.037
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.855
- type: map_at_10
value: 34.53
- type: map_at_100
value: 36.167
- type: map_at_1000
value: 36.394999999999996
- type: map_at_3
value: 31.037
- type: map_at_5
value: 33.119
- type: mrr_at_1
value: 30.631999999999998
- type: mrr_at_10
value: 39.763999999999996
- type: mrr_at_100
value: 40.77
- type: mrr_at_1000
value: 40.826
- type: mrr_at_3
value: 36.495
- type: mrr_at_5
value: 38.561
- type: ndcg_at_1
value: 30.631999999999998
- type: ndcg_at_10
value: 40.942
- type: ndcg_at_100
value: 47.07
- type: ndcg_at_1000
value: 49.363
- type: ndcg_at_3
value: 35.038000000000004
- type: ndcg_at_5
value: 38.161
- type: precision_at_1
value: 30.631999999999998
- type: precision_at_10
value: 7.983999999999999
- type: precision_at_100
value: 1.6070000000000002
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 16.206
- type: precision_at_5
value: 12.253
- type: recall_at_1
value: 24.855
- type: recall_at_10
value: 53.291999999999994
- type: recall_at_100
value: 80.283
- type: recall_at_1000
value: 94.309
- type: recall_at_3
value: 37.257
- type: recall_at_5
value: 45.282
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.208
- type: map_at_10
value: 30.512
- type: map_at_100
value: 31.496000000000002
- type: map_at_1000
value: 31.595000000000002
- type: map_at_3
value: 27.904
- type: map_at_5
value: 29.491
- type: mrr_at_1
value: 22.736
- type: mrr_at_10
value: 32.379999999999995
- type: mrr_at_100
value: 33.245000000000005
- type: mrr_at_1000
value: 33.315
- type: mrr_at_3
value: 29.945
- type: mrr_at_5
value: 31.488
- type: ndcg_at_1
value: 22.736
- type: ndcg_at_10
value: 35.643
- type: ndcg_at_100
value: 40.535
- type: ndcg_at_1000
value: 43.042
- type: ndcg_at_3
value: 30.625000000000004
- type: ndcg_at_5
value: 33.323
- type: precision_at_1
value: 22.736
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.889
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 13.431999999999999
- type: precision_at_5
value: 9.575
- type: recall_at_1
value: 21.208
- type: recall_at_10
value: 49.47
- type: recall_at_100
value: 71.71499999999999
- type: recall_at_1000
value: 90.55499999999999
- type: recall_at_3
value: 36.124
- type: recall_at_5
value: 42.606
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.363
- type: map_at_10
value: 20.312
- type: map_at_100
value: 22.225
- type: map_at_1000
value: 22.411
- type: map_at_3
value: 16.68
- type: map_at_5
value: 18.608
- type: mrr_at_1
value: 25.537
- type: mrr_at_10
value: 37.933
- type: mrr_at_100
value: 38.875
- type: mrr_at_1000
value: 38.911
- type: mrr_at_3
value: 34.387
- type: mrr_at_5
value: 36.51
- type: ndcg_at_1
value: 25.537
- type: ndcg_at_10
value: 28.82
- type: ndcg_at_100
value: 36.341
- type: ndcg_at_1000
value: 39.615
- type: ndcg_at_3
value: 23.01
- type: ndcg_at_5
value: 25.269000000000002
- type: precision_at_1
value: 25.537
- type: precision_at_10
value: 9.153
- type: precision_at_100
value: 1.7319999999999998
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 17.22
- type: precision_at_5
value: 13.629
- type: recall_at_1
value: 11.363
- type: recall_at_10
value: 35.382999999999996
- type: recall_at_100
value: 61.367000000000004
- type: recall_at_1000
value: 79.699
- type: recall_at_3
value: 21.495
- type: recall_at_5
value: 27.42
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.65
- type: map_at_10
value: 20.742
- type: map_at_100
value: 29.614
- type: map_at_1000
value: 31.373
- type: map_at_3
value: 14.667
- type: map_at_5
value: 17.186
- type: mrr_at_1
value: 69.75
- type: mrr_at_10
value: 76.762
- type: mrr_at_100
value: 77.171
- type: mrr_at_1000
value: 77.179
- type: mrr_at_3
value: 75.125
- type: mrr_at_5
value: 76.287
- type: ndcg_at_1
value: 57.62500000000001
- type: ndcg_at_10
value: 42.370999999999995
- type: ndcg_at_100
value: 47.897
- type: ndcg_at_1000
value: 55.393
- type: ndcg_at_3
value: 46.317
- type: ndcg_at_5
value: 43.906
- type: precision_at_1
value: 69.75
- type: precision_at_10
value: 33.95
- type: precision_at_100
value: 10.885
- type: precision_at_1000
value: 2.2239999999999998
- type: precision_at_3
value: 49.75
- type: precision_at_5
value: 42.3
- type: recall_at_1
value: 9.65
- type: recall_at_10
value: 26.117
- type: recall_at_100
value: 55.084
- type: recall_at_1000
value: 78.62400000000001
- type: recall_at_3
value: 15.823
- type: recall_at_5
value: 19.652
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.885
- type: f1
value: 42.99567641346983
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.97
- type: map_at_10
value: 80.34599999999999
- type: map_at_100
value: 80.571
- type: map_at_1000
value: 80.584
- type: map_at_3
value: 79.279
- type: map_at_5
value: 79.94
- type: mrr_at_1
value: 76.613
- type: mrr_at_10
value: 85.15700000000001
- type: mrr_at_100
value: 85.249
- type: mrr_at_1000
value: 85.252
- type: mrr_at_3
value: 84.33800000000001
- type: mrr_at_5
value: 84.89
- type: ndcg_at_1
value: 76.613
- type: ndcg_at_10
value: 84.53399999999999
- type: ndcg_at_100
value: 85.359
- type: ndcg_at_1000
value: 85.607
- type: ndcg_at_3
value: 82.76599999999999
- type: ndcg_at_5
value: 83.736
- type: precision_at_1
value: 76.613
- type: precision_at_10
value: 10.206
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 31.913000000000004
- type: precision_at_5
value: 19.769000000000002
- type: recall_at_1
value: 70.97
- type: recall_at_10
value: 92.674
- type: recall_at_100
value: 95.985
- type: recall_at_1000
value: 97.57000000000001
- type: recall_at_3
value: 87.742
- type: recall_at_5
value: 90.28
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.494
- type: map_at_10
value: 36.491
- type: map_at_100
value: 38.550000000000004
- type: map_at_1000
value: 38.726
- type: map_at_3
value: 31.807000000000002
- type: map_at_5
value: 34.299
- type: mrr_at_1
value: 44.907000000000004
- type: mrr_at_10
value: 53.146
- type: mrr_at_100
value: 54.013999999999996
- type: mrr_at_1000
value: 54.044000000000004
- type: mrr_at_3
value: 50.952
- type: mrr_at_5
value: 52.124
- type: ndcg_at_1
value: 44.907000000000004
- type: ndcg_at_10
value: 44.499
- type: ndcg_at_100
value: 51.629000000000005
- type: ndcg_at_1000
value: 54.367
- type: ndcg_at_3
value: 40.900999999999996
- type: ndcg_at_5
value: 41.737
- type: precision_at_1
value: 44.907000000000004
- type: precision_at_10
value: 12.346
- type: precision_at_100
value: 1.974
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 27.366
- type: precision_at_5
value: 19.846
- type: recall_at_1
value: 22.494
- type: recall_at_10
value: 51.156
- type: recall_at_100
value: 77.11200000000001
- type: recall_at_1000
value: 93.44
- type: recall_at_3
value: 36.574
- type: recall_at_5
value: 42.361
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.568999999999996
- type: map_at_10
value: 58.485
- type: map_at_100
value: 59.358999999999995
- type: map_at_1000
value: 59.429
- type: map_at_3
value: 55.217000000000006
- type: map_at_5
value: 57.236
- type: mrr_at_1
value: 77.137
- type: mrr_at_10
value: 82.829
- type: mrr_at_100
value: 83.04599999999999
- type: mrr_at_1000
value: 83.05399999999999
- type: mrr_at_3
value: 81.904
- type: mrr_at_5
value: 82.50800000000001
- type: ndcg_at_1
value: 77.137
- type: ndcg_at_10
value: 67.156
- type: ndcg_at_100
value: 70.298
- type: ndcg_at_1000
value: 71.65700000000001
- type: ndcg_at_3
value: 62.535
- type: ndcg_at_5
value: 65.095
- type: precision_at_1
value: 77.137
- type: precision_at_10
value: 13.911999999999999
- type: precision_at_100
value: 1.6389999999999998
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 39.572
- type: precision_at_5
value: 25.766
- type: recall_at_1
value: 38.568999999999996
- type: recall_at_10
value: 69.56099999999999
- type: recall_at_100
value: 81.931
- type: recall_at_1000
value: 90.91799999999999
- type: recall_at_3
value: 59.358999999999995
- type: recall_at_5
value: 64.416
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 88.45600000000002
- type: ap
value: 84.09725115338568
- type: f1
value: 88.41874909080512
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.404999999999998
- type: map_at_10
value: 33.921
- type: map_at_100
value: 35.116
- type: map_at_1000
value: 35.164
- type: map_at_3
value: 30.043999999999997
- type: map_at_5
value: 32.327
- type: mrr_at_1
value: 21.977
- type: mrr_at_10
value: 34.505
- type: mrr_at_100
value: 35.638999999999996
- type: mrr_at_1000
value: 35.68
- type: mrr_at_3
value: 30.703999999999997
- type: mrr_at_5
value: 32.96
- type: ndcg_at_1
value: 21.963
- type: ndcg_at_10
value: 40.859
- type: ndcg_at_100
value: 46.614
- type: ndcg_at_1000
value: 47.789
- type: ndcg_at_3
value: 33.007999999999996
- type: ndcg_at_5
value: 37.084
- type: precision_at_1
value: 21.963
- type: precision_at_10
value: 6.493
- type: precision_at_100
value: 0.938
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.155000000000001
- type: precision_at_5
value: 10.544
- type: recall_at_1
value: 21.404999999999998
- type: recall_at_10
value: 62.175000000000004
- type: recall_at_100
value: 88.786
- type: recall_at_1000
value: 97.738
- type: recall_at_3
value: 40.925
- type: recall_at_5
value: 50.722
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.50661194710442
- type: f1
value: 93.30311193153668
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.24669402644778
- type: f1
value: 54.23122108002977
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.61936785474109
- type: f1
value: 70.52644941025565
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.76529926025555
- type: f1
value: 77.26872729322514
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.39450293021839
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.757796879839294
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.62512146657428
- type: mrr
value: 33.84624322066173
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.462
- type: map_at_10
value: 14.947
- type: map_at_100
value: 19.344
- type: map_at_1000
value: 20.933
- type: map_at_3
value: 10.761999999999999
- type: map_at_5
value: 12.744
- type: mrr_at_1
value: 47.988
- type: mrr_at_10
value: 57.365
- type: mrr_at_100
value: 57.931
- type: mrr_at_1000
value: 57.96
- type: mrr_at_3
value: 54.85
- type: mrr_at_5
value: 56.569
- type: ndcg_at_1
value: 46.129999999999995
- type: ndcg_at_10
value: 38.173
- type: ndcg_at_100
value: 35.983
- type: ndcg_at_1000
value: 44.507000000000005
- type: ndcg_at_3
value: 42.495
- type: ndcg_at_5
value: 41.019
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 28.731
- type: precision_at_100
value: 9.232
- type: precision_at_1000
value: 2.202
- type: precision_at_3
value: 39.628
- type: precision_at_5
value: 35.851
- type: recall_at_1
value: 6.462
- type: recall_at_10
value: 18.968
- type: recall_at_100
value: 37.131
- type: recall_at_1000
value: 67.956
- type: recall_at_3
value: 11.905000000000001
- type: recall_at_5
value: 15.097
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.335
- type: map_at_10
value: 46.611999999999995
- type: map_at_100
value: 47.632000000000005
- type: map_at_1000
value: 47.661
- type: map_at_3
value: 41.876999999999995
- type: map_at_5
value: 44.799
- type: mrr_at_1
value: 34.125
- type: mrr_at_10
value: 49.01
- type: mrr_at_100
value: 49.75
- type: mrr_at_1000
value: 49.768
- type: mrr_at_3
value: 45.153
- type: mrr_at_5
value: 47.589999999999996
- type: ndcg_at_1
value: 34.125
- type: ndcg_at_10
value: 54.777
- type: ndcg_at_100
value: 58.914
- type: ndcg_at_1000
value: 59.521
- type: ndcg_at_3
value: 46.015
- type: ndcg_at_5
value: 50.861000000000004
- type: precision_at_1
value: 34.125
- type: precision_at_10
value: 9.166
- type: precision_at_100
value: 1.149
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 21.147
- type: precision_at_5
value: 15.469
- type: recall_at_1
value: 30.335
- type: recall_at_10
value: 77.194
- type: recall_at_100
value: 94.812
- type: recall_at_1000
value: 99.247
- type: recall_at_3
value: 54.681000000000004
- type: recall_at_5
value: 65.86800000000001
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.62
- type: map_at_10
value: 84.536
- type: map_at_100
value: 85.167
- type: map_at_1000
value: 85.184
- type: map_at_3
value: 81.607
- type: map_at_5
value: 83.423
- type: mrr_at_1
value: 81.36
- type: mrr_at_10
value: 87.506
- type: mrr_at_100
value: 87.601
- type: mrr_at_1000
value: 87.601
- type: mrr_at_3
value: 86.503
- type: mrr_at_5
value: 87.179
- type: ndcg_at_1
value: 81.36
- type: ndcg_at_10
value: 88.319
- type: ndcg_at_100
value: 89.517
- type: ndcg_at_1000
value: 89.60900000000001
- type: ndcg_at_3
value: 85.423
- type: ndcg_at_5
value: 86.976
- type: precision_at_1
value: 81.36
- type: precision_at_10
value: 13.415
- type: precision_at_100
value: 1.529
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.342999999999996
- type: precision_at_5
value: 24.534
- type: recall_at_1
value: 70.62
- type: recall_at_10
value: 95.57600000000001
- type: recall_at_100
value: 99.624
- type: recall_at_1000
value: 99.991
- type: recall_at_3
value: 87.22
- type: recall_at_5
value: 91.654
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 60.826438478212744
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.24027467551447
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.997999999999999
- type: map_at_10
value: 14.267
- type: map_at_100
value: 16.843
- type: map_at_1000
value: 17.229
- type: map_at_3
value: 9.834
- type: map_at_5
value: 11.92
- type: mrr_at_1
value: 24.7
- type: mrr_at_10
value: 37.685
- type: mrr_at_100
value: 38.704
- type: mrr_at_1000
value: 38.747
- type: mrr_at_3
value: 34.150000000000006
- type: mrr_at_5
value: 36.075
- type: ndcg_at_1
value: 24.7
- type: ndcg_at_10
value: 23.44
- type: ndcg_at_100
value: 32.617000000000004
- type: ndcg_at_1000
value: 38.628
- type: ndcg_at_3
value: 21.747
- type: ndcg_at_5
value: 19.076
- type: precision_at_1
value: 24.7
- type: precision_at_10
value: 12.47
- type: precision_at_100
value: 2.564
- type: precision_at_1000
value: 0.4
- type: precision_at_3
value: 20.767
- type: precision_at_5
value: 17.06
- type: recall_at_1
value: 4.997999999999999
- type: recall_at_10
value: 25.3
- type: recall_at_100
value: 52.048
- type: recall_at_1000
value: 81.093
- type: recall_at_3
value: 12.642999999999999
- type: recall_at_5
value: 17.312
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.44942006292234
- type: cos_sim_spearman
value: 79.80930790660699
- type: euclidean_pearson
value: 82.93400777494863
- type: euclidean_spearman
value: 80.04664991110705
- type: manhattan_pearson
value: 82.93551681854949
- type: manhattan_spearman
value: 80.03156736837379
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.63574059135726
- type: cos_sim_spearman
value: 76.80552915288186
- type: euclidean_pearson
value: 82.46368529820518
- type: euclidean_spearman
value: 76.60338474719275
- type: manhattan_pearson
value: 82.4558617035968
- type: manhattan_spearman
value: 76.57936082895705
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 86.24116811084211
- type: cos_sim_spearman
value: 88.10998662068769
- type: euclidean_pearson
value: 87.04961732352689
- type: euclidean_spearman
value: 88.12543945864087
- type: manhattan_pearson
value: 86.9905224528854
- type: manhattan_spearman
value: 88.07827944705546
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.74847296555048
- type: cos_sim_spearman
value: 82.66200957916445
- type: euclidean_pearson
value: 84.48132256004965
- type: euclidean_spearman
value: 82.67915286000596
- type: manhattan_pearson
value: 84.44950477268334
- type: manhattan_spearman
value: 82.63327639173352
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.23056258027053
- type: cos_sim_spearman
value: 88.92791680286955
- type: euclidean_pearson
value: 88.13819235461933
- type: euclidean_spearman
value: 88.87294661361716
- type: manhattan_pearson
value: 88.14212133687899
- type: manhattan_spearman
value: 88.88551854529777
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.64179522732887
- type: cos_sim_spearman
value: 84.25028809903114
- type: euclidean_pearson
value: 83.40175015236979
- type: euclidean_spearman
value: 84.23369296429406
- type: manhattan_pearson
value: 83.43768174261321
- type: manhattan_spearman
value: 84.27855229214734
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.20378955494732
- type: cos_sim_spearman
value: 88.46863559173111
- type: euclidean_pearson
value: 88.8249295811663
- type: euclidean_spearman
value: 88.6312737724905
- type: manhattan_pearson
value: 88.87744466378827
- type: manhattan_spearman
value: 88.82908423767314
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 69.91342028796086
- type: cos_sim_spearman
value: 69.71495021867864
- type: euclidean_pearson
value: 70.65334330405646
- type: euclidean_spearman
value: 69.4321253472211
- type: manhattan_pearson
value: 70.59743494727465
- type: manhattan_spearman
value: 69.11695509297482
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.42451709766952
- type: cos_sim_spearman
value: 86.07166710670508
- type: euclidean_pearson
value: 86.12711421258899
- type: euclidean_spearman
value: 86.05232086925126
- type: manhattan_pearson
value: 86.15591089932126
- type: manhattan_spearman
value: 86.0890128623439
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.1976344717285
- type: mrr
value: 96.3703145075694
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 59.511
- type: map_at_10
value: 69.724
- type: map_at_100
value: 70.208
- type: map_at_1000
value: 70.22800000000001
- type: map_at_3
value: 66.986
- type: map_at_5
value: 68.529
- type: mrr_at_1
value: 62.333000000000006
- type: mrr_at_10
value: 70.55
- type: mrr_at_100
value: 70.985
- type: mrr_at_1000
value: 71.004
- type: mrr_at_3
value: 68.611
- type: mrr_at_5
value: 69.728
- type: ndcg_at_1
value: 62.333000000000006
- type: ndcg_at_10
value: 74.265
- type: ndcg_at_100
value: 76.361
- type: ndcg_at_1000
value: 76.82900000000001
- type: ndcg_at_3
value: 69.772
- type: ndcg_at_5
value: 71.94800000000001
- type: precision_at_1
value: 62.333000000000006
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.444000000000003
- type: precision_at_5
value: 18
- type: recall_at_1
value: 59.511
- type: recall_at_10
value: 87.156
- type: recall_at_100
value: 96.5
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 75.2
- type: recall_at_5
value: 80.661
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81683168316832
- type: cos_sim_ap
value: 95.74716566563774
- type: cos_sim_f1
value: 90.64238745574103
- type: cos_sim_precision
value: 91.7093142272262
- type: cos_sim_recall
value: 89.60000000000001
- type: dot_accuracy
value: 99.69405940594059
- type: dot_ap
value: 91.09013507754594
- type: dot_f1
value: 84.54227113556779
- type: dot_precision
value: 84.58458458458459
- type: dot_recall
value: 84.5
- type: euclidean_accuracy
value: 99.81782178217821
- type: euclidean_ap
value: 95.6324301072609
- type: euclidean_f1
value: 90.58341862845445
- type: euclidean_precision
value: 92.76729559748428
- type: euclidean_recall
value: 88.5
- type: manhattan_accuracy
value: 99.81980198019802
- type: manhattan_ap
value: 95.68510494437183
- type: manhattan_f1
value: 90.58945191313342
- type: manhattan_precision
value: 93.79014989293361
- type: manhattan_recall
value: 87.6
- type: max_accuracy
value: 99.81980198019802
- type: max_ap
value: 95.74716566563774
- type: max_f1
value: 90.64238745574103
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 67.63761899427078
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.572473369697235
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.63000245208579
- type: mrr
value: 54.504193722943725
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.300791939416545
- type: cos_sim_spearman
value: 31.662904057924123
- type: dot_pearson
value: 26.21198530758316
- type: dot_spearman
value: 27.006921548904263
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.197
- type: map_at_10
value: 1.752
- type: map_at_100
value: 10.795
- type: map_at_1000
value: 27.18
- type: map_at_3
value: 0.5890000000000001
- type: map_at_5
value: 0.938
- type: mrr_at_1
value: 74
- type: mrr_at_10
value: 85.833
- type: mrr_at_100
value: 85.833
- type: mrr_at_1000
value: 85.833
- type: mrr_at_3
value: 85.333
- type: mrr_at_5
value: 85.833
- type: ndcg_at_1
value: 69
- type: ndcg_at_10
value: 70.22
- type: ndcg_at_100
value: 55.785
- type: ndcg_at_1000
value: 52.93600000000001
- type: ndcg_at_3
value: 72.084
- type: ndcg_at_5
value: 71.184
- type: precision_at_1
value: 74
- type: precision_at_10
value: 75.2
- type: precision_at_100
value: 57.3
- type: precision_at_1000
value: 23.302
- type: precision_at_3
value: 77.333
- type: precision_at_5
value: 75.6
- type: recall_at_1
value: 0.197
- type: recall_at_10
value: 2.019
- type: recall_at_100
value: 14.257
- type: recall_at_1000
value: 50.922
- type: recall_at_3
value: 0.642
- type: recall_at_5
value: 1.043
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.803
- type: map_at_10
value: 10.407
- type: map_at_100
value: 16.948
- type: map_at_1000
value: 18.424
- type: map_at_3
value: 5.405
- type: map_at_5
value: 6.908
- type: mrr_at_1
value: 36.735
- type: mrr_at_10
value: 50.221000000000004
- type: mrr_at_100
value: 51.388
- type: mrr_at_1000
value: 51.402
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 49.626
- type: ndcg_at_1
value: 34.694
- type: ndcg_at_10
value: 25.507
- type: ndcg_at_100
value: 38.296
- type: ndcg_at_1000
value: 49.492000000000004
- type: ndcg_at_3
value: 29.006999999999998
- type: ndcg_at_5
value: 25.979000000000003
- type: precision_at_1
value: 36.735
- type: precision_at_10
value: 22.041
- type: precision_at_100
value: 8.02
- type: precision_at_1000
value: 1.567
- type: precision_at_3
value: 28.571
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.803
- type: recall_at_10
value: 16.378
- type: recall_at_100
value: 50.489
- type: recall_at_1000
value: 85.013
- type: recall_at_3
value: 6.505
- type: recall_at_5
value: 9.243
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.55579999999999
- type: ap
value: 14.206982753316227
- type: f1
value: 54.372142814964285
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 56.57611771363893
- type: f1
value: 56.924172639063144
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 52.82304915719759
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.92716218632653
- type: cos_sim_ap
value: 73.73359122546046
- type: cos_sim_f1
value: 68.42559487116262
- type: cos_sim_precision
value: 64.22124508215691
- type: cos_sim_recall
value: 73.21899736147758
- type: dot_accuracy
value: 80.38981939560112
- type: dot_ap
value: 54.61060862444974
- type: dot_f1
value: 53.45710627400769
- type: dot_precision
value: 44.87638839125761
- type: dot_recall
value: 66.09498680738787
- type: euclidean_accuracy
value: 86.02849138701794
- type: euclidean_ap
value: 73.95673761922404
- type: euclidean_f1
value: 68.6783042394015
- type: euclidean_precision
value: 65.1063829787234
- type: euclidean_recall
value: 72.66490765171504
- type: manhattan_accuracy
value: 85.9808070572808
- type: manhattan_ap
value: 73.9050720058029
- type: manhattan_f1
value: 68.57560618983794
- type: manhattan_precision
value: 63.70839936608558
- type: manhattan_recall
value: 74.24802110817942
- type: max_accuracy
value: 86.02849138701794
- type: max_ap
value: 73.95673761922404
- type: max_f1
value: 68.6783042394015
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.72783017037295
- type: cos_sim_ap
value: 85.52705223340233
- type: cos_sim_f1
value: 77.91659078492079
- type: cos_sim_precision
value: 73.93378032764221
- type: cos_sim_recall
value: 82.35294117647058
- type: dot_accuracy
value: 85.41739434159972
- type: dot_ap
value: 77.17734818118443
- type: dot_f1
value: 71.63473589973144
- type: dot_precision
value: 66.96123719622415
- type: dot_recall
value: 77.00954727440714
- type: euclidean_accuracy
value: 88.68125897465751
- type: euclidean_ap
value: 85.47712213906692
- type: euclidean_f1
value: 77.81419950830664
- type: euclidean_precision
value: 75.37162649733006
- type: euclidean_recall
value: 80.42038805050817
- type: manhattan_accuracy
value: 88.67349710870494
- type: manhattan_ap
value: 85.46506475241955
- type: manhattan_f1
value: 77.87259084890393
- type: manhattan_precision
value: 74.54929577464789
- type: manhattan_recall
value: 81.50600554357868
- type: max_accuracy
value: 88.72783017037295
- type: max_ap
value: 85.52705223340233
- type: max_f1
value: 77.91659078492079
language:
- en
license: mit
---
# gte-large
General Text Embeddings (GTE) model. [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281)
The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently offer three different sizes of models, including [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc.
## Metrics
We compared the performance of the GTE models with other popular text embedding models on the MTEB benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**gte-large**](https://huggingface.co/thenlper/gte-large) | 0.67 | 1024 | 512 | **63.13** | 46.84 | 85.00 | 59.13 | 52.22 | 83.35 | 31.66 | 73.33 |
| [**gte-base**](https://huggingface.co/thenlper/gte-base) | 0.22 | 768 | 512 | **62.39** | 46.2 | 84.57 | 58.61 | 51.14 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1.34 | 1024| 512 | 62.25 | 44.49 | 86.03 | 56.61 | 50.56 | 82.05 | 30.19 | 75.24 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.44 | 768 | 512 | 61.5 | 43.80 | 85.73 | 55.91 | 50.29 | 81.05 | 30.28 | 73.84 |
| [**gte-small**](https://huggingface.co/thenlper/gte-small) | 0.07 | 384 | 512 | **61.36** | 44.89 | 83.54 | 57.7 | 49.46 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | - | 1536 | 8192 | 60.99 | 45.9 | 84.89 | 56.32 | 49.25 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.13 | 384 | 512 | 59.93 | 39.92 | 84.67 | 54.32 | 49.04 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 9.73 | 768 | 512 | 59.51 | 43.72 | 85.06 | 56.42 | 42.24 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 0.44 | 768 | 514 | 57.78 | 43.69 | 83.04 | 59.36 | 43.81 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 28.27 | 4096 | 2048 | 57.59 | 38.93 | 81.9 | 55.65 | 48.22 | 77.74 | 33.6 | 66.19 |
| [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 0.13 | 384 | 512 | 56.53 | 41.81 | 82.41 | 58.44 | 42.69 | 79.8 | 27.9 | 63.21 |
| [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 0.09 | 384 | 512 | 56.26 | 42.35 | 82.37 | 58.04 | 41.95 | 78.9 | 30.81 | 63.05 |
| [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 0.44 | 768 | 512 | 56.00 | 41.1 | 82.54 | 53.14 | 41.88 | 76.51 | 30.36 | 66.68 |
| [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.22 | 768 | 512 | 55.27 | 40.21 | 85.18 | 53.09 | 33.63 | 81.14 | 31.39 | 69.81 |
## Usage
Code example
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
input_texts = [
"what is the capital of China?",
"how to implement quick sort in python?",
"Beijing",
"sorting algorithms"
]
tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-large")
model = AutoModel.from_pretrained("thenlper/gte-large")
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```
Use with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = ['That is a happy person', 'That is a very happy person']
model = SentenceTransformer('thenlper/gte-large')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
### Limitation
This model exclusively caters to English texts, and any lengthy texts will be truncated to a maximum of 512 tokens.
### Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}
``` |
Elron/bleurt-base-128 | Elron | "2021-10-04T13:24:42Z" | 254,996 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:04Z" | ## BLEURT
Pytorch version of the original BLEURT models from ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion was originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-base-128")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-base-128")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([0.3598, 0.0723])
```
|
MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7 | MoritzLaurer | "2024-04-11T13:49:19Z" | 253,833 | 240 | transformers | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"nli",
"multilingual",
"zh",
"ja",
"ar",
"ko",
"de",
"fr",
"es",
"pt",
"hi",
"id",
"it",
"tr",
"ru",
"bn",
"ur",
"mr",
"ta",
"vi",
"fa",
"pl",
"uk",
"nl",
"sv",
"he",
"sw",
"ps",
"dataset:MoritzLaurer/multilingual-NLI-26lang-2mil7",
"dataset:xnli",
"dataset:multi_nli",
"dataset:facebook/anli",
"dataset:fever",
"dataset:lingnli",
"dataset:alisawuffles/WANLI",
"arxiv:2111.09543",
"arxiv:2104.07179",
"arxiv:1809.05053",
"arxiv:1911.02116",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | "2022-08-22T16:59:35Z" | ---
language:
- multilingual
- zh
- ja
- ar
- ko
- de
- fr
- es
- pt
- hi
- id
- it
- tr
- ru
- bn
- ur
- mr
- ta
- vi
- fa
- pl
- uk
- nl
- sv
- he
- sw
- ps
license: mit
tags:
- zero-shot-classification
- text-classification
- nli
- pytorch
datasets:
- MoritzLaurer/multilingual-NLI-26lang-2mil7
- xnli
- multi_nli
- facebook/anli
- fever
- lingnli
- alisawuffles/WANLI
metrics:
- accuracy
pipeline_tag: zero-shot-classification
widget:
- text: Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU
candidate_labels: politics, economy, entertainment, environment
model-index:
- name: DeBERTa-v3-base-xnli-multilingual-nli-2mil7
results:
- task:
type: text-classification
name: Natural Language Inference
dataset:
name: MultiNLI-matched
type: multi_nli
split: validation_matched
metrics:
- type: accuracy
value: 0.857
verified: false
- task:
type: text-classification
name: Natural Language Inference
dataset:
name: MultiNLI-mismatched
type: multi_nli
split: validation_mismatched
metrics:
- type: accuracy
value: 0.856
verified: false
- task:
type: text-classification
name: Natural Language Inference
dataset:
name: ANLI-all
type: anli
split: test_r1+test_r2+test_r3
metrics:
- type: accuracy
value: 0.537
verified: false
- task:
type: text-classification
name: Natural Language Inference
dataset:
name: ANLI-r3
type: anli
split: test_r3
metrics:
- type: accuracy
value: 0.497
verified: false
- task:
type: text-classification
name: Natural Language Inference
dataset:
name: WANLI
type: alisawuffles/WANLI
split: test
metrics:
- type: accuracy
value: 0.732
verified: false
- task:
type: text-classification
name: Natural Language Inference
dataset:
name: LingNLI
type: lingnli
split: test
metrics:
- type: accuracy
value: 0.788
verified: false
- task:
type: text-classification
name: Natural Language Inference
dataset:
name: fever-nli
type: fever-nli
split: test
metrics:
- type: accuracy
value: 0.761
verified: false
---
# Model card for mDeBERTa-v3-base-xnli-multilingual-nli-2mil7
## Model description
This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying mDeBERTa-v3-base model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100) with 100 languages. The model was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli) and on the [multilingual-NLI-26lang-2mil7 dataset](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7). Both datasets contain more than 2.7 million hypothesis-premise pairs in 27 languages spoken by more than 4 billion people.
As of December 2021, mDeBERTa-v3-base is the best performing multilingual base-sized transformer model introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7")
sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on the [multilingual-nli-26lang-2mil7 dataset](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7) and the [XNLI](https://huggingface.co/datasets/xnli) validation dataset.
The multilingual-nli-26lang-2mil7 dataset contains 2 730 000 NLI hypothesis-premise pairs in 26 languages spoken by more than 4 billion people. The dataset contains 105 000 text pairs per language. It is based on the English datasets [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [ANLI](https://huggingface.co/datasets/anli), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI) and was created using the latest open-source machine translation models. The languages in the dataset are: ['ar', 'bn', 'de', 'es', 'fa', 'fr', 'he', 'hi', 'id', 'it', 'ja', 'ko', 'mr', 'nl', 'pl', 'ps', 'pt', 'ru', 'sv', 'sw', 'ta', 'tr', 'uk', 'ur', 'vi', 'zh'] (see [ISO language codes](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes). For more details, see the [datasheet](XXX). In addition, a sample of 105 000 text pairs was also added for English following the same sampling method as the other languages, leading to 27 languages.
Moreover, for each language a random set of 10% of the hypothesis-premise pairs was added where an English hypothesis was paired with the premise in the other language (and the same for English premises and other language hypotheses). This mix of languages in the text pairs should enable users to formulate a hypothesis in English for a target text in another language.
The [XNLI](https://huggingface.co/datasets/xnli) validation set consists of 2490 professionally translated texts from English to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)). Note that XNLI also contains a training set of 14 machine translated versions of the MultiNLI dataset for 14 languages, but this data was excluded due to quality issues with the machine translations from 2018.
Note that for evaluation purposes, three languages were excluded from the XNLI training data and only included in the test data: ["bg","el","th"]. This was done in order to test the performance of the model on languages it has not seen during NLI fine-tuning on 27 languages, but only during pre-training on 100 languages - see evaluation metrics below.
The total training dataset had a size of 3 287 280 hypothesis-premise pairs.
### Training procedure
mDeBERTa-v3-base-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
gradient_accumulation_steps=2, # to double the effective batch size
warmup_ratio=0.06, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
fp16=False
)
```
### Eval results
The model was evaluated on the XNLI test set in 15 languages (5010 texts per language, 75150 in total) and the English test sets of [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [ANLI](https://huggingface.co/datasets/anli), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI). Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training data in the specific language (cross-lingual transfer). This means that the model is also able to do NLI on the other 73 languages mDeBERTa was pre-trained on, but performance is most likely lower than for those languages seen during NLI fine-tuning. The performance on the languages ["bg","el","th"] in the table below is a good indicator of this cross-lingual transfer, as these languages were not included in the training data.
|XNLI subsets|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|
| :---: |:---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.794|0.822|0.824|0.809|0.871|0.832|0.823|0.769|0.803|0.746|0.786|0.792|0.744|0.793|0.803|
|Speed (text/sec, A100-GPU)|1344.0|1355.0|1472.0|1149.0|1697.0|1446.0|1278.0|1115.0|1380.0|1463.0|1713.0|1594.0|1189.0|877.0|1887.0|
|English Datasets|mnli_test_m|mnli_test_mm|anli_test|anli_test_r3|fever_test|ling_test|wanli_test|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.857|0.856|0.537|0.497|0.761|0.788|0.732|0.794|
|Speed (text/sec, A100-GPU)|1000.0|1009.0|794.0|672.0|374.0|1177.0|1468.0|
Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing, since none of the latest papers shows a multilingual average performance of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).
## Limitations and bias
Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases. Moreover, note that the multilingual-nli-26lang-2mil7 dataset was created using machine translation, which reduces the quality of the data for a complex task like NLI. You can inspect the data via the Hugging Face [dataset viewer](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7) for languages you are interested in. Note that grammatical errors introduced by machine translation are less of an issue for zero-shot classification, for which grammar is less important.
## Citation
If the dataset is useful for you, please cite the following article:
```
@article{laurer_less_2022,
title = {Less {Annotating}, {More} {Classifying} – {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT} - {NLI}},
url = {https://osf.io/74b8k},
language = {en-us},
urldate = {2022-07-28},
journal = {Preprint},
author = {Laurer, Moritz and Atteveldt, Wouter van and Casas, Andreu Salleras and Welbers, Kasper},
month = jun,
year = {2022},
note = {Publisher: Open Science Framework},
}
```
## Ideas for cooperation or questions?
For updates on new models and datasets, follow me on [Twitter](https://twitter.com/MoritzLaurer).
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or on [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
## Debugging and issues
Note that DeBERTa-v3 was released in late 2021 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 or higher might solve some issues. Note that mDeBERTa currently does not support FP16, see here: https://github.com/microsoft/DeBERTa/issues/77
|
microsoft/trocr-small-handwritten | microsoft | "2024-05-27T20:11:19Z" | 253,496 | 32 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"trocr",
"image-to-text",
"arxiv:2109.10282",
"endpoints_compatible",
"region:us"
] | image-to-text | "2022-03-02T23:29:05Z" | ---
tags:
- trocr
- image-to-text
widget:
- src: https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg
example_title: Note 1
- src: https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSoolxi9yWGAT5SLZShv8vVd0bz47UWRzQC19fDTeE8GmGv_Rn-PCF1pP1rrUx8kOjA4gg&usqp=CAU
example_title: Note 2
- src: https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRNYtTuSBpZPV_nkBYPMFwVVD9asZOPgHww4epu9EqWgDmXW--sE2o8og40ZfDGo87j5w&usqp=CAU
example_title: Note 3
---
# TrOCR (small-sized model, fine-tuned on IAM)
TrOCR model fine-tuned on the [IAM dataset](https://fki.tic.heia-fr.ch/databases/iam-handwriting-database). It was introduced in the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/trocr).
## Model description
The TrOCR model is an encoder-decoder model, consisting of an image Transformer as encoder, and a text Transformer as decoder. The image encoder was initialized from the weights of DeiT, while the text decoder was initialized from the weights of UniLM.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Next, the Transformer text decoder autoregressively generates tokens.
## Intended uses & limitations
You can use the raw model for optical character recognition (OCR) on single text-line images. See the [model hub](https://huggingface.co/models?search=microsoft/trocr) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
import requests
# load image from the IAM database
url = 'https://fki.tic.heia-fr.ch/static/img/a01-122-02-00.jpg'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-small-handwritten')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-small-handwritten')
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### BibTeX entry and citation info
```bibtex
@misc{li2021trocr,
title={TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models},
author={Minghao Li and Tengchao Lv and Lei Cui and Yijuan Lu and Dinei Florencio and Cha Zhang and Zhoujun Li and Furu Wei},
year={2021},
eprint={2109.10282},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
sentence-transformers/paraphrase-MiniLM-L3-v2 | sentence-transformers | "2024-03-27T12:09:47Z" | 252,598 | 20 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:s2orc",
"dataset:ms_marco",
"dataset:wiki_atomic_edits",
"dataset:snli",
"dataset:multi_nli",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/coco_captions",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/QQP",
"dataset:yahoo_answers_topics",
"arxiv:1908.10084",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- flax-sentence-embeddings/stackexchange_xml
- s2orc
- ms_marco
- wiki_atomic_edits
- snli
- multi_nli
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/flickr30k-captions
- embedding-data/coco_captions
- embedding-data/sentence-compression
- embedding-data/QQP
- yahoo_answers_topics
pipeline_tag: sentence-similarity
---
# sentence-transformers/paraphrase-MiniLM-L3-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L3-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L3-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L3-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L3-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
facebook/esmfold_v1 | facebook | "2023-03-22T17:39:28Z" | 252,430 | 19 | transformers | [
"transformers",
"pytorch",
"esm",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2022-11-01T18:24:14Z" | ---
license: mit
---
# ESMFold
ESMFold is a state-of-the-art end-to-end protein folding model based on an ESM-2 backbone. It does not require any lookup or MSA step, and therefore does not require any external databases to be present in order to make predictions. As a result, inference time is very significantly faster than AlphaFold2. For details on the model architecture and training, please refer to the [accompanying paper](https://www.science.org/doi/10.1126/science.ade2574).
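A minimal sketch of running the model through the `transformers` API is shown below (it assumes a recent `transformers` release with ESMFold support; the protein sequence is an arbitrary illustration):
```python
from transformers import AutoTokenizer, EsmForProteinFolding
import torch

tokenizer = AutoTokenizer.from_pretrained("facebook/esmfold_v1")
model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")
model.eval()

# Arbitrary example sequence in single-letter amino acid codes
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSG"

inputs = tokenizer([sequence], return_tensors="pt", add_special_tokens=False)
with torch.no_grad():
    outputs = model(**inputs)

# Predicted atom coordinates for the folded structure
print(outputs.positions.shape)
```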
If you're interested in using ESMFold in practice, please check out the associated [tutorial notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_folding.ipynb). |
facebook/esm2_t6_8M_UR50D | facebook | "2023-03-21T15:05:17Z" | 250,647 | 14 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"esm",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-09-26T18:44:55Z" | ---
license: mit
widget:
- text: "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
---
## ESM-2
ESM-2 is a state-of-the-art protein model trained on a masked language modelling objective. It is suitable for fine-tuning on a wide range of tasks that take protein sequences as input. For detailed information on the model architecture and training data, please refer to the [accompanying paper](https://www.biorxiv.org/content/10.1101/2022.07.20.500902v2). You may also be interested in some demo notebooks ([PyTorch](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb), [TensorFlow](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb)) which demonstrate how to fine-tune ESM-2 models on your tasks of interest.
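As a quick illustration of the masked language modelling interface, here is a minimal sketch using this checkpoint (the sequence mirrors the widget example above; the top-5 decoding at the end is only for demonstration):
```python
from transformers import AutoTokenizer, EsmForMaskedLM
import torch

tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = EsmForMaskedLM.from_pretrained("facebook/esm2_t6_8M_UR50D")

sequence = "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and print the five most likely amino acids for it
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_positions[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```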
Several ESM-2 checkpoints are available in the Hub with varying sizes. Larger sizes generally have somewhat better accuracy, but require much more memory and time to train:
| Checkpoint name | Num layers | Num parameters |
|------------------------------|----|----------|
| [esm2_t48_15B_UR50D](https://huggingface.co/facebook/esm2_t48_15B_UR50D) | 48 | 15B |
| [esm2_t36_3B_UR50D](https://huggingface.co/facebook/esm2_t36_3B_UR50D) | 36 | 3B |
| [esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D) | 33 | 650M |
| [esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) | 30 | 150M |
| [esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) | 12 | 35M |
| [esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) | 6 | 8M | |
ghunkins/prompt-expansion | ghunkins | "2023-12-08T18:44:56Z" | 249,059 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-08T18:41:51Z" | ---
license: creativeml-openrail-m
---
|
jinaai/jina-colbert-v1-en | jinaai | "2024-03-02T09:14:45Z" | 248,106 | 84 | transformers | [
"transformers",
"safetensors",
"bert",
"ColBERT",
"passage-retrieval",
"custom_code",
"en",
"dataset:ms_marco",
"arxiv:2310.19923",
"arxiv:2108.12409",
"arxiv:2004.12832",
"arxiv:2112.01488",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-01-23T09:23:52Z" | ---
license: apache-2.0
language:
- en
tags:
- ColBERT
- passage-retrieval
datasets:
- ms_marco
---
<br><br>
<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>Trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
# Jina-ColBERT
**Jina-ColBERT is a ColBERT-style model based on JinaBERT, so it supports both an _8k context length_ and _fast, accurate retrieval_.**
[JinaBERT](https://arxiv.org/abs/2310.19923) is a BERT architecture that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence lengths. The Jina-ColBERT model is trained on the MSMARCO passage ranking dataset, following a training procedure very similar to that of ColBERTv2. The only difference is that we use `jina-bert-v2-base-en` as the backbone instead of `bert-base-uncased`.
For more information about ColBERT, please refer to the [ColBERTv1](https://arxiv.org/abs/2004.12832) and [ColBERTv2](https://arxiv.org/abs/2112.01488v3) paper, and [the original code](https://github.com/stanford-futuredata/ColBERT).
## Usage
### Installation
To use this model, you will need to install the **latest version** of the ColBERT repository:
```bash
pip install git+https://github.com/stanford-futuredata/ColBERT.git torch
conda install -c conda-forge faiss-gpu # use conda to install the latest version faiss
```
### Indexing
```python
from colbert import Indexer
from colbert.infra import Run, RunConfig, ColBERTConfig
n_gpu: int = 1 # Set your number of available GPUs
experiment: str = "" # Name of the folder where the logs and created indices will be stored
index_name: str = "" # The name of your index, i.e. the name of your vector database
if __name__ == "__main__":
with Run().context(RunConfig(nranks=n_gpu, experiment=experiment)):
config = ColBERTConfig(
doc_maxlen=8192 # Our model supports 8k context length for indexing long documents
)
indexer = Indexer(
checkpoint="jinaai/jina-colbert-v1-en",
config=config,
)
documents = [
"ColBERT is an efficient and effective passage retrieval model.",
"Jina-ColBERT is a ColBERT-style model but based on JinaBERT so it can support both 8k context length.",
"JinaBERT is a BERT architecture that supports the symmetric bidirectional variant of ALiBi to allow longer sequence length.",
"Jina-ColBERT model is trained on MSMARCO passage ranking dataset, following a very similar training procedure with ColBERTv2.",
"Jina-ColBERT achieves the competitive retrieval performance with ColBERTv2.",
"Jina is an easier way to build neural search systems.",
"You can use Jina-ColBERT to build neural search systems with ease.",
# Add more documents here to ensure the clustering work correctly
]
indexer.index(name=index_name, collection=documents)
```
### Searching
```python
from colbert import Searcher
from colbert.infra import Run, RunConfig, ColBERTConfig
n_gpu: int = 0
experiment: str = "" # Name of the folder where the logs and created indices will be stored
index_name: str = "" # Name of your previously created index where the documents you want to search are stored.
k: int = 10 # how many results you want to retrieve
if __name__ == "__main__":
with Run().context(RunConfig(nranks=n_gpu, experiment=experiment)):
config = ColBERTConfig(
query_maxlen=128 # Although the model supports 8k context length, we suggest not to use a very long query, as it may cause significant computational complexity and CUDA memory usage.
)
searcher = Searcher(
index=index_name,
config=config
) # You don't need to specify the checkpoint again, the model name is stored in the index.
query = "How to use ColBERT for indexing long documents?"
results = searcher.search(query, k=k)
# results: tuple of tuples of length k containing ((passage_id, passage_rank, passage_score), ...)
```
### Creating Vectors
```python
from colbert.modeling.checkpoint import Checkpoint
from colbert.infra import ColBERTConfig
ckpt = Checkpoint("jinaai/jina-colbert-v1-en", colbert_config=ColBERTConfig(root="experiments"))
query_vectors = ckpt.queryFromText(["What does ColBERT do?", "This is a search query?"], bsize=16)
print(query_vectors)
```
Complete working Colab Notebook is [here](https://colab.research.google.com/drive/1-5WGEYPSBNBg-Z0bGFysyvckFuM8imrg)
### Reranking Using ColBERT
```python
import torch
import numpy
from colbert.modeling.checkpoint import Checkpoint
from colbert.modeling.colbert import colbert_score
from colbert.infra import ColBERTConfig

query = ["How to use ColBERT for indexing long documents?"]
documents = [
    "ColBERT is an efficient and effective passage retrieval model.",
    "Jina-ColBERT is a ColBERT-style model but based on JinaBERT so it can support both 8k context length.",
    "JinaBERT is a BERT architecture that supports the symmetric bidirectional variant of ALiBi to allow longer sequence length.",
    "Jina-ColBERT model is trained on MSMARCO passage ranking dataset, following a very similar training procedure with ColBERTv2.",
]

config = ColBERTConfig(query_maxlen=32, doc_maxlen=512)
ckpt = Checkpoint("jinaai/jina-colbert-v1-en", colbert_config=config)

# Encode the query and the candidate documents
Q = ckpt.queryFromText(query)
D = ckpt.docFromText(documents, bsize=32)[0]
D_mask = torch.ones(D.shape[:2], dtype=torch.long)

# Late-interaction (MaxSim) scoring, then rank documents from most to least relevant
scores = colbert_score(Q, D, D_mask).flatten().cpu().numpy().tolist()
ranking = numpy.argsort(scores)[::-1]
print(ranking)
```
## Evaluation Results
**TL;DR:** Our Jina-ColBERT achieves retrieval performance competitive with [ColBERTv2](https://huggingface.co/colbert-ir/colbertv2.0) on all benchmarks, and outperforms ColBERTv2 on datasets where documents have longer context.
### In-domain benchmarks
We evaluate the in-domain performance on the dev subset of MSMARCO passage ranking dataset. We follow the same evaluation settings in the ColBERTv2 paper and rerun the results of ColBERTv2 using the released checkpoint.
| Model | MRR@10 | Recall@50 | Recall@1k |
| --- | :---: | :---: | :---: |
| ColBERTv2 | 39.7 | 86.8 | 97.6 |
| Jina-ColBERT-v1 | 39.0 | 85.6 | 96.2 |
### Out-of-domain benchmarks
Following ColBERTv2, we evaluate the out-of-domain performance on 13 public BEIR datasets and use NDCG@10 as the main metric. We follow the same evaluation settings in the ColBERTv2 paper and rerun the results of ColBERTv2 using the released checkpoint.
Note that both ColBERTv2 and Jina-ColBERT-v1 use only the MSMARCO passage ranking dataset for training, so the results below reflect fully zero-shot performance.
| dataset | ColBERTv2 | Jina-ColBERT-v1 |
| --- | :---: | :---: |
| ArguAna | 46.5 | 49.4 |
| ClimateFEVER | 18.1 | 19.6 |
| DBPedia | 45.2 | 41.3 |
| FEVER | 78.8 | 79.5 |
| FiQA | 35.4 | 36.8 |
| HotPotQA | 67.5 | 65.6 |
| NFCorpus | 33.7 | 33.8 |
| NQ | 56.1 | 54.9 |
| Quora | 85.5 | 82.3 |
| SCIDOCS | 15.4 | 16.9 |
| SciFact | 68.9 | 70.1 |
| TREC-COVID | 72.6 | 75.0 |
| Webis-touché2020 | 26.0 | 27.0 |
| Average | 50.0 | 50.2 |
### Long context datasets
We also evaluate the zero-shot performance on datasets where documents have longer context length and compare with some long-context embedding models. Here we use the [LoCo benchmark](https://www.together.ai/blog/long-context-retrieval-models-with-monarch-mixer), which contains 5 datasets with long context length.
| Model | Used context length | Model max context length | Avg. NDCG@10 |
| --- | :---: | :---: | :---: |
| ColBERTv2 | 512 | 512 | 74.3 |
| Jina-ColBERT-v1 (truncated) | 512* | 8192 | 75.5 |
| Jina-ColBERT-v1 | 8192 | 8192 | 83.7 |
| Jina-embeddings-v2-base-en | 8192 | 8192 | **85.4** |
\* denotes that we truncate the context length to 512 for documents. The context length of queries is all 512.
**To summarize, Jina-ColBERT achieves retrieval performance comparable to ColBERTv2 on all benchmarks, and outperforms ColBERTv2 on datasets where documents have longer context.**
### Reranking Performance
We evaluate the reranking performance of ColBERTv2 and Jina-ColBERT on BEIR. We use BM25 as the first-stage retrieval model. The full evaluation code can be found in [this repo](https://github.com/liuqi6777/eval_reranker).
In summary, Jina-ColBERT outperforms ColBERTv2, even achieving performance comparable to some cross-encoders.
The best model, jina-reranker, will be open-sourced soon!
| Dataset | BM25 | ColBERTv2 | Jina-ColBERT | MiniLM-L-6-v2 | BGE-reranker-base-v1 | BGE-reranker-large-v1 | Jina-reranker-base-v1 |
| --- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Arguana | 29.99 | 33.42 | 33.95 | 30.67 | 23.26 | 25.42 | 42.59 |
| Climate-Fever | 16.51 | 20.66 | 21.87 | 24.70 | 31.60 | 31.98 | 25.49 |
| DBPedia | 31.80 | 42.16 | 41.43 | 43.90 | 41.56 | 43.79 | 43.68 |
| FEVER | 65.13 | 81.07 | 83.49 | 80.77 | 87.07 | 89.11 | 86.10 |
| FiQA | 23.61 | 35.60 | 36.68 | 34.87 | 33.17 | 37.70 | 41.38 |
| HotpotQA | 63.30 | 68.84 | 68.62 | 72.65 | 79.04 | 79.98 | 75.61 |
| NFCorpus | 33.75 | 36.69 | 36.38 | 36.48 | 32.71 | 36.57 | 37.73 |
| NQ | 30.55 | 51.27 | 51.01 | 52.01 | 53.55 | 56.81 | 56.82 |
| Quora | 78.86 | 85.18 | 82.75 | 82.45 | 78.44 | 81.06 | 87.31 |
| SCIDOCS | 14.90 | 15.39 | 16.67 | 16.28 | 15.06 | 16.84 | 19.56 |
| SciFact | 67.89 | 70.23 | 70.95 | 69.53 | 70.62 | 74.14 | 75.01 |
| TREC-COVID | 59.47 | 75.00 | 76.89 | 74.45 | 67.46 | 74.32 | 82.09 |
| Webis-touche2020 | 44.22 | 32.12 | 32.56 | 28.40 | 34.37 | 35.66 | 31.62 |
| Average | 43.08 | 49.82 | 50.25 | 49.78 | 49.84 | 52.57 | **54.23** |
## Plans
We are planning to improve the performance of Jina-ColBERT by fine-tuning on more datasets in the future.
## Other Models
Additionally, we provide the following embedding models, you can also use them for retrieval.
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters.
- [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): 161 million parameters Chinese-English bilingual model.
- [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): 161 million parameters German-English bilingual model.
- [`jina-embeddings-v2-base-es`](https://huggingface.co/jinaai/jina-embeddings-v2-base-es): 161 million parameters Spanish-English bilingual model.
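As a rough sketch of how these embedding models can be used (following the usage documented in their own model cards, which rely on `trust_remote_code` to expose an `encode` helper):
```python
import numpy as np
from transformers import AutoModel

# Cosine similarity between two embedding vectors
cos_sim = lambda a, b: (a @ b.T) / (np.linalg.norm(a) * np.linalg.norm(b))

model = AutoModel.from_pretrained("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True)
embeddings = model.encode(["How is the weather today?", "What is the current weather like today?"])
print(cos_sim(embeddings[0], embeddings[1]))
```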
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas. |
jonatasgrosman/wav2vec2-large-xlsr-53-persian | jonatasgrosman | "2022-12-14T01:57:01Z" | 245,570 | 12 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fa",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language: fa
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Persian by Jonatas Grosman
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fa
type: common_voice
args: fa
metrics:
- name: Test WER
type: wer
value: 30.12
- name: Test CER
type: cer
value: 7.37
---
# Fine-tuned XLSR-53 large model for speech recognition in Persian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Persian using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-persian")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fa"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-persian"
SAMPLES = 5
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| از مهمونداری کنار بکشم | از مهمانداری کنار بکشم |
| برو از مهرداد بپرس. | برو از ماقدعاد به پرس |
| خب ، تو چیكار می كنی؟ | خوب تو چیکار می کنی |
| مسقط پایتخت عمان در عربی به معنای محل سقوط است | مسقط پایتخت عمان در عربی به بعنای محل سقوط است |
| آه، نه اصلاُ! | اهنه اصلا |
| توانست | توانست |
| قصیده فن شعر میگوید ای دوستان | قصیده فن شعر میگوید ایدوستون |
| دو استایل متفاوت دارین | دوبوست داریل و متفاوت بری |
| دو روز قبل از کریسمس ؟ | اون مفتود پش پشش |
| ساعت های کاری چیست؟ | این توری که موشیکل خب |
## Evaluation
The model can be evaluated as follows on the Persian test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fa"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-persian"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-22). Note that the table below may show different results from those already reported; this may be due to specifics of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| jonatasgrosman/wav2vec2-large-xlsr-53-persian | **30.12%** | **7.37%** |
| m3hrdadfi/wav2vec2-large-xlsr-persian-v2 | 33.85% | 8.79% |
| m3hrdadfi/wav2vec2-large-xlsr-persian | 34.37% | 8.98% |
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-persian,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {P}ersian},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-persian}},
year={2021}
}
``` |
TahaDouaji/detr-doc-table-detection | TahaDouaji | "2024-04-12T11:40:21Z" | 245,224 | 41 | transformers | [
"transformers",
"pytorch",
"safetensors",
"detr",
"object-detection",
"arxiv:2005.12872",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | "2022-03-11T15:55:14Z" | ---
tags:
- object-detection
---
# Model Card for detr-doc-table-detection
# Model Details
detr-doc-table-detection is a model trained to detect both **Bordered** and **Borderless** tables in documents, based on [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50).
- **Developed by:** Taha Douaji
- **Shared by [Optional]:** Taha Douaji
- **Model type:** Object Detection
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Parent Model:** [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50)
- **Resources for more information:**
- [Model Demo Space](https://huggingface.co/spaces/trevbeers/pdf-table-extraction)
- [Associated Paper](https://arxiv.org/abs/2005.12872)
# Uses
## Direct Use
This model can be used for the task of object detection.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model was trained on ICDAR2019 Table Dataset
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
# Citation
**BibTeX:**
```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
author = {Nicolas Carion and
Francisco Massa and
Gabriel Synnaeve and
Nicolas Usunier and
Alexander Kirillov and
Sergey Zagoruyko},
title = {End-to-End Object Detection with Transformers},
journal = {CoRR},
volume = {abs/2005.12872},
year = {2020},
url = {https://arxiv.org/abs/2005.12872},
archivePrefix = {arXiv},
eprint = {2005.12872},
timestamp = {Thu, 28 May 2020 17:38:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
# Model Card Authors [optional]
Taha Douaji in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import DetrImageProcessor, DetrForObjectDetection
import torch
from PIL import Image
import requests
image = Image.open("IMAGE_PATH")
processor = DetrImageProcessor.from_pretrained("TahaDouaji/detr-doc-table-detection")
model = DetrForObjectDetection.from_pretrained("TahaDouaji/detr-doc-table-detection")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.9
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
box = [round(i, 2) for i in box.tolist()]
print(
f"Detected {model.config.id2label[label.item()]} with confidence "
f"{round(score.item(), 3)} at location {box}"
)
``` |
openmmlab/upernet-swin-small | openmmlab | "2023-06-23T13:00:02Z" | 245,091 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"upernet",
"vision",
"image-segmentation",
"en",
"arxiv:1807.10221",
"arxiv:2103.14030",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2023-01-13T14:33:22Z" | ---
language: en
license: mit
tags:
- vision
- image-segmentation
model_name: openmmlab/upernet-swin-small
---
# UperNet, Swin Transformer small-sized backbone
UperNet framework for semantic segmentation, leveraging a Swin Transformer backbone. UperNet was introduced in the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Xiao et al.
Combining UperNet with a Swin Transformer backbone was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030).
Disclaimer: The team releasing UperNet + Swin Transformer did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
UperNet is a framework for semantic segmentation. It consists of several components, including a backbone, a Feature Pyramid Network (FPN) and a Pyramid Pooling Module (PPM).
Any visual backbone can be plugged into the UperNet framework. The framework predicts a semantic label per pixel.
![UperNet architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/upernet_architecture.jpg)
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=openmmlab/upernet) to look for
fine-tuned versions (with various backbones) on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/upernet#transformers.UperNetForSemanticSegmentation).
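Below is a minimal sketch of running inference with `UperNetForSemanticSegmentation`; the example image URL is only illustrative, and the documentation linked above remains the full reference.
```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-swin-small")
model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-swin-small")

# any RGB image works here
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits have shape (batch_size, num_labels, height, width); per-pixel class = argmax over labels
segmentation_map = outputs.logits.argmax(dim=1)[0]
print(segmentation_map.shape)
```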
|
google/flan-t5-xl | google | "2023-11-28T09:14:33Z" | 243,667 | 443 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dataset:aqua_rat",
"dataset:esnli",
"dataset:quasc",
"dataset:qed",
"arxiv:2210.11416",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-10-21T15:43:52Z" | ---
language:
- en
- fr
- ro
- de
- multilingual
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
example_title: "Premise and hypothesis"
tags:
- text2text-generation
datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed
license: apache-2.0
---
# Model Card for FLAN-T5 XL
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg"
alt="drawing" width="600"/>
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
# TL;DR
If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks, also covering more languages.
As mentioned in the first few lines of the abstract:
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copied and pasted from the [T5 model card](https://huggingface.co/t5-large).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5)
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", torch_dtype=torch.float16)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
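## Using the `pipeline` API
<details>
<summary> Click to expand </summary>

The snippet below is a minimal sketch using the high-level `text2text-generation` pipeline; the prompt is taken from the widget examples above and `max_new_tokens` is only illustrative.

```python
from transformers import pipeline

# loads the ~3B-parameter checkpoint; a GPU or ample RAM is recommended
generator = pipeline("text2text-generation", model="google/flan-t5-xl")

print(generator("Translate to German: My name is Arthur", max_new_tokens=30))
```

</details>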
# Uses
## Direct Use and Downstream Use
The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:
> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
The information below in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):
> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
## Ethical considerations and risks
> Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
## Known Limitations
> Flan-T5 has not been tested in real world applications.
## Sensitive Use:
> Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.
# Training Details
## Training Data
The model was trained on a mixture of tasks, which includes the tasks described in the table below (from the original paper, figure 2):
![table.png](https://s3.amazonaws.com/moonup/production/uploads/1666363265279-62441d1d9fdefb55a0b7d12c.png)
## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):
> These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.
The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks (1,836 in total) covering several languages. See the table below for some quantitative evaluation:
![image.png](https://s3.amazonaws.com/moonup/production/uploads/1668072995230-62441d1d9fdefb55a0b7d12c.png)
For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
## Results
For full results for FLAN-T5-XL, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11416,
doi = {10.48550/ARXIV.2210.11416},
url = {https://arxiv.org/abs/2210.11416},
author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Scaling Instruction-Finetuned Language Models},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
llava-hf/llava-1.5-7b-hf | llava-hf | "2024-06-28T12:22:35Z" | 242,946 | 131 | transformers | [
"transformers",
"safetensors",
"llava",
"pretraining",
"image-to-text",
"en",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"region:us"
] | image-to-text | "2023-12-05T09:31:24Z" | ---
language:
- en
datasets:
- liuhaotian/LLaVA-Instruct-150K
pipeline_tag: image-to-text
inference: false
arxiv: 2304.08485
---
# LLaVA Model Card
![image/png](https://cdn-uploads.huggingface.co/production/uploads/62441d1d9fdefb55a0b7d12c/FPshq08TKYD0e-qwPLDVO.png)
Below is the model card of the LLaVA 7B model, which is copied from the original LLaVA model card that you can find [here](https://huggingface.co/liuhaotian/llava-v1.5-13b).
Check out also the Google Colab demo to run Llava on a free-tier Google Colab instance: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qsl6cd2c8gGtEW1xV5io7S8NHh-Cp1TV?usp=sharing)
Or check out our Spaces demo! [![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-md-dark.svg)](https://huggingface.co/spaces/llava-hf/llava-4bit)
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LLaVA-v1.5-7B was trained in September 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## How to use the model
First, make sure to have `transformers >= 4.35.3`.
The model supports multi-image and multi-prompt generation, meaning that you can pass multiple images in your prompt. Also make sure to follow the correct prompt template (`USER: xxx\nASSISTANT:`) and add the token `<image>` to the location where you want to query images:
### Using `pipeline`:
Below we used [`"llava-hf/llava-1.5-7b-hf"`](https://huggingface.co/llava-hf/llava-1.5-7b-hf) checkpoint.
```python
from transformers import pipeline
from PIL import Image
import requests
model_id = "llava-hf/llava-1.5-7b-hf"
pipe = pipeline("image-to-text", model=model_id)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:"
outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
>>> {"generated_text": "\nUSER: What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT: Lava"}
```
### Using pure `transformers`:
Below is an example script to run generation in `float16` precision on a GPU device:
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
model_id = "llava-hf/llava-1.5-7b-hf"
prompt = "USER: <image>\nWhat are these?\nASSISTANT:"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
).to(0)
processor = AutoProcessor.from_pretrained(model_id)
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
### Model optimization
#### 4-bit quantization through `bitsandbytes` library
First make sure to install `bitsandbytes`, `pip install bitsandbytes` and make sure to have access to a CUDA compatible GPU device. Simply change the snippet above with:
```diff
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
+ load_in_4bit=True
)
```
#### Use Flash-Attention 2 to further speed-up generation
First make sure to install `flash-attn`. Refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) regarding that package installation. Simply change the snippet above with:
```diff
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
+ use_flash_attention_2=True
).to(0)
```
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved. |
openai/whisper-tiny | openai | "2024-02-29T10:57:33Z" | 242,519 | 205 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"arxiv:2212.04356",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-09-26T06:50:30Z" | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-tiny
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 7.54
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 17.15
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args:
language: hi
metrics:
- name: Test WER
type: wer
value: 141
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
This tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly:
```python
model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe")
```
This forces the model to predict in English under the task of speech recognition.
## Transcription
### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```
## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.
### French to English
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
## Evaluation
This code snippet shows how to evaluate Whisper Tiny on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
7.547098647858638
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of arbitrary length. This is possible with the Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-tiny",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
amazon/chronos-t5-tiny | amazon | "2024-05-13T21:09:18Z" | 242,262 | 29 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"time series",
"forecasting",
"pretrained models",
"foundation models",
"time series foundation models",
"time-series",
"time-series-forecasting",
"arxiv:2403.07815",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | time-series-forecasting | "2024-02-28T07:51:45Z" | ---
license: apache-2.0
pipeline_tag: time-series-forecasting
tags:
- time series
- forecasting
- pretrained models
- foundation models
- time series foundation models
- time-series
---
# Chronos-T5 (Tiny)
Chronos is a family of **pretrained time series forecasting models** based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes.
For details on Chronos models, training data and procedures, and experimental results, please refer to the paper [Chronos: Learning the Language of Time Series](https://arxiv.org/abs/2403.07815).
<p align="center">
<img src="figures/main-figure.png" width="100%">
<br />
<span>
Fig. 1: High-level depiction of Chronos. (<b>Left</b>) The input time series is scaled and quantized to obtain a sequence of tokens. (<b>Center</b>) The tokens are fed into a language model which may either be an encoder-decoder or a decoder-only model. The model is trained using the cross-entropy loss. (<b>Right</b>) During inference, we autoregressively sample tokens from the model and map them back to numerical values. Multiple trajectories are sampled to obtain a predictive distribution.
</span>
</p>
---
## Architecture
The models in this repository are based on the [T5 architecture](https://arxiv.org/abs/1910.10683). The only difference is in the vocabulary size: Chronos-T5 models use 4096 different tokens, compared to 32128 of the original T5 models, resulting in fewer parameters.
| Model | Parameters | Based on |
| ---------------------------------------------------------------------- | ---------- | ---------------------------------------------------------------------- |
| [**chronos-t5-tiny**](https://huggingface.co/amazon/chronos-t5-tiny) | 8M | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) |
| [**chronos-t5-mini**](https://huggingface.co/amazon/chronos-t5-mini) | 20M | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini) |
| [**chronos-t5-small**](https://huggingface.co/amazon/chronos-t5-small) | 46M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small) |
| [**chronos-t5-base**](https://huggingface.co/amazon/chronos-t5-base) | 200M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base) |
| [**chronos-t5-large**](https://huggingface.co/amazon/chronos-t5-large) | 710M | [t5-efficient-large](https://huggingface.co/google/t5-efficient-large) |
## Usage
To perform inference with Chronos models, install the package in the GitHub [companion repo](https://github.com/amazon-science/chronos-forecasting) by running:
```
pip install git+https://github.com/amazon-science/chronos-forecasting.git
```
A minimal example showing how to perform inference using Chronos models:
```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch
from chronos import ChronosPipeline
pipeline = ChronosPipeline.from_pretrained(
"amazon/chronos-t5-tiny",
device_map="cuda",
torch_dtype=torch.bfloat16,
)
df = pd.read_csv("https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv")
# context must be either a 1D tensor, a list of 1D tensors,
# or a left-padded 2D tensor with batch as the first dimension
context = torch.tensor(df["#Passengers"])
prediction_length = 12
forecast = pipeline.predict(context, prediction_length) # shape [num_series, num_samples, prediction_length]
# visualize the forecast
forecast_index = range(len(df), len(df) + prediction_length)
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
plt.figure(figsize=(8, 4))
plt.plot(df["#Passengers"], color="royalblue", label="historical data")
plt.plot(forecast_index, median, color="tomato", label="median forecast")
plt.fill_between(forecast_index, low, high, color="tomato", alpha=0.3, label="80% prediction interval")
plt.legend()
plt.grid()
plt.show()
```
## Citation
If you find Chronos models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):
```
@article{ansari2024chronos,
  author  = {Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
title = {Chronos: Learning the Language of Time Series},
journal = {arXiv preprint arXiv:2403.07815},
year = {2024}
}
```
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This project is licensed under the Apache-2.0 License.
|
timm/vit_base_patch16_224.augreg2_in21k_ft_in1k | timm | "2023-05-06T00:00:25Z" | 241,826 | 7 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2106.10270",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-22T07:24:28Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k
---
# Model card for vit_base_patch16_224.augreg2_in21k_ft_in1k
A Vision Transformer (ViT) image classification model. Trained on ImageNet-21k by paper authors and (re) fine-tuned on ImageNet-1k with additional augmentation and regularization by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.6
- GMACs: 16.9
- Activations (M): 16.5
- Image size: 224 x 224
- **Papers:**
- How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch16_224.augreg2_in21k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch16_224.augreg2_in21k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{steiner2021augreg,
title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
  author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
journal={arXiv preprint arXiv:2106.10270},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
StanfordAIMI/RadBERT | StanfordAIMI | "2022-11-19T01:10:33Z" | 240,999 | 21 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"biobert",
"radbert",
"language-model",
"uncased",
"radiology",
"biomedical",
"en",
"dataset:wikipedia",
"dataset:bookscorpus",
"dataset:pubmed",
"dataset:radreports",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-05-04T00:48:50Z" | ---
widget:
- text: "low lung volumes, [MASK] pulmonary vascularity."
tags:
- fill-mask
- pytorch
- transformers
- bert
- biobert
- radbert
- language-model
- uncased
- radiology
- biomedical
datasets:
- wikipedia
- bookscorpus
- pubmed
- radreports
language:
- en
license: mit
---
RadBERT was continuously pre-trained on radiology reports, starting from a BioBERT initialization.
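## How to use
A minimal sketch using the 🤗 Transformers `fill-mask` pipeline; the example sentence is taken from the widget above.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="StanfordAIMI/RadBERT")

# prints the top predicted tokens with their scores
for prediction in fill_mask("low lung volumes, [MASK] pulmonary vascularity."):
    print(prediction["token_str"], round(prediction["score"], 4))
```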
## Citation
```bibtex
@article{chambon_cook_langlotz_2022,
title={Improved fine-tuning of in-domain transformer model for inferring COVID-19 presence in multi-institutional radiology reports},
DOI={10.1007/s10278-022-00714-8}, journal={Journal of Digital Imaging},
author={Chambon, Pierre and Cook, Tessa S. and Langlotz, Curtis P.},
year={2022}
}
``` |
flair/ner-english | flair | "2021-03-02T22:11:28Z" | 239,738 | 29 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:conll2003",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- conll2003
widget:
- text: "George Washington went to Washington"
---
## English NER in Flair (default model)
This is the standard 4-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **93.06** (corrected CoNLL-03)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-english")
# make example sentence
sentence = Sentence("George Washington went to Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (0.9968)]
Span [5]: "Washington" [− Labels: LOC (0.9994)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington went to Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import CONLL_03
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. get the corpus
corpus: Corpus = CONLL_03()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
# GloVe embeddings
WordEmbeddings('glove'),
# contextual string embeddings, forward
FlairEmbeddings('news-forward'),
# contextual string embeddings, backward
FlairEmbeddings('news-backward'),
]
# embedding stack consists of Flair and GloVe embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/ner-english',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
microsoft/speecht5_tts | microsoft | "2023-11-08T14:37:23Z" | 238,190 | 566 | transformers | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"audio",
"text-to-speech",
"dataset:libritts",
"arxiv:2110.07205",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2023-02-02T12:56:54Z" | ---
license: mit
tags:
- audio
- text-to-speech
datasets:
- libritts
---
# SpeechT5 (TTS task)
SpeechT5 model fine-tuned for speech synthesis (text-to-speech) on LibriTTS.
This model was introduced in [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
SpeechT5 was first released in [this repository](https://github.com/microsoft/SpeechT5/), [original weights](https://huggingface.co/mechanicalsea/speecht5-tts). The license used is [MIT](https://github.com/microsoft/SpeechT5/blob/main/LICENSE).
## Model Description
Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder.
Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder.
Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
- **Developed by:** Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
- **Shared by [optional]:** [Matthijs Hollemans](https://huggingface.co/Matthijs)
- **Model type:** text-to-speech
- **Language(s) (NLP):** [More Information Needed]
- **License:** [MIT](https://github.com/microsoft/SpeechT5/blob/main/LICENSE)
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/microsoft/SpeechT5/](https://github.com/microsoft/SpeechT5/)
- **Paper:** [https://arxiv.org/pdf/2110.07205.pdf](https://arxiv.org/pdf/2110.07205.pdf)
- **Blog Post:** [https://huggingface.co/blog/speecht5](https://huggingface.co/blog/speecht5)
- **Demo:** [https://huggingface.co/spaces/Matthijs/speecht5-tts-demo](https://huggingface.co/spaces/Matthijs/speecht5-tts-demo)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## 🤗 Transformers Usage
You can run SpeechT5 TTS locally with the 🤗 Transformers library.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers), sentencepiece, soundfile and datasets (optional):
```
pip install --upgrade pip
pip install --upgrade transformers sentencepiece datasets[audio]
```
2. Run inference via the `Text-to-Speech` (TTS) pipeline. You can access the SpeechT5 model via the TTS pipeline in just a few lines of code!
```python
from transformers import pipeline
from datasets import load_dataset
import soundfile as sf
import torch
synthesiser = pipeline("text-to-speech", "microsoft/speecht5_tts")
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)
# You can replace this embedding with your own as well.
speech = synthesiser("Hello, my dog is cooler than you!", forward_params={"speaker_embeddings": speaker_embedding})
sf.write("speech.wav", speech["audio"], samplerate=speech["sampling_rate"])
```
3. Run inference via the Transformers modelling code - You can use the processor + generate code to convert text into a mono 16 kHz speech waveform for more fine-grained control.
```python
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan
from datasets import load_dataset
import torch
import soundfile as sf
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
inputs = processor(text="Hello, my dog is cute.", return_tensors="pt")
# load xvector containing speaker's voice characteristics from a dataset
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```
### Fine-tuning the Model
Refer to [this Colab notebook](https://colab.research.google.com/drive/1i7I5pzBcU3WDFarDnzweIj4-sVVoIUFJ) for an example of how to fine-tune SpeechT5 for TTS on a different dataset or a new language.
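For orientation, a minimal sketch of how a single training example could be prepared is shown below. This is not the notebook's exact recipe: the `text` and `audio` column names are placeholder assumptions about your own 16 kHz dataset, and the full procedure (data collator, speaker embeddings, `Seq2SeqTrainer`) is covered in the linked notebook.
```python
from transformers import SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")

def prepare_example(example):
    # "text" and "audio" are assumed column names of your own dataset
    audio = example["audio"]
    processed = processor(
        text=example["text"],
        audio_target=audio["array"],
        sampling_rate=audio["sampling_rate"],
        return_attention_mask=False,
    )
    # strip the batch dimension from the target log-Mel spectrogram
    processed["labels"] = processed["labels"][0]
    return processed
```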
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
You can use this model for speech synthesis. See the [model hub](https://huggingface.co/models?search=speecht5) to look for fine-tuned versions on a task that interests you.
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
LibriTTS
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing [optional]
Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text.
### Training hyperparameters
- **Precision:** [More Information Needed] <!--fp16, bf16, fp8, fp32 -->
- **Regime:** [More Information Needed] <!--mixed precision or not -->
### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets.
After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder.
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@inproceedings{ao-etal-2022-speecht5,
title = {{S}peech{T}5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing},
author = {Ao, Junyi and Wang, Rui and Zhou, Long and Wang, Chengyi and Ren, Shuo and Wu, Yu and Liu, Shujie and Ko, Tom and Li, Qing and Zhang, Yu and Wei, Zhihua and Qian, Yao and Li, Jinyu and Wei, Furu},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
month = {May},
year = {2022},
pages={5723--5738},
}
```
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
- **text-to-speech** to synthesize audio
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
Disclaimer: The team releasing SpeechT5 did not write a model card for this model so this model card has been written by the Hugging Face team.
# Model Card Contact
[More Information Needed]
|
microsoft/wavlm-base-plus-sd | microsoft | "2022-03-25T12:06:46Z" | 237,056 | 7 | transformers | [
"transformers",
"pytorch",
"wavlm",
"audio-frame-classification",
"speech",
"en",
"arxiv:1912.07875",
"arxiv:2106.06909",
"arxiv:2101.00390",
"arxiv:2110.13900",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
language:
- en
tags:
- speech
---
# WavLM-Base-Plus for Speaker Diarization
[Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm)
The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
The model was pre-trained on:
- 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875)
- 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909)
- 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390)
[Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)
Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei
**Abstract**
*Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.*
The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm.
# Fine-tuning details
The model is fine-tuned on the [LibriMix dataset](https://github.com/JorisCos/LibriMix) using just a linear layer for mapping the network outputs.
# Usage
## Speaker Diarization
```python
from transformers import Wav2Vec2FeatureExtractor, WavLMForAudioFrameClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/wavlm-base-plus-sd')
model = WavLMForAudioFrameClassification.from_pretrained('microsoft/wavlm-base-plus-sd')
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt")
logits = model(**inputs).logits
probabilities = torch.sigmoid(logits[0])
# labels is a one-hot array of shape (num_frames, num_speakers)
labels = (probabilities > 0.5).long()
```
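The `labels` tensor can then be summarised per frame, for example to spot overlapped speech (a small illustrative follow-up, not part of the original example):
```python
# labels has shape (num_frames, num_speakers); a 1 marks an active speaker in that frame
num_frames, num_speakers = labels.shape
active_per_frame = labels.sum(dim=-1)  # 0 = silence, 1 = single speaker, 2+ = overlap
print(f"{num_speakers} speaker channels over {num_frames} frames")
print("frames with overlapping speech:", int((active_per_frame > 1).sum()))
```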
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
![design](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/wavlm.png) |
Salesforce/blip2-opt-2.7b | Salesforce | "2024-03-22T11:58:17Z" | 235,950 | 278 | transformers | [
"transformers",
"pytorch",
"safetensors",
"blip-2",
"visual-question-answering",
"vision",
"image-to-text",
"image-captioning",
"en",
"arxiv:2301.12597",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | "2023-02-06T16:21:49Z" | ---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
---
# BLIP-2, OPT-2.7b, pre-trained only
BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
### Memory requirements
The memory requirements differ based on the precision one uses. One can use 4-bit inference using [Bitsandbytes](https://huggingface.co/blog/4bit-transformers-bitsandbytes), which greatly reduces the memory requirements (an illustrative 4-bit snippet is included below).
| dtype | Largest Layer or Residual Group | Total Size | Training using Adam |
|-------------------|---------------------------------|------------|----------------------|
| float32 | 490.94 MB | 14.43 GB | 57.72 GB |
| float16/bfloat16 | 245.47 MB | 7.21 GB | 28.86 GB |
| int8 | 122.73 MB | 3.61 GB | 14.43 GB |
| int4 | 61.37 MB | 1.8 GB | 7.21 GB |
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
##### In 8-bit precision (`int8`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details> |
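##### In 4-bit precision (`int4`)

The memory-requirements table above also lists a 4-bit footprint; the snippet below is an illustrative 4-bit variant of the examples above using `BitsAndBytesConfig` (see the linked 4-bit inference blog post for details).

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration, BitsAndBytesConfig

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16),
    device_map="auto",
)

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>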
microsoft/swinv2-tiny-patch4-window16-256 | microsoft | "2022-12-10T10:09:55Z" | 235,372 | 1 | transformers | [
"transformers",
"pytorch",
"swinv2",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2111.09883",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-06-14T06:17:52Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer v2 (tiny-sized model)
Swin Transformer v2 model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer v2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.
Swin Transformer v2 adds 3 main improvements: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images.
![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png)
[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swinv2) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window16-256")
model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-tiny-patch4-window16-256")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swinv2.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-09883,
author = {Ze Liu and
Han Hu and
Yutong Lin and
Zhuliang Yao and
Zhenda Xie and
Yixuan Wei and
Jia Ning and
Yue Cao and
Zheng Zhang and
Li Dong and
Furu Wei and
Baining Guo},
title = {Swin Transformer {V2:} Scaling Up Capacity and Resolution},
journal = {CoRR},
volume = {abs/2111.09883},
year = {2021},
url = {https://arxiv.org/abs/2111.09883},
eprinttype = {arXiv},
eprint = {2111.09883},
timestamp = {Thu, 02 Dec 2021 15:54:22 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09883.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
facebook/dino-vitb16 | facebook | "2023-05-22T07:04:00Z" | 233,479 | 100 | transformers | [
"transformers",
"pytorch",
"tf",
"vit",
"image-feature-extraction",
"dino",
"vision",
"dataset:imagenet-1k",
"arxiv:2104.14294",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-feature-extraction | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- dino
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (base-sized model, patch size 16) trained using DINO
Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294) by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in [this repository](https://github.com/facebookresearch/dino).
Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not include any fine-tuned heads.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
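As a concrete illustration, a linear probe on the [CLS] token could be set up roughly as follows. This is a minimal sketch, not part of the original release; the number of classes and the random input are placeholders for your own labeled data and image processor output.

```python
import torch
from transformers import ViTModel

backbone = ViTModel.from_pretrained('facebook/dino-vitb16')
num_classes = 10  # placeholder: depends on your labeled dataset

# linear classification head on top of the frozen DINO features
classifier = torch.nn.Linear(backbone.config.hidden_size, num_classes)

pixel_values = torch.randn(1, 3, 224, 224)  # stand-in for a batch from ViTImageProcessor
with torch.no_grad():
    cls_token = backbone(pixel_values).last_hidden_state[:, 0]  # [CLS] representation
logits = classifier(cls_token)  # train this head with your labels
```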
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import ViTImageProcessor, ViTModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = ViTImageProcessor.from_pretrained('facebook/dino-vitb16')
model = ViTModel.from_pretrained('facebook/dino-vitb16')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2104-14294,
author = {Mathilde Caron and
Hugo Touvron and
Ishan Misra and
Herv{\'{e}} J{\'{e}}gou and
Julien Mairal and
Piotr Bojanowski and
Armand Joulin},
title = {Emerging Properties in Self-Supervised Vision Transformers},
journal = {CoRR},
volume = {abs/2104.14294},
year = {2021},
url = {https://arxiv.org/abs/2104.14294},
archivePrefix = {arXiv},
eprint = {2104.14294},
timestamp = {Tue, 04 May 2021 15:12:43 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-14294.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
robinsyihab/Sidrap-7B-v2-GPTQ-4bit | robinsyihab | "2023-11-29T17:17:10Z" | 231,780 | 2 | transformers | [
"transformers",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-29T16:07:19Z" | ---
license: apache-2.0
---
# Sidrap-7B-v2-GPTQ-4bit
Sidrap-7B-v2-GPTQ-4bit is a 4-bit quantized version of Sidrap-7B-v2, one of the best open LLMs for Bahasa Indonesia available today. The model has been quantized using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) to obtain a smaller model that can run in lower-resource environments with faster inference. The quantization uses a random subset of the original training data to "calibrate" the weights, resulting in an optimally compact model with minimal loss in accuracy.
## Usage
The fastest way to use this model is via [AutoGPTQ-API](https://github.com/anvie/gptq-api):
```bash
python -m gptqapi.server robinsyihab/Sidrap-7B-v2-GPTQ-4bit
```
Or use AutoGPTQ directly:
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM
model_id = "robinsyihab/Sidrap-7B-v2-GPTQ-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_id,
device="cuda:0",
inject_fused_mlp=True,
inject_fused_attention=True,
trust_remote_code=True)
chat = pipeline("text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto")
prompt = ("<s>[INST] <<SYS>>\nAnda adalah asisten yang suka membantu, penuh hormat, dan jujur. Selalu jawab semaksimal mungkin, sambil tetap aman. Jawaban Anda tidak boleh berisi konten berbahaya, tidak etis, rasis, seksis, beracun, atau ilegal. Harap pastikan bahwa tanggapan Anda tidak memihak secara sosial dan bersifat positif.\n\
Jika sebuah pertanyaan tidak masuk akal, atau tidak koheren secara faktual, jelaskan alasannya daripada menjawab sesuatu yang tidak benar. Jika Anda tidak mengetahui jawaban atas sebuah pertanyaan, mohon jangan membagikan informasi palsu.\n"
"<</SYS>>\n\n"
"Siapa penulis kitab alfiyah? [/INST]\n"
)
max_size = 2048  # maximum generated sequence length; illustrative value, adjust as needed
sequences = chat(prompt, num_beams=2, max_length=max_size, top_k=10, num_return_sequences=1)
print(sequences[0]['generated_text'])
```
## License
Sidrap-7B-v2-GPTQ is licensed under the Apache 2.0 License.
## Author
Robin Syihab ([@anvie](https://x.com/anvie))
|
vikp/surya_det2 | vikp | "2024-02-29T21:05:22Z" | 229,433 | 2 | transformers | [
"transformers",
"safetensors",
"segformer",
"endpoints_compatible",
"region:us"
] | null | "2024-02-29T20:54:29Z" | Entry not found |
openai/whisper-tiny.en | openai | "2024-01-22T17:55:12Z" | 228,952 | 83 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"arxiv:2212.04356",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-09-26T06:57:49Z" | ---
language:
- en
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-tiny.en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 8.4372112320138
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 14.857607503498355
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
This checkpoint is an *English-only* model, meaning it can be used for English speech recognition. Multilingual speech
recognition or speech translation is possible through use of a multilingual checkpoint.
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
## Transcription
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|notimestamps|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
## Evaluation
This code snippet shows how to evaluate Whisper tiny.en on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
5.655609406528749
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-tiny.en",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
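At its core, the preparation step maps each labelled example to log-Mel `input_features` and tokenised `labels`, roughly as sketched below; the `audio` and `text` column names are placeholder assumptions, and the blog post covers the full recipe (data collator, `Seq2SeqTrainer`, evaluation).

```python
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")

def prepare_example(batch):
    audio = batch["audio"]  # assumed dataset column holding 16 kHz audio
    batch["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["text"]).input_ids  # assumed transcript column
    return batch
```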
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
openai-community/gpt2-medium | openai-community | "2024-02-19T12:39:04Z" | 228,775 | 132 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"en",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-03-02T23:29:04Z" | ---
language: en
license: mit
---
# GPT-2 Medium
## Model Details
**Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"},
{'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"},
{'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = TFGPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a security guard in a military'},
{'generated_text': 'The man worked as a salesman in Mexico and eventually'},
{'generated_text': 'The man worked as a supervisor at the department for'},
{'generated_text': 'The man worked as a cleaner for the same corporation'},
{'generated_text': 'The man worked as a barman and was involved'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a social worker in a children'},
{'generated_text': 'The woman worked as a marketing manager, and her'},
{'generated_text': 'The woman worked as a customer service agent in a'},
{'generated_text': 'The woman worked as a cleaner for the same corporation'},
{'generated_text': 'The woman worked as a barista and was involved'}]
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
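Both properties are easy to inspect directly; the short snippet below (illustrative, not from the original card) prints the vocabulary size and builds the shift-by-one targets used for the language-modeling objective.

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
print(tokenizer.vocab_size)  # 50257

ids = tokenizer("Hello, I'm a language model,")["input_ids"]
inputs, targets = ids[:-1], ids[1:]  # each position is trained to predict the next token
print(inputs)
print(targets)
```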
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string `<UNK>` which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
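In the same spirit, the per-token negative log-likelihood (and hence a perplexity) can be computed with the causal-LM loss, as in the illustrative sketch below; the paper's reported numbers additionally rely on invertible de-tokenizers and dataset-specific conventions, so this is not a reproduction of them.

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2LMHeadModel.from_pretrained('gpt2-medium')

enc = tokenizer("Replace me by any text you'd like.", return_tensors='pt')
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss  # mean negative log-likelihood per token
print(torch.exp(loss).item())  # perplexity of this snippet under GPT-2 Medium
```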
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team. |
facebook/convnextv2-tiny-1k-224 | facebook | "2023-11-28T09:42:05Z" | 227,688 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"convnextv2",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2301.00808",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-02-17T14:03:53Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXt V2 (tiny-sized model)
ConvNeXt V2 model pretrained using the FCMAE framework and fine-tuned on the ImageNet-1K dataset at resolution 224x224. It was introduced in the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Woo et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt-V2).
Disclaimer: The team releasing ConvNeXT V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXt V2 is a pure convolutional model (ConvNet) that introduces a fully convolutional masked autoencoder framework (FCMAE) and a new Global Response Normalization (GRN) layer to ConvNeXt. ConvNeXt V2 significantly improves the performance of pure ConvNets on various recognition benchmarks.
![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnextv2_architecture.png)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnextv2) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
preprocessor = AutoImageProcessor.from_pretrained("facebook/convnextv2-tiny-1k-224")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-tiny-1k-224")
inputs = preprocessor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnextv2).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2301-00808,
author = {Sanghyun Woo and
Shoubhik Debnath and
Ronghang Hu and
Xinlei Chen and
Zhuang Liu and
In So Kweon and
Saining Xie},
title = {ConvNeXt {V2:} Co-designing and Scaling ConvNets with Masked Autoencoders},
journal = {CoRR},
volume = {abs/2301.00808},
year = {2023},
url = {https://doi.org/10.48550/arXiv.2301.00808},
doi = {10.48550/arXiv.2301.00808},
eprinttype = {arXiv},
eprint = {2301.00808},
timestamp = {Tue, 10 Jan 2023 15:10:12 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2301-00808.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
NousResearch/Llama-2-7b-chat-hf | NousResearch | "2024-06-03T19:23:12Z" | 227,541 | 156 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-07-18T19:45:53Z" | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
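As a rough single-turn illustration (not Meta's reference implementation), the sketch below assembles a prompt in this format and generates a reply with 🤗 Transformers; the `build_prompt` helper and the example messages are illustrative assumptions.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "NousResearch/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def build_prompt(system_prompt: str, user_message: str) -> str:
    # [INST] ... [/INST] wraps the user turn; <<SYS>> ... <</SYS>> wraps the system prompt.
    # The tokenizer adds the BOS token automatically.
    return f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message.strip()} [/INST]"

prompt = build_prompt("You are a helpful assistant.", "What is the capital of France?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For multi-turn conversations and the exact whitespace handling, the `chat_completion` reference linked above remains the authoritative source.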
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
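As a back-of-the-envelope check of the 7B row above (a sketch only: the card does not state the PUE or grid carbon intensity that was assumed), the reported figures are consistent with roughly 0.42 kg CO<sub>2</sub>eq per kWh:
```python
gpu_hours = 184_320                      # Llama 2 7B, from the table above
peak_power_kw = 0.4                      # 400 W per GPU
energy_kwh = gpu_hours * peak_power_kw   # ~73,728 kWh
implied_kg_per_kwh = 31.22 * 1_000 / energy_kwh  # ~0.42 kg CO2eq per kWh
print(f"{energy_kwh:.0f} kWh, {implied_kg_per_kwh:.2f} kg CO2eq/kWh implied")
```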
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
benjamin/wtp-bert-mini | benjamin | "2023-07-19T11:39:05Z" | 226,621 | 4 | transformers | [
"transformers",
"pytorch",
"onnx",
"bert-char",
"token-classification",
"multilingual",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hu",
"hy",
"id",
"ig",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"si",
"sk",
"sl",
"sq",
"sr",
"sv",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-05-06T14:46:19Z" | ---
license: mit
language:
- multilingual
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- pa
- pl
- ps
- pt
- ro
- ru
- si
- sk
- sl
- sq
- sr
- sv
- ta
- te
- tg
- th
- tr
- uk
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
---
# wtp-bert-mini
Model for [`wtpsplit`](https://github.com/bminixhofer/wtpsplit).
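A minimal usage sketch follows; it assumes the `WtP` interface from the `wtpsplit` package (`pip install wtpsplit`), so check the wtpsplit repository linked above for the authoritative API.
```python
from wtpsplit import WtP

# Load the wtp-bert-mini checkpoint (downloaded from the Hugging Face Hub).
wtp = WtP("wtp-bert-mini")

# Split raw text into sentences, even when punctuation is missing or unreliable.
print(wtp.split("This is a test This is another sentence"))
``` |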
HuggingFaceH4/zephyr-7b-beta | HuggingFaceH4 | "2024-02-29T11:00:39Z" | 226,294 | 1,507 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2310.16944",
"base_model:mistralai/Mistral-7B-v0.1",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-26T11:25:49Z" | ---
tags:
- generated_from_trainer
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
base_model: mistralai/Mistral-7B-v0.1
widget:
- example_title: Pirate!
messages:
- role: system
content: You are a pirate chatbot who always responds with Arr!
- role: user
content: "There's a llama on my lawn, how can I get rid of him?"
output:
text: >-
Arr! 'Tis a puzzlin' matter, me hearty! A llama on yer lawn be a rare
sight, but I've got a plan that might help ye get rid of 'im. Ye'll need
to gather some carrots and hay, and then lure the llama away with the
promise of a tasty treat. Once he's gone, ye can clean up yer lawn and
enjoy the peace and quiet once again. But beware, me hearty, for there
may be more llamas where that one came from! Arr!
pipeline_tag: text-generation
model-index:
- name: zephyr-7b-beta
results:
# AI2 Reasoning Challenge (25-Shot)
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
name: normalized accuracy
value: 62.03071672354948
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
# HellaSwag (10-shot)
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
name: normalized accuracy
value: 84.35570603465445
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
# DROP (3-shot)
- task:
type: text-generation
name: Text Generation
dataset:
name: Drop (3-Shot)
type: drop
split: validation
args:
num_few_shot: 3
metrics:
- type: f1
name: f1 score
value: 9.662437080536909
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
# TruthfulQA (0-shot)
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.44916942762855
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
# GSM8k (5-shot)
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
name: accuracy
value: 12.736921910538287
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
# MMLU (5-Shot)
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
name: accuracy
value: 61.07
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
# Winogrande (5-shot)
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
name: accuracy
value: 77.74269928966061
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
# AlpacaEval (taken from model card)
- task:
type: text-generation
name: Text Generation
dataset:
name: AlpacaEval
type: tatsu-lab/alpaca_eval
metrics:
- type: unknown
name: win rate
value: 0.9060
source:
url: https://tatsu-lab.github.io/alpaca_eval/
# MT-Bench (taken from model card)
- task:
type: text-generation
name: Text Generation
dataset:
name: MT-Bench
type: unknown
metrics:
- type: unknown
name: score
value: 7.34
source:
url: https://huggingface.co/spaces/lmsys/mt-bench
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/resolve/main/thumbnail.png" alt="Zephyr Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for Zephyr 7B β
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-β is the second model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means that the model is likely to generate problematic text when prompted to do so. You can find more details in the [technical report](https://arxiv.org/abs/2310.16944).
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat
- **Chatbot Arena:** Evaluate Zephyr 7B against 10+ LLMs in the LMSYS arena: http://arena.lmsys.org
## Performance
At the time of release, Zephyr-7B-β is the highest ranked 7B chat model on the [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks:
| Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
|-------------|-----|----|---------------|--------------|
| StableLM-Tuned-α | 7B| dSFT |2.75| -|
| MPT-Chat | 7B |dSFT |5.42| -|
| Xwin-LMv0.1 | 7B| dPPO| 6.19| 87.83|
| Mistral-Instructv0.1 | 7B| - | 6.84 |-|
| Zephyr-7b-α |7B| dDPO| 6.88| -|
| **Zephyr-7b-β** 🪁 | **7B** | **dDPO** | **7.34** | **90.60** |
| Falcon-Instruct | 40B |dSFT |5.17 |45.71|
| Guanaco | 65B | SFT |6.41| 71.80|
| Llama2-Chat | 70B |RLHF |6.86| 92.66|
| Vicuna v1.3 | 33B |dSFT |7.12 |88.99|
| WizardLM v1.0 | 70B |dSFT |7.71 |-|
| Xwin-LM v0.1 | 70B |dPPO |- |95.57|
| GPT-3.5-turbo | - |RLHF |7.94 |89.37|
| Claude 2 | - |RLHF |8.06| 91.36|
| GPT-4 | -| RLHF |8.99| 95.28|
In particular, on several categories of MT-Bench, Zephyr-7B-β has strong performance compared to larger open models like Llama2-Chat-70B:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6200d0a443eb0913fa2df7cc/raxvt5ma16d7T23my34WC.png)
However, on more complex tasks like coding and mathematics, Zephyr-7B-β lags behind proprietary models and more research is needed to close the gap.
## Intended uses & limitations
The model was initially fine-tuned on a filtered and preprocessed version of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) to test its capabilities.
You can find the datasets used for training Zephyr-7B-β [here](https://huggingface.co/collections/HuggingFaceH4/zephyr-7b-6538c6d6d5ddd1cbb1744a66)
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
It is also unknown what the size and composition of the corpus used to train the base model (`mistralai/Mistral-7B-v0.1`) were; however, it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
## Training and evaluation data
During DPO training, this model achieves the following results on the evaluation set:
- Loss: 0.7496
- Rewards/chosen: -4.5221
- Rewards/rejected: -8.3184
- Rewards/accuracies: 0.7812
- Rewards/margins: 3.7963
- Logps/rejected: -340.1541
- Logps/chosen: -299.4561
- Logits/rejected: -2.3081
- Logits/chosen: -2.3531
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
The table below shows the full set of DPO training metrics:
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6284 | 0.05 | 100 | 0.6098 | 0.0425 | -0.1872 | 0.7344 | 0.2297 | -258.8416 | -253.8099 | -2.7976 | -2.8234 |
| 0.4908 | 0.1 | 200 | 0.5426 | -0.0279 | -0.6842 | 0.75 | 0.6563 | -263.8124 | -254.5145 | -2.7719 | -2.7960 |
| 0.5264 | 0.15 | 300 | 0.5324 | 0.0414 | -0.9793 | 0.7656 | 1.0207 | -266.7627 | -253.8209 | -2.7892 | -2.8122 |
| 0.5536 | 0.21 | 400 | 0.4957 | -0.0185 | -1.5276 | 0.7969 | 1.5091 | -272.2460 | -254.4203 | -2.8542 | -2.8764 |
| 0.5362 | 0.26 | 500 | 0.5031 | -0.2630 | -1.5917 | 0.7812 | 1.3287 | -272.8869 | -256.8653 | -2.8702 | -2.8958 |
| 0.5966 | 0.31 | 600 | 0.5963 | -0.2993 | -1.6491 | 0.7812 | 1.3499 | -273.4614 | -257.2279 | -2.8778 | -2.8986 |
| 0.5014 | 0.36 | 700 | 0.5382 | -0.2859 | -1.4750 | 0.75 | 1.1891 | -271.7204 | -257.0942 | -2.7659 | -2.7869 |
| 0.5334 | 0.41 | 800 | 0.5677 | -0.4289 | -1.8968 | 0.7969 | 1.4679 | -275.9378 | -258.5242 | -2.7053 | -2.7265 |
| 0.5251 | 0.46 | 900 | 0.5772 | -0.2116 | -1.3107 | 0.7344 | 1.0991 | -270.0768 | -256.3507 | -2.8463 | -2.8662 |
| 0.5205 | 0.52 | 1000 | 0.5262 | -0.3792 | -1.8585 | 0.7188 | 1.4793 | -275.5552 | -258.0276 | -2.7893 | -2.7979 |
| 0.5094 | 0.57 | 1100 | 0.5433 | -0.6279 | -1.9368 | 0.7969 | 1.3089 | -276.3377 | -260.5136 | -2.7453 | -2.7536 |
| 0.5837 | 0.62 | 1200 | 0.5349 | -0.3780 | -1.9584 | 0.7656 | 1.5804 | -276.5542 | -258.0154 | -2.7643 | -2.7756 |
| 0.5214 | 0.67 | 1300 | 0.5732 | -1.0055 | -2.2306 | 0.7656 | 1.2251 | -279.2761 | -264.2903 | -2.6986 | -2.7113 |
| 0.6914 | 0.72 | 1400 | 0.5137 | -0.6912 | -2.1775 | 0.7969 | 1.4863 | -278.7448 | -261.1467 | -2.7166 | -2.7275 |
| 0.4655 | 0.77 | 1500 | 0.5090 | -0.7987 | -2.2930 | 0.7031 | 1.4943 | -279.8999 | -262.2220 | -2.6651 | -2.6838 |
| 0.5731 | 0.83 | 1600 | 0.5312 | -0.8253 | -2.3520 | 0.7812 | 1.5268 | -280.4902 | -262.4876 | -2.6543 | -2.6728 |
| 0.5233 | 0.88 | 1700 | 0.5206 | -0.4573 | -2.0951 | 0.7812 | 1.6377 | -277.9205 | -258.8084 | -2.6870 | -2.7097 |
| 0.5593 | 0.93 | 1800 | 0.5231 | -0.5508 | -2.2000 | 0.7969 | 1.6492 | -278.9703 | -259.7433 | -2.6221 | -2.6519 |
| 0.4967 | 0.98 | 1900 | 0.5290 | -0.5340 | -1.9570 | 0.8281 | 1.4230 | -276.5395 | -259.5749 | -2.6564 | -2.6878 |
| 0.0921 | 1.03 | 2000 | 0.5368 | -1.1376 | -3.1615 | 0.7812 | 2.0239 | -288.5854 | -265.6111 | -2.6040 | -2.6345 |
| 0.0733 | 1.08 | 2100 | 0.5453 | -1.1045 | -3.4451 | 0.7656 | 2.3406 | -291.4208 | -265.2799 | -2.6289 | -2.6595 |
| 0.0972 | 1.14 | 2200 | 0.5571 | -1.6915 | -3.9823 | 0.8125 | 2.2908 | -296.7934 | -271.1505 | -2.6471 | -2.6709 |
| 0.1058 | 1.19 | 2300 | 0.5789 | -1.0621 | -3.8941 | 0.7969 | 2.8319 | -295.9106 | -264.8563 | -2.5527 | -2.5798 |
| 0.2423 | 1.24 | 2400 | 0.5455 | -1.1963 | -3.5590 | 0.7812 | 2.3627 | -292.5599 | -266.1981 | -2.5414 | -2.5784 |
| 0.1177 | 1.29 | 2500 | 0.5889 | -1.8141 | -4.3942 | 0.7969 | 2.5801 | -300.9120 | -272.3761 | -2.4802 | -2.5189 |
| 0.1213 | 1.34 | 2600 | 0.5683 | -1.4608 | -3.8420 | 0.8125 | 2.3812 | -295.3901 | -268.8436 | -2.4774 | -2.5207 |
| 0.0889 | 1.39 | 2700 | 0.5890 | -1.6007 | -3.7337 | 0.7812 | 2.1330 | -294.3068 | -270.2423 | -2.4123 | -2.4522 |
| 0.0995 | 1.45 | 2800 | 0.6073 | -1.5519 | -3.8362 | 0.8281 | 2.2843 | -295.3315 | -269.7538 | -2.4685 | -2.5050 |
| 0.1145 | 1.5 | 2900 | 0.5790 | -1.7939 | -4.2876 | 0.8438 | 2.4937 | -299.8461 | -272.1744 | -2.4272 | -2.4674 |
| 0.0644 | 1.55 | 3000 | 0.5735 | -1.7285 | -4.2051 | 0.8125 | 2.4766 | -299.0209 | -271.5201 | -2.4193 | -2.4574 |
| 0.0798 | 1.6 | 3100 | 0.5537 | -1.7226 | -4.2850 | 0.8438 | 2.5624 | -299.8200 | -271.4610 | -2.5367 | -2.5696 |
| 0.1013 | 1.65 | 3200 | 0.5575 | -1.5715 | -3.9813 | 0.875 | 2.4098 | -296.7825 | -269.9498 | -2.4926 | -2.5267 |
| 0.1254 | 1.7 | 3300 | 0.5905 | -1.6412 | -4.4703 | 0.8594 | 2.8291 | -301.6730 | -270.6473 | -2.5017 | -2.5340 |
| 0.085 | 1.76 | 3400 | 0.6133 | -1.9159 | -4.6760 | 0.8438 | 2.7601 | -303.7296 | -273.3941 | -2.4614 | -2.4960 |
| 0.065 | 1.81 | 3500 | 0.6074 | -1.8237 | -4.3525 | 0.8594 | 2.5288 | -300.4951 | -272.4724 | -2.4597 | -2.5004 |
| 0.0755 | 1.86 | 3600 | 0.5836 | -1.9252 | -4.4005 | 0.8125 | 2.4753 | -300.9748 | -273.4872 | -2.4327 | -2.4716 |
| 0.0746 | 1.91 | 3700 | 0.5789 | -1.9280 | -4.4906 | 0.8125 | 2.5626 | -301.8762 | -273.5149 | -2.4686 | -2.5115 |
| 0.1348 | 1.96 | 3800 | 0.6015 | -1.8658 | -4.2428 | 0.8281 | 2.3769 | -299.3976 | -272.8936 | -2.4943 | -2.5393 |
| 0.0217 | 2.01 | 3900 | 0.6122 | -2.3335 | -4.9229 | 0.8281 | 2.5894 | -306.1988 | -277.5699 | -2.4841 | -2.5272 |
| 0.0219 | 2.07 | 4000 | 0.6522 | -2.9890 | -6.0164 | 0.8281 | 3.0274 | -317.1334 | -284.1248 | -2.4105 | -2.4545 |
| 0.0119 | 2.12 | 4100 | 0.6922 | -3.4777 | -6.6749 | 0.7969 | 3.1972 | -323.7187 | -289.0121 | -2.4272 | -2.4699 |
| 0.0153 | 2.17 | 4200 | 0.6993 | -3.2406 | -6.6775 | 0.7969 | 3.4369 | -323.7453 | -286.6413 | -2.4047 | -2.4465 |
| 0.011 | 2.22 | 4300 | 0.7178 | -3.7991 | -7.4397 | 0.7656 | 3.6406 | -331.3667 | -292.2260 | -2.3843 | -2.4290 |
| 0.0072 | 2.27 | 4400 | 0.6840 | -3.3269 | -6.8021 | 0.8125 | 3.4752 | -324.9908 | -287.5042 | -2.4095 | -2.4536 |
| 0.0197 | 2.32 | 4500 | 0.7013 | -3.6890 | -7.3014 | 0.8125 | 3.6124 | -329.9841 | -291.1250 | -2.4118 | -2.4543 |
| 0.0182 | 2.37 | 4600 | 0.7476 | -3.8994 | -7.5366 | 0.8281 | 3.6372 | -332.3356 | -293.2291 | -2.4163 | -2.4565 |
| 0.0125 | 2.43 | 4700 | 0.7199 | -4.0560 | -7.5765 | 0.8438 | 3.5204 | -332.7345 | -294.7952 | -2.3699 | -2.4100 |
| 0.0082 | 2.48 | 4800 | 0.7048 | -3.6613 | -7.1356 | 0.875 | 3.4743 | -328.3255 | -290.8477 | -2.3925 | -2.4303 |
| 0.0118 | 2.53 | 4900 | 0.6976 | -3.7908 | -7.3152 | 0.8125 | 3.5244 | -330.1224 | -292.1431 | -2.3633 | -2.4047 |
| 0.0118 | 2.58 | 5000 | 0.7198 | -3.9049 | -7.5557 | 0.8281 | 3.6508 | -332.5271 | -293.2844 | -2.3764 | -2.4194 |
| 0.006 | 2.63 | 5100 | 0.7506 | -4.2118 | -7.9149 | 0.8125 | 3.7032 | -336.1194 | -296.3530 | -2.3407 | -2.3860 |
| 0.0143 | 2.68 | 5200 | 0.7408 | -4.2433 | -7.9802 | 0.8125 | 3.7369 | -336.7721 | -296.6682 | -2.3509 | -2.3946 |
| 0.0057 | 2.74 | 5300 | 0.7552 | -4.3392 | -8.0831 | 0.7969 | 3.7439 | -337.8013 | -297.6275 | -2.3388 | -2.3842 |
| 0.0138 | 2.79 | 5400 | 0.7404 | -4.2395 | -7.9762 | 0.8125 | 3.7367 | -336.7322 | -296.6304 | -2.3286 | -2.3737 |
| 0.0079 | 2.84 | 5500 | 0.7525 | -4.4466 | -8.2196 | 0.7812 | 3.7731 | -339.1662 | -298.7007 | -2.3200 | -2.3641 |
| 0.0077 | 2.89 | 5600 | 0.7520 | -4.5586 | -8.3485 | 0.7969 | 3.7899 | -340.4545 | -299.8206 | -2.3078 | -2.3517 |
| 0.0094 | 2.94 | 5700 | 0.7527 | -4.5542 | -8.3509 | 0.7812 | 3.7967 | -340.4790 | -299.7773 | -2.3062 | -2.3510 |
| 0.0054 | 2.99 | 5800 | 0.7520 | -4.5169 | -8.3079 | 0.7812 | 3.7911 | -340.0493 | -299.4038 | -2.3081 | -2.3530 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.14.0
## Citation
If you find Zephyr-7B-β useful in your work, please cite it with:
```
@misc{tunstall2023zephyr,
title={Zephyr: Direct Distillation of LM Alignment},
author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
year={2023},
eprint={2310.16944},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_HuggingFaceH4__zephyr-7b-beta)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 52.15 |
| ARC (25-shot) | 62.03 |
| HellaSwag (10-shot) | 84.36 |
| MMLU (5-shot) | 61.07 |
| TruthfulQA (0-shot) | 57.45 |
| Winogrande (5-shot) | 77.74 |
| GSM8K (5-shot) | 12.74 |
| DROP (3-shot) | 9.66 | |
MaziyarPanahi/WizardLM-70B-V1.0 | MaziyarPanahi | "2024-04-23T07:50:03Z" | 225,530 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"wizardlm",
"finetuned",
"en",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-20T19:40:39Z" | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- wizardlm
- llama
- finetuned
---
<!-- original-model-card start -->
# Original model card: WizardLM's WizardLM 70B V1.0
## WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions
<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## Unofficial Video Introductions
Thanks to our enthusiastic friends, their video introductions are lively and interesting.
1. [NEW WizardLM 70b 🔥 Giant Model...Insane Performance](https://www.youtube.com/watch?v=WdpiIXrO4_o)
2. [GET WizardLM NOW! 7B LLM KING That Can Beat ChatGPT! I'm IMPRESSED!](https://www.youtube.com/watch?v=SaJ8wyKMBds)
3. [WizardLM: Enhancing Large Language Models to Follow Complex Instructions](https://www.youtube.com/watch?v=I6sER-qivYk)
4. [WizardCoder AI Is The NEW ChatGPT's Coding TWIN!](https://www.youtube.com/watch?v=XjsyHrmd3Xo)
## News
- 🔥🔥🔥[2023/08/26] We released **WizardCoder-Python-34B-V1.0** , which achieves the **73.2 pass@1** and surpasses **GPT4 (2023/03/15)**, **ChatGPT-3.5**, and **Claude2** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). For more details, please refer to [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder).
- [2023/06/16] We released **WizardCoder-15B-V1.0** , which surpasses **Claude-Plus (+6.8)**, **Bard (+15.3)** and **InstructCodeT5+ (+22.3)** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). For more details, please refer to [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder).
| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
- 🔥 [08/11/2023] We release **WizardMath** Models.
- 🔥 Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on the GSM8K, including **ChatGPT 3.5**, **Claude Instant 1** and **PaLM 2 540B**.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM.
| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>|
<font size=4>
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup>|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>📃**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6 pass@1**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|
</font>
- 🔥🔥🔥 [08/09/2023] We released **WizardLM-70B-V1.0** model.
**Github Repo**: https://github.com/nlpxucan/WizardLM
**Twitter**: https://twitter.com/WizardLM_AI/status/1689270108747976704
**Discord**: https://discord.gg/bpmeZD7V
❗<b>Note for model system prompts usage:</b>
<b>WizardLM</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
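The sketch below is one way to apply this format with 🤗 Transformers; it is illustrative only (the prompt assembly and generation settings are assumptions), and the demo script linked in the next section is the reference implementation. Note that a 70B checkpoint needs multiple large GPUs or quantization to load.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MaziyarPanahi/WizardLM-70B-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the `accelerate` package.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
prompt = f"{system} USER: Hello, who are you? ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```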
## Inference WizardLM Demo Script
We provide the inference WizardLM demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
Please cite the paper if you use the data or code from WizardLM.
```
@article{xu2023wizardlm,
title={Wizardlm: Empowering large language models to follow complex instructions},
author={Xu, Can and Sun, Qingfeng and Zheng, Kai and Geng, Xiubo and Zhao, Pu and Feng, Jiazhan and Tao, Chongyang and Jiang, Daxin},
journal={arXiv preprint arXiv:2304.12244},
year={2023}
}
```
❗<b>To address a common concern about the dataset:</b>
Recently, there have been clear changes in the open-source policy and regulations of our overall organization's code, data, and models.
Despite this, we have worked hard to open the model weights first, but the data requires stricter auditing and is still under review by our legal team.
Our researchers have no authority to publicly release them without authorization.
Thank you for your understanding.
<!-- original-model-card end --> |
Alibaba-NLP/gte-base-en-v1.5 | Alibaba-NLP | "2024-04-26T13:53:41Z" | 224,876 | 24 | transformers | [
"transformers",
"onnx",
"safetensors",
"new",
"feature-extraction",
"sentence-transformers",
"gte",
"mteb",
"transformers.js",
"sentence-similarity",
"custom_code",
"en",
"arxiv:2308.03281",
"license:apache-2.0",
"model-index",
"region:us"
] | sentence-similarity | "2024-04-20T02:53:42Z" | ---
library_name: transformers
tags:
- sentence-transformers
- gte
- mteb
- transformers.js
- sentence-similarity
license: apache-2.0
language:
- en
model-index:
- name: gte-base-en-v1.5
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.7910447761194
- type: ap
value: 37.053785713650626
- type: f1
value: 68.51101510998551
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.016875
- type: ap
value: 89.17750268426342
- type: f1
value: 92.9970977240524
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.312000000000005
- type: f1
value: 52.98175784163017
- task:
type: Retrieval
dataset:
type: mteb/arguana
name: MTEB ArguAna
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 38.193
- type: map_at_10
value: 54.848
- type: map_at_100
value: 55.388000000000005
- type: map_at_1000
value: 55.388999999999996
- type: map_at_3
value: 50.427
- type: map_at_5
value: 53.105000000000004
- type: mrr_at_1
value: 39.047
- type: mrr_at_10
value: 55.153
- type: mrr_at_100
value: 55.686
- type: mrr_at_1000
value: 55.688
- type: mrr_at_3
value: 50.676
- type: mrr_at_5
value: 53.417
- type: ndcg_at_1
value: 38.193
- type: ndcg_at_10
value: 63.486
- type: ndcg_at_100
value: 65.58
- type: ndcg_at_1000
value: 65.61
- type: ndcg_at_3
value: 54.494
- type: ndcg_at_5
value: 59.339
- type: precision_at_1
value: 38.193
- type: precision_at_10
value: 9.075
- type: precision_at_100
value: 0.9939999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.096
- type: precision_at_5
value: 15.619
- type: recall_at_1
value: 38.193
- type: recall_at_10
value: 90.754
- type: recall_at_100
value: 99.431
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 66.28699999999999
- type: recall_at_5
value: 78.094
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.508221208908964
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 42.04668382560096
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 61.828759903716815
- type: mrr
value: 74.37343358395991
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.03673698773017
- type: cos_sim_spearman
value: 83.6470866785058
- type: euclidean_pearson
value: 82.64048673096565
- type: euclidean_spearman
value: 83.63142367101115
- type: manhattan_pearson
value: 82.71493099760228
- type: manhattan_spearman
value: 83.60491704294326
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.73376623376623
- type: f1
value: 86.70294049278262
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 40.31923804167062
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 37.552547125348454
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack-android
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 30.567
- type: map_at_10
value: 41.269
- type: map_at_100
value: 42.689
- type: map_at_1000
value: 42.84
- type: map_at_3
value: 37.567
- type: map_at_5
value: 39.706
- type: mrr_at_1
value: 37.053000000000004
- type: mrr_at_10
value: 46.900999999999996
- type: mrr_at_100
value: 47.662
- type: mrr_at_1000
value: 47.713
- type: mrr_at_3
value: 43.801
- type: mrr_at_5
value: 45.689
- type: ndcg_at_1
value: 37.053000000000004
- type: ndcg_at_10
value: 47.73
- type: ndcg_at_100
value: 53.128
- type: ndcg_at_1000
value: 55.300000000000004
- type: ndcg_at_3
value: 42.046
- type: ndcg_at_5
value: 44.782
- type: precision_at_1
value: 37.053000000000004
- type: precision_at_10
value: 9.142
- type: precision_at_100
value: 1.485
- type: precision_at_1000
value: 0.197
- type: precision_at_3
value: 20.076
- type: precision_at_5
value: 14.535
- type: recall_at_1
value: 30.567
- type: recall_at_10
value: 60.602999999999994
- type: recall_at_100
value: 83.22800000000001
- type: recall_at_1000
value: 96.696
- type: recall_at_3
value: 44.336999999999996
- type: recall_at_5
value: 51.949
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack-english
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 28.538000000000004
- type: map_at_10
value: 38.757999999999996
- type: map_at_100
value: 40.129
- type: map_at_1000
value: 40.262
- type: map_at_3
value: 35.866
- type: map_at_5
value: 37.417
- type: mrr_at_1
value: 36.051
- type: mrr_at_10
value: 44.868
- type: mrr_at_100
value: 45.568999999999996
- type: mrr_at_1000
value: 45.615
- type: mrr_at_3
value: 42.558
- type: mrr_at_5
value: 43.883
- type: ndcg_at_1
value: 36.051
- type: ndcg_at_10
value: 44.584
- type: ndcg_at_100
value: 49.356
- type: ndcg_at_1000
value: 51.39
- type: ndcg_at_3
value: 40.389
- type: ndcg_at_5
value: 42.14
- type: precision_at_1
value: 36.051
- type: precision_at_10
value: 8.446
- type: precision_at_100
value: 1.411
- type: precision_at_1000
value: 0.19
- type: precision_at_3
value: 19.639
- type: precision_at_5
value: 13.796
- type: recall_at_1
value: 28.538000000000004
- type: recall_at_10
value: 54.99000000000001
- type: recall_at_100
value: 75.098
- type: recall_at_1000
value: 87.848
- type: recall_at_3
value: 42.236000000000004
- type: recall_at_5
value: 47.377
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack-gaming
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 37.188
- type: map_at_10
value: 50.861000000000004
- type: map_at_100
value: 51.917
- type: map_at_1000
value: 51.964999999999996
- type: map_at_3
value: 47.144000000000005
- type: map_at_5
value: 49.417
- type: mrr_at_1
value: 42.571
- type: mrr_at_10
value: 54.086999999999996
- type: mrr_at_100
value: 54.739000000000004
- type: mrr_at_1000
value: 54.762
- type: mrr_at_3
value: 51.285000000000004
- type: mrr_at_5
value: 53.0
- type: ndcg_at_1
value: 42.571
- type: ndcg_at_10
value: 57.282
- type: ndcg_at_100
value: 61.477000000000004
- type: ndcg_at_1000
value: 62.426
- type: ndcg_at_3
value: 51.0
- type: ndcg_at_5
value: 54.346000000000004
- type: precision_at_1
value: 42.571
- type: precision_at_10
value: 9.467
- type: precision_at_100
value: 1.2550000000000001
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 23.114
- type: precision_at_5
value: 16.250999999999998
- type: recall_at_1
value: 37.188
- type: recall_at_10
value: 73.068
- type: recall_at_100
value: 91.203
- type: recall_at_1000
value: 97.916
- type: recall_at_3
value: 56.552
- type: recall_at_5
value: 64.567
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack-gis
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 25.041000000000004
- type: map_at_10
value: 33.86
- type: map_at_100
value: 34.988
- type: map_at_1000
value: 35.064
- type: map_at_3
value: 31.049
- type: map_at_5
value: 32.845
- type: mrr_at_1
value: 26.893
- type: mrr_at_10
value: 35.594
- type: mrr_at_100
value: 36.617
- type: mrr_at_1000
value: 36.671
- type: mrr_at_3
value: 33.051
- type: mrr_at_5
value: 34.61
- type: ndcg_at_1
value: 26.893
- type: ndcg_at_10
value: 38.674
- type: ndcg_at_100
value: 44.178
- type: ndcg_at_1000
value: 46.089999999999996
- type: ndcg_at_3
value: 33.485
- type: ndcg_at_5
value: 36.402
- type: precision_at_1
value: 26.893
- type: precision_at_10
value: 5.989
- type: precision_at_100
value: 0.918
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 14.2
- type: precision_at_5
value: 10.26
- type: recall_at_1
value: 25.041000000000004
- type: recall_at_10
value: 51.666000000000004
- type: recall_at_100
value: 76.896
- type: recall_at_1000
value: 91.243
- type: recall_at_3
value: 38.035999999999994
- type: recall_at_5
value: 44.999
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack-mathematica
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 15.909999999999998
- type: map_at_10
value: 23.901
- type: map_at_100
value: 25.165
- type: map_at_1000
value: 25.291000000000004
- type: map_at_3
value: 21.356
- type: map_at_5
value: 22.816
- type: mrr_at_1
value: 20.025000000000002
- type: mrr_at_10
value: 28.382
- type: mrr_at_100
value: 29.465000000000003
- type: mrr_at_1000
value: 29.535
- type: mrr_at_3
value: 25.933
- type: mrr_at_5
value: 27.332
- type: ndcg_at_1
value: 20.025000000000002
- type: ndcg_at_10
value: 29.099000000000004
- type: ndcg_at_100
value: 35.127
- type: ndcg_at_1000
value: 38.096000000000004
- type: ndcg_at_3
value: 24.464
- type: ndcg_at_5
value: 26.709
- type: precision_at_1
value: 20.025000000000002
- type: precision_at_10
value: 5.398
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 11.774
- type: precision_at_5
value: 8.632
- type: recall_at_1
value: 15.909999999999998
- type: recall_at_10
value: 40.672000000000004
- type: recall_at_100
value: 66.855
- type: recall_at_1000
value: 87.922
- type: recall_at_3
value: 28.069
- type: recall_at_5
value: 33.812
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack-physics
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 30.175
- type: map_at_10
value: 41.36
- type: map_at_100
value: 42.701
- type: map_at_1000
value: 42.817
- type: map_at_3
value: 37.931
- type: map_at_5
value: 39.943
- type: mrr_at_1
value: 35.611
- type: mrr_at_10
value: 46.346
- type: mrr_at_100
value: 47.160000000000004
- type: mrr_at_1000
value: 47.203
- type: mrr_at_3
value: 43.712
- type: mrr_at_5
value: 45.367000000000004
- type: ndcg_at_1
value: 35.611
- type: ndcg_at_10
value: 47.532000000000004
- type: ndcg_at_100
value: 53.003
- type: ndcg_at_1000
value: 55.007
- type: ndcg_at_3
value: 42.043
- type: ndcg_at_5
value: 44.86
- type: precision_at_1
value: 35.611
- type: precision_at_10
value: 8.624
- type: precision_at_100
value: 1.332
- type: precision_at_1000
value: 0.169
- type: precision_at_3
value: 20.083000000000002
- type: precision_at_5
value: 14.437
- type: recall_at_1
value: 30.175
- type: recall_at_10
value: 60.5
- type: recall_at_100
value: 83.399
- type: recall_at_1000
value: 96.255
- type: recall_at_3
value: 45.448
- type: recall_at_5
value: 52.432
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack-programmers
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 22.467000000000002
- type: map_at_10
value: 33.812999999999995
- type: map_at_100
value: 35.248000000000005
- type: map_at_1000
value: 35.359
- type: map_at_3
value: 30.316
- type: map_at_5
value: 32.233000000000004
- type: mrr_at_1
value: 28.310999999999996
- type: mrr_at_10
value: 38.979
- type: mrr_at_100
value: 39.937
- type: mrr_at_1000
value: 39.989999999999995
- type: mrr_at_3
value: 36.244
- type: mrr_at_5
value: 37.871
- type: ndcg_at_1
value: 28.310999999999996
- type: ndcg_at_10
value: 40.282000000000004
- type: ndcg_at_100
value: 46.22
- type: ndcg_at_1000
value: 48.507
- type: ndcg_at_3
value: 34.596
- type: ndcg_at_5
value: 37.267
- type: precision_at_1
value: 28.310999999999996
- type: precision_at_10
value: 7.831
- type: precision_at_100
value: 1.257
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 17.275
- type: precision_at_5
value: 12.556999999999999
- type: recall_at_1
value: 22.467000000000002
- type: recall_at_10
value: 54.14099999999999
- type: recall_at_100
value: 79.593
- type: recall_at_1000
value: 95.063
- type: recall_at_3
value: 38.539
- type: recall_at_5
value: 45.403
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 24.18591666666667
- type: map_at_10
value: 33.84258333333333
- type: map_at_100
value: 35.11391666666666
- type: map_at_1000
value: 35.23258333333333
- type: map_at_3
value: 30.764249999999997
- type: map_at_5
value: 32.52333333333334
- type: mrr_at_1
value: 28.54733333333333
- type: mrr_at_10
value: 37.81725
- type: mrr_at_100
value: 38.716499999999996
- type: mrr_at_1000
value: 38.77458333333333
- type: mrr_at_3
value: 35.157833333333336
- type: mrr_at_5
value: 36.69816666666667
- type: ndcg_at_1
value: 28.54733333333333
- type: ndcg_at_10
value: 39.51508333333334
- type: ndcg_at_100
value: 44.95316666666666
- type: ndcg_at_1000
value: 47.257083333333334
- type: ndcg_at_3
value: 34.205833333333324
- type: ndcg_at_5
value: 36.78266666666667
- type: precision_at_1
value: 28.54733333333333
- type: precision_at_10
value: 7.082583333333334
- type: precision_at_100
value: 1.1590833333333332
- type: precision_at_1000
value: 0.15516666666666662
- type: precision_at_3
value: 15.908750000000001
- type: precision_at_5
value: 11.505416666666669
- type: recall_at_1
value: 24.18591666666667
- type: recall_at_10
value: 52.38758333333333
- type: recall_at_100
value: 76.13666666666667
- type: recall_at_1000
value: 91.99066666666667
- type: recall_at_3
value: 37.78333333333334
- type: recall_at_5
value: 44.30141666666666
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack-stats
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 21.975
- type: map_at_10
value: 29.781000000000002
- type: map_at_100
value: 30.847
- type: map_at_1000
value: 30.94
- type: map_at_3
value: 27.167
- type: map_at_5
value: 28.633999999999997
- type: mrr_at_1
value: 24.387
- type: mrr_at_10
value: 32.476
- type: mrr_at_100
value: 33.337
- type: mrr_at_1000
value: 33.403
- type: mrr_at_3
value: 29.881999999999998
- type: mrr_at_5
value: 31.339
- type: ndcg_at_1
value: 24.387
- type: ndcg_at_10
value: 34.596
- type: ndcg_at_100
value: 39.635
- type: ndcg_at_1000
value: 42.079
- type: ndcg_at_3
value: 29.516
- type: ndcg_at_5
value: 31.959
- type: precision_at_1
value: 24.387
- type: precision_at_10
value: 5.6129999999999995
- type: precision_at_100
value: 0.8909999999999999
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.73
- type: precision_at_5
value: 9.171999999999999
- type: recall_at_1
value: 21.975
- type: recall_at_10
value: 46.826
- type: recall_at_100
value: 69.554
- type: recall_at_1000
value: 87.749
- type: recall_at_3
value: 33.016
- type: recall_at_5
value: 38.97
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack-tex
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 15.614
- type: map_at_10
value: 22.927
- type: map_at_100
value: 24.185000000000002
- type: map_at_1000
value: 24.319
- type: map_at_3
value: 20.596
- type: map_at_5
value: 21.854000000000003
- type: mrr_at_1
value: 18.858
- type: mrr_at_10
value: 26.535999999999998
- type: mrr_at_100
value: 27.582
- type: mrr_at_1000
value: 27.665
- type: mrr_at_3
value: 24.295
- type: mrr_at_5
value: 25.532
- type: ndcg_at_1
value: 18.858
- type: ndcg_at_10
value: 27.583000000000002
- type: ndcg_at_100
value: 33.635
- type: ndcg_at_1000
value: 36.647
- type: ndcg_at_3
value: 23.348
- type: ndcg_at_5
value: 25.257
- type: precision_at_1
value: 18.858
- type: precision_at_10
value: 5.158
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 11.092
- type: precision_at_5
value: 8.1
- type: recall_at_1
value: 15.614
- type: recall_at_10
value: 37.916
- type: recall_at_100
value: 65.205
- type: recall_at_1000
value: 86.453
- type: recall_at_3
value: 26.137
- type: recall_at_5
value: 31.087999999999997
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack-unix
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 23.078000000000003
- type: map_at_10
value: 31.941999999999997
- type: map_at_100
value: 33.196999999999996
- type: map_at_1000
value: 33.303
- type: map_at_3
value: 28.927000000000003
- type: map_at_5
value: 30.707
- type: mrr_at_1
value: 26.866
- type: mrr_at_10
value: 35.557
- type: mrr_at_100
value: 36.569
- type: mrr_at_1000
value: 36.632
- type: mrr_at_3
value: 32.897999999999996
- type: mrr_at_5
value: 34.437
- type: ndcg_at_1
value: 26.866
- type: ndcg_at_10
value: 37.372
- type: ndcg_at_100
value: 43.248
- type: ndcg_at_1000
value: 45.632
- type: ndcg_at_3
value: 31.852999999999998
- type: ndcg_at_5
value: 34.582
- type: precision_at_1
value: 26.866
- type: precision_at_10
value: 6.511
- type: precision_at_100
value: 1.078
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 14.582999999999998
- type: precision_at_5
value: 10.634
- type: recall_at_1
value: 23.078000000000003
- type: recall_at_10
value: 50.334
- type: recall_at_100
value: 75.787
- type: recall_at_1000
value: 92.485
- type: recall_at_3
value: 35.386
- type: recall_at_5
value: 42.225
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack-webmasters
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 22.203999999999997
- type: map_at_10
value: 31.276
- type: map_at_100
value: 32.844
- type: map_at_1000
value: 33.062999999999995
- type: map_at_3
value: 27.733999999999998
- type: map_at_5
value: 29.64
- type: mrr_at_1
value: 27.272999999999996
- type: mrr_at_10
value: 36.083
- type: mrr_at_100
value: 37.008
- type: mrr_at_1000
value: 37.076
- type: mrr_at_3
value: 33.004
- type: mrr_at_5
value: 34.664
- type: ndcg_at_1
value: 27.272999999999996
- type: ndcg_at_10
value: 37.763000000000005
- type: ndcg_at_100
value: 43.566
- type: ndcg_at_1000
value: 46.356
- type: ndcg_at_3
value: 31.673000000000002
- type: ndcg_at_5
value: 34.501
- type: precision_at_1
value: 27.272999999999996
- type: precision_at_10
value: 7.470000000000001
- type: precision_at_100
value: 1.502
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 14.756
- type: precision_at_5
value: 11.225
- type: recall_at_1
value: 22.203999999999997
- type: recall_at_10
value: 51.437999999999995
- type: recall_at_100
value: 76.845
- type: recall_at_1000
value: 94.38600000000001
- type: recall_at_3
value: 34.258
- type: recall_at_5
value: 41.512
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack-wordpress
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 17.474
- type: map_at_10
value: 26.362999999999996
- type: map_at_100
value: 27.456999999999997
- type: map_at_1000
value: 27.567999999999998
- type: map_at_3
value: 23.518
- type: map_at_5
value: 25.068
- type: mrr_at_1
value: 18.669
- type: mrr_at_10
value: 27.998
- type: mrr_at_100
value: 28.953
- type: mrr_at_1000
value: 29.03
- type: mrr_at_3
value: 25.230999999999998
- type: mrr_at_5
value: 26.654
- type: ndcg_at_1
value: 18.669
- type: ndcg_at_10
value: 31.684
- type: ndcg_at_100
value: 36.864999999999995
- type: ndcg_at_1000
value: 39.555
- type: ndcg_at_3
value: 26.057000000000002
- type: ndcg_at_5
value: 28.587
- type: precision_at_1
value: 18.669
- type: precision_at_10
value: 5.3420000000000005
- type: precision_at_100
value: 0.847
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 11.583
- type: precision_at_5
value: 8.466
- type: recall_at_1
value: 17.474
- type: recall_at_10
value: 46.497
- type: recall_at_100
value: 69.977
- type: recall_at_1000
value: 89.872
- type: recall_at_3
value: 31.385999999999996
- type: recall_at_5
value: 37.283
- task:
type: Retrieval
dataset:
type: mteb/climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 17.173
- type: map_at_10
value: 30.407
- type: map_at_100
value: 32.528
- type: map_at_1000
value: 32.698
- type: map_at_3
value: 25.523
- type: map_at_5
value: 28.038
- type: mrr_at_1
value: 38.958
- type: mrr_at_10
value: 51.515
- type: mrr_at_100
value: 52.214000000000006
- type: mrr_at_1000
value: 52.237
- type: mrr_at_3
value: 48.502
- type: mrr_at_5
value: 50.251000000000005
- type: ndcg_at_1
value: 38.958
- type: ndcg_at_10
value: 40.355000000000004
- type: ndcg_at_100
value: 47.68
- type: ndcg_at_1000
value: 50.370000000000005
- type: ndcg_at_3
value: 33.946
- type: ndcg_at_5
value: 36.057
- type: precision_at_1
value: 38.958
- type: precision_at_10
value: 12.508
- type: precision_at_100
value: 2.054
- type: precision_at_1000
value: 0.256
- type: precision_at_3
value: 25.581
- type: precision_at_5
value: 19.256999999999998
- type: recall_at_1
value: 17.173
- type: recall_at_10
value: 46.967
- type: recall_at_100
value: 71.47200000000001
- type: recall_at_1000
value: 86.238
- type: recall_at_3
value: 30.961
- type: recall_at_5
value: 37.539
- task:
type: Retrieval
dataset:
type: mteb/dbpedia
name: MTEB DBPedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 8.999
- type: map_at_10
value: 18.989
- type: map_at_100
value: 26.133
- type: map_at_1000
value: 27.666
- type: map_at_3
value: 13.918
- type: map_at_5
value: 16.473
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.161
- type: mrr_at_100
value: 74.516
- type: mrr_at_1000
value: 74.524
- type: mrr_at_3
value: 72.875
- type: mrr_at_5
value: 73.613
- type: ndcg_at_1
value: 54.37499999999999
- type: ndcg_at_10
value: 39.902
- type: ndcg_at_100
value: 44.212
- type: ndcg_at_1000
value: 51.62
- type: ndcg_at_3
value: 45.193
- type: ndcg_at_5
value: 42.541000000000004
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 30.425
- type: precision_at_100
value: 9.754999999999999
- type: precision_at_1000
value: 2.043
- type: precision_at_3
value: 48.25
- type: precision_at_5
value: 40.65
- type: recall_at_1
value: 8.999
- type: recall_at_10
value: 24.133
- type: recall_at_100
value: 49.138999999999996
- type: recall_at_1000
value: 72.639
- type: recall_at_3
value: 15.287999999999998
- type: recall_at_5
value: 19.415
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.38999999999999
- type: f1
value: 41.444205512055234
- task:
type: Retrieval
dataset:
type: mteb/fever
name: MTEB FEVER
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 87.35000000000001
- type: map_at_10
value: 92.837
- type: map_at_100
value: 92.996
- type: map_at_1000
value: 93.006
- type: map_at_3
value: 92.187
- type: map_at_5
value: 92.595
- type: mrr_at_1
value: 93.864
- type: mrr_at_10
value: 96.723
- type: mrr_at_100
value: 96.72500000000001
- type: mrr_at_1000
value: 96.72500000000001
- type: mrr_at_3
value: 96.64
- type: mrr_at_5
value: 96.71499999999999
- type: ndcg_at_1
value: 93.864
- type: ndcg_at_10
value: 94.813
- type: ndcg_at_100
value: 95.243
- type: ndcg_at_1000
value: 95.38600000000001
- type: ndcg_at_3
value: 94.196
- type: ndcg_at_5
value: 94.521
- type: precision_at_1
value: 93.864
- type: precision_at_10
value: 10.951
- type: precision_at_100
value: 1.1400000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 35.114000000000004
- type: precision_at_5
value: 21.476
- type: recall_at_1
value: 87.35000000000001
- type: recall_at_10
value: 96.941
- type: recall_at_100
value: 98.397
- type: recall_at_1000
value: 99.21600000000001
- type: recall_at_3
value: 95.149
- type: recall_at_5
value: 96.131
- task:
type: Retrieval
dataset:
type: mteb/fiqa
name: MTEB FiQA2018
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 24.476
- type: map_at_10
value: 40.11
- type: map_at_100
value: 42.229
- type: map_at_1000
value: 42.378
- type: map_at_3
value: 34.512
- type: map_at_5
value: 38.037
- type: mrr_at_1
value: 47.839999999999996
- type: mrr_at_10
value: 57.053
- type: mrr_at_100
value: 57.772
- type: mrr_at_1000
value: 57.799
- type: mrr_at_3
value: 54.552
- type: mrr_at_5
value: 56.011
- type: ndcg_at_1
value: 47.839999999999996
- type: ndcg_at_10
value: 48.650999999999996
- type: ndcg_at_100
value: 55.681000000000004
- type: ndcg_at_1000
value: 57.979
- type: ndcg_at_3
value: 43.923
- type: ndcg_at_5
value: 46.037
- type: precision_at_1
value: 47.839999999999996
- type: precision_at_10
value: 13.395000000000001
- type: precision_at_100
value: 2.0660000000000003
- type: precision_at_1000
value: 0.248
- type: precision_at_3
value: 29.064
- type: precision_at_5
value: 22.006
- type: recall_at_1
value: 24.476
- type: recall_at_10
value: 56.216
- type: recall_at_100
value: 81.798
- type: recall_at_1000
value: 95.48299999999999
- type: recall_at_3
value: 39.357
- type: recall_at_5
value: 47.802
- task:
type: Retrieval
dataset:
type: mteb/hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 42.728
- type: map_at_10
value: 57.737
- type: map_at_100
value: 58.531
- type: map_at_1000
value: 58.594
- type: map_at_3
value: 54.869
- type: map_at_5
value: 56.55
- type: mrr_at_1
value: 85.456
- type: mrr_at_10
value: 90.062
- type: mrr_at_100
value: 90.159
- type: mrr_at_1000
value: 90.16
- type: mrr_at_3
value: 89.37899999999999
- type: mrr_at_5
value: 89.81
- type: ndcg_at_1
value: 85.456
- type: ndcg_at_10
value: 67.755
- type: ndcg_at_100
value: 70.341
- type: ndcg_at_1000
value: 71.538
- type: ndcg_at_3
value: 63.735
- type: ndcg_at_5
value: 65.823
- type: precision_at_1
value: 85.456
- type: precision_at_10
value: 13.450000000000001
- type: precision_at_100
value: 1.545
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 38.861000000000004
- type: precision_at_5
value: 24.964
- type: recall_at_1
value: 42.728
- type: recall_at_10
value: 67.252
- type: recall_at_100
value: 77.265
- type: recall_at_1000
value: 85.246
- type: recall_at_3
value: 58.292
- type: recall_at_5
value: 62.41100000000001
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 87.4836
- type: ap
value: 82.29552224030336
- type: f1
value: 87.42791432227448
- task:
type: Retrieval
dataset:
type: mteb/msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 23.015
- type: map_at_10
value: 35.621
- type: map_at_100
value: 36.809
- type: map_at_1000
value: 36.853
- type: map_at_3
value: 31.832
- type: map_at_5
value: 34.006
- type: mrr_at_1
value: 23.738999999999997
- type: mrr_at_10
value: 36.309999999999995
- type: mrr_at_100
value: 37.422
- type: mrr_at_1000
value: 37.461
- type: mrr_at_3
value: 32.592999999999996
- type: mrr_at_5
value: 34.736
- type: ndcg_at_1
value: 23.724999999999998
- type: ndcg_at_10
value: 42.617
- type: ndcg_at_100
value: 48.217999999999996
- type: ndcg_at_1000
value: 49.309
- type: ndcg_at_3
value: 34.905
- type: ndcg_at_5
value: 38.769
- type: precision_at_1
value: 23.724999999999998
- type: precision_at_10
value: 6.689
- type: precision_at_100
value: 0.9480000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.89
- type: precision_at_5
value: 10.897
- type: recall_at_1
value: 23.015
- type: recall_at_10
value: 64.041
- type: recall_at_100
value: 89.724
- type: recall_at_1000
value: 98.00999999999999
- type: recall_at_3
value: 43.064
- type: recall_at_5
value: 52.31099999999999
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.49794801641588
- type: f1
value: 96.28931114498003
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 82.81121751025992
- type: f1
value: 63.18740125901853
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 77.66644250168123
- type: f1
value: 74.93211186867839
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 81.77202420981843
- type: f1
value: 81.63681969283554
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.596687684870645
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.26965660101405
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.33619694846802
- type: mrr
value: 32.53719657720334
- task:
type: Retrieval
dataset:
type: mteb/nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.0729999999999995
- type: map_at_10
value: 13.245999999999999
- type: map_at_100
value: 16.747999999999998
- type: map_at_1000
value: 18.163
- type: map_at_3
value: 10.064
- type: map_at_5
value: 11.513
- type: mrr_at_1
value: 49.536
- type: mrr_at_10
value: 58.092
- type: mrr_at_100
value: 58.752
- type: mrr_at_1000
value: 58.78
- type: mrr_at_3
value: 56.398
- type: mrr_at_5
value: 57.389
- type: ndcg_at_1
value: 47.059
- type: ndcg_at_10
value: 35.881
- type: ndcg_at_100
value: 32.751999999999995
- type: ndcg_at_1000
value: 41.498000000000005
- type: ndcg_at_3
value: 42.518
- type: ndcg_at_5
value: 39.550999999999995
- type: precision_at_1
value: 49.536
- type: precision_at_10
value: 26.316
- type: precision_at_100
value: 8.084
- type: precision_at_1000
value: 2.081
- type: precision_at_3
value: 39.938
- type: precision_at_5
value: 34.056
- type: recall_at_1
value: 6.0729999999999995
- type: recall_at_10
value: 16.593
- type: recall_at_100
value: 32.883
- type: recall_at_1000
value: 64.654
- type: recall_at_3
value: 11.174000000000001
- type: recall_at_5
value: 13.528
- task:
type: Retrieval
dataset:
type: mteb/nq
name: MTEB NQ
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 30.043
- type: map_at_10
value: 45.318999999999996
- type: map_at_100
value: 46.381
- type: map_at_1000
value: 46.412
- type: map_at_3
value: 40.941
- type: map_at_5
value: 43.662
- type: mrr_at_1
value: 33.98
- type: mrr_at_10
value: 47.870000000000005
- type: mrr_at_100
value: 48.681999999999995
- type: mrr_at_1000
value: 48.703
- type: mrr_at_3
value: 44.341
- type: mrr_at_5
value: 46.547
- type: ndcg_at_1
value: 33.98
- type: ndcg_at_10
value: 52.957
- type: ndcg_at_100
value: 57.434
- type: ndcg_at_1000
value: 58.103
- type: ndcg_at_3
value: 44.896
- type: ndcg_at_5
value: 49.353
- type: precision_at_1
value: 33.98
- type: precision_at_10
value: 8.786
- type: precision_at_100
value: 1.1280000000000001
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 20.577
- type: precision_at_5
value: 14.942
- type: recall_at_1
value: 30.043
- type: recall_at_10
value: 73.593
- type: recall_at_100
value: 93.026
- type: recall_at_1000
value: 97.943
- type: recall_at_3
value: 52.955
- type: recall_at_5
value: 63.132
- task:
type: Retrieval
dataset:
type: mteb/quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.808
- type: map_at_10
value: 84.675
- type: map_at_100
value: 85.322
- type: map_at_1000
value: 85.33800000000001
- type: map_at_3
value: 81.68900000000001
- type: map_at_5
value: 83.543
- type: mrr_at_1
value: 81.5
- type: mrr_at_10
value: 87.59700000000001
- type: mrr_at_100
value: 87.705
- type: mrr_at_1000
value: 87.70599999999999
- type: mrr_at_3
value: 86.607
- type: mrr_at_5
value: 87.289
- type: ndcg_at_1
value: 81.51
- type: ndcg_at_10
value: 88.41799999999999
- type: ndcg_at_100
value: 89.644
- type: ndcg_at_1000
value: 89.725
- type: ndcg_at_3
value: 85.49900000000001
- type: ndcg_at_5
value: 87.078
- type: precision_at_1
value: 81.51
- type: precision_at_10
value: 13.438
- type: precision_at_100
value: 1.532
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.363
- type: precision_at_5
value: 24.57
- type: recall_at_1
value: 70.808
- type: recall_at_10
value: 95.575
- type: recall_at_100
value: 99.667
- type: recall_at_1000
value: 99.98899999999999
- type: recall_at_3
value: 87.223
- type: recall_at_5
value: 91.682
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 58.614831329137715
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 66.86580408560826
- task:
type: Retrieval
dataset:
type: mteb/scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.093
- type: map_at_10
value: 13.014000000000001
- type: map_at_100
value: 15.412999999999998
- type: map_at_1000
value: 15.756999999999998
- type: map_at_3
value: 9.216000000000001
- type: map_at_5
value: 11.036999999999999
- type: mrr_at_1
value: 25.1
- type: mrr_at_10
value: 37.133
- type: mrr_at_100
value: 38.165
- type: mrr_at_1000
value: 38.198
- type: mrr_at_3
value: 33.217
- type: mrr_at_5
value: 35.732
- type: ndcg_at_1
value: 25.1
- type: ndcg_at_10
value: 21.918000000000003
- type: ndcg_at_100
value: 30.983
- type: ndcg_at_1000
value: 36.629
- type: ndcg_at_3
value: 20.544999999999998
- type: ndcg_at_5
value: 18.192
- type: precision_at_1
value: 25.1
- type: precision_at_10
value: 11.44
- type: precision_at_100
value: 2.459
- type: precision_at_1000
value: 0.381
- type: precision_at_3
value: 19.267
- type: precision_at_5
value: 16.16
- type: recall_at_1
value: 5.093
- type: recall_at_10
value: 23.215
- type: recall_at_100
value: 49.902
- type: recall_at_1000
value: 77.403
- type: recall_at_3
value: 11.733
- type: recall_at_5
value: 16.372999999999998
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.9365442977452
- type: cos_sim_spearman
value: 79.36960687383745
- type: euclidean_pearson
value: 79.6045204840714
- type: euclidean_spearman
value: 79.26382712751337
- type: manhattan_pearson
value: 79.4805084789529
- type: manhattan_spearman
value: 79.21847863209523
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 83.27906192961453
- type: cos_sim_spearman
value: 74.38364712099211
- type: euclidean_pearson
value: 78.54358927241223
- type: euclidean_spearman
value: 74.22185560806376
- type: manhattan_pearson
value: 78.50904327377751
- type: manhattan_spearman
value: 74.2627500781748
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.66863742649639
- type: cos_sim_spearman
value: 84.70630905216271
- type: euclidean_pearson
value: 84.64498334705334
- type: euclidean_spearman
value: 84.87204770690148
- type: manhattan_pearson
value: 84.65774227976077
- type: manhattan_spearman
value: 84.91251851797985
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.1577763924467
- type: cos_sim_spearman
value: 80.10314039230198
- type: euclidean_pearson
value: 81.51346991046043
- type: euclidean_spearman
value: 80.08678485109435
- type: manhattan_pearson
value: 81.57058914661894
- type: manhattan_spearman
value: 80.1516230725106
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.40310839662533
- type: cos_sim_spearman
value: 87.16293477217867
- type: euclidean_pearson
value: 86.50688711184775
- type: euclidean_spearman
value: 87.08651444923031
- type: manhattan_pearson
value: 86.54674677557857
- type: manhattan_spearman
value: 87.15079017870971
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.32886275207817
- type: cos_sim_spearman
value: 85.0190460590732
- type: euclidean_pearson
value: 84.42553652784679
- type: euclidean_spearman
value: 85.20027364279328
- type: manhattan_pearson
value: 84.42926246281078
- type: manhattan_spearman
value: 85.20187419804306
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 90.76732216967812
- type: cos_sim_spearman
value: 90.63701653633909
- type: euclidean_pearson
value: 90.26678186114682
- type: euclidean_spearman
value: 90.67288073455427
- type: manhattan_pearson
value: 90.20772020584582
- type: manhattan_spearman
value: 90.60764863983702
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 69.09280387698125
- type: cos_sim_spearman
value: 68.62743151172162
- type: euclidean_pearson
value: 69.89386398104689
- type: euclidean_spearman
value: 68.71191066733556
- type: manhattan_pearson
value: 69.92516500604872
- type: manhattan_spearman
value: 68.80452846992576
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.13178592019887
- type: cos_sim_spearman
value: 86.03947178806887
- type: euclidean_pearson
value: 85.87029414285313
- type: euclidean_spearman
value: 86.04960843306998
- type: manhattan_pearson
value: 85.92946858580146
- type: manhattan_spearman
value: 86.12575341860442
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.16657063002837
- type: mrr
value: 95.73671063867141
- task:
type: Retrieval
dataset:
type: mteb/scifact
name: MTEB SciFact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 63.510999999999996
- type: map_at_10
value: 72.76899999999999
- type: map_at_100
value: 73.303
- type: map_at_1000
value: 73.32499999999999
- type: map_at_3
value: 70.514
- type: map_at_5
value: 71.929
- type: mrr_at_1
value: 66.333
- type: mrr_at_10
value: 73.75
- type: mrr_at_100
value: 74.119
- type: mrr_at_1000
value: 74.138
- type: mrr_at_3
value: 72.222
- type: mrr_at_5
value: 73.122
- type: ndcg_at_1
value: 66.333
- type: ndcg_at_10
value: 76.774
- type: ndcg_at_100
value: 78.78500000000001
- type: ndcg_at_1000
value: 79.254
- type: ndcg_at_3
value: 73.088
- type: ndcg_at_5
value: 75.002
- type: precision_at_1
value: 66.333
- type: precision_at_10
value: 9.833
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.222
- type: precision_at_5
value: 18.333
- type: recall_at_1
value: 63.510999999999996
- type: recall_at_10
value: 87.98899999999999
- type: recall_at_100
value: 96.5
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 77.86699999999999
- type: recall_at_5
value: 82.73899999999999
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.78514851485149
- type: cos_sim_ap
value: 94.94214383862038
- type: cos_sim_f1
value: 89.02255639097744
- type: cos_sim_precision
value: 89.2462311557789
- type: cos_sim_recall
value: 88.8
- type: dot_accuracy
value: 99.78217821782178
- type: dot_ap
value: 94.69965247836805
- type: dot_f1
value: 88.78695208970439
- type: dot_precision
value: 90.54054054054053
- type: dot_recall
value: 87.1
- type: euclidean_accuracy
value: 99.78118811881188
- type: euclidean_ap
value: 94.9865187695411
- type: euclidean_f1
value: 88.99950223992036
- type: euclidean_precision
value: 88.60257680872151
- type: euclidean_recall
value: 89.4
- type: manhattan_accuracy
value: 99.78811881188119
- type: manhattan_ap
value: 95.0021236766459
- type: manhattan_f1
value: 89.12071535022356
- type: manhattan_precision
value: 88.54886475814413
- type: manhattan_recall
value: 89.7
- type: max_accuracy
value: 99.78811881188119
- type: max_ap
value: 95.0021236766459
- type: max_f1
value: 89.12071535022356
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 68.93190546593995
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 37.602808534760655
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.29214480978073
- type: mrr
value: 53.123169722434426
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.967800769650022
- type: cos_sim_spearman
value: 31.168490040206926
- type: dot_pearson
value: 30.888603021128553
- type: dot_spearman
value: 31.028241262520385
- task:
type: Retrieval
dataset:
type: mteb/trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22300000000000003
- type: map_at_10
value: 1.781
- type: map_at_100
value: 9.905999999999999
- type: map_at_1000
value: 23.455000000000002
- type: map_at_3
value: 0.569
- type: map_at_5
value: 0.918
- type: mrr_at_1
value: 84.0
- type: mrr_at_10
value: 91.067
- type: mrr_at_100
value: 91.067
- type: mrr_at_1000
value: 91.067
- type: mrr_at_3
value: 90.667
- type: mrr_at_5
value: 91.067
- type: ndcg_at_1
value: 78.0
- type: ndcg_at_10
value: 73.13499999999999
- type: ndcg_at_100
value: 55.32
- type: ndcg_at_1000
value: 49.532
- type: ndcg_at_3
value: 73.715
- type: ndcg_at_5
value: 72.74199999999999
- type: precision_at_1
value: 84.0
- type: precision_at_10
value: 78.8
- type: precision_at_100
value: 56.32
- type: precision_at_1000
value: 21.504
- type: precision_at_3
value: 77.333
- type: precision_at_5
value: 78.0
- type: recall_at_1
value: 0.22300000000000003
- type: recall_at_10
value: 2.049
- type: recall_at_100
value: 13.553
- type: recall_at_1000
value: 46.367999999999995
- type: recall_at_3
value: 0.604
- type: recall_at_5
value: 1.015
- task:
type: Retrieval
dataset:
type: mteb/touche2020
name: MTEB Touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 3.0380000000000003
- type: map_at_10
value: 10.188
- type: map_at_100
value: 16.395
- type: map_at_1000
value: 18.024
- type: map_at_3
value: 6.236
- type: map_at_5
value: 7.276000000000001
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 46.292
- type: mrr_at_100
value: 47.446
- type: mrr_at_1000
value: 47.446
- type: mrr_at_3
value: 41.156
- type: mrr_at_5
value: 44.32
- type: ndcg_at_1
value: 32.653
- type: ndcg_at_10
value: 25.219
- type: ndcg_at_100
value: 37.802
- type: ndcg_at_1000
value: 49.274
- type: ndcg_at_3
value: 28.605999999999998
- type: ndcg_at_5
value: 26.21
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 21.837
- type: precision_at_100
value: 7.776
- type: precision_at_1000
value: 1.522
- type: precision_at_3
value: 28.571
- type: precision_at_5
value: 25.306
- type: recall_at_1
value: 3.0380000000000003
- type: recall_at_10
value: 16.298000000000002
- type: recall_at_100
value: 48.712
- type: recall_at_1000
value: 83.16799999999999
- type: recall_at_3
value: 7.265000000000001
- type: recall_at_5
value: 9.551
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 83.978
- type: ap
value: 24.751887949330015
- type: f1
value: 66.8685134049279
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.573288058856825
- type: f1
value: 61.973261751726604
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 48.75483298792469
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.36824223639506
- type: cos_sim_ap
value: 75.53126388573047
- type: cos_sim_f1
value: 67.9912831688245
- type: cos_sim_precision
value: 66.11817501869858
- type: cos_sim_recall
value: 69.9736147757256
- type: dot_accuracy
value: 86.39804494248078
- type: dot_ap
value: 75.27598891718046
- type: dot_f1
value: 67.91146284159763
- type: dot_precision
value: 63.90505003490807
- type: dot_recall
value: 72.45382585751979
- type: euclidean_accuracy
value: 86.36228169517793
- type: euclidean_ap
value: 75.51438087434647
- type: euclidean_f1
value: 68.02370523061066
- type: euclidean_precision
value: 66.46525679758308
- type: euclidean_recall
value: 69.65699208443272
- type: manhattan_accuracy
value: 86.46361089586935
- type: manhattan_ap
value: 75.50800785730111
- type: manhattan_f1
value: 67.9220437187253
- type: manhattan_precision
value: 67.79705573080967
- type: manhattan_recall
value: 68.04749340369392
- type: max_accuracy
value: 86.46361089586935
- type: max_ap
value: 75.53126388573047
- type: max_f1
value: 68.02370523061066
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.80350836341057
- type: cos_sim_ap
value: 85.51101933260743
- type: cos_sim_f1
value: 77.9152271629704
- type: cos_sim_precision
value: 75.27815662910056
- type: cos_sim_recall
value: 80.74376347397599
- type: dot_accuracy
value: 88.84425815966158
- type: dot_ap
value: 85.49726945962519
- type: dot_f1
value: 77.94445269567801
- type: dot_precision
value: 75.27251864601261
- type: dot_recall
value: 80.81305820757623
- type: euclidean_accuracy
value: 88.80350836341057
- type: euclidean_ap
value: 85.4882880790211
- type: euclidean_f1
value: 77.87063284615103
- type: euclidean_precision
value: 74.61022927689595
- type: euclidean_recall
value: 81.42901139513397
- type: manhattan_accuracy
value: 88.7161873714441
- type: manhattan_ap
value: 85.45753871906821
- type: manhattan_f1
value: 77.8686401480111
- type: manhattan_precision
value: 74.95903683123174
- type: manhattan_recall
value: 81.01324299353249
- type: max_accuracy
value: 88.84425815966158
- type: max_ap
value: 85.51101933260743
- type: max_f1
value: 77.94445269567801
---
<!-- **English** | [中文](./README_zh.md) -->
# gte-base-en-v1.5
We introduce the `gte-v1.5` series, upgraded `gte` embeddings that support a context length of up to **8192** tokens, while further enhancing model performance.
The models are built upon the `transformer++` encoder [backbone](https://huggingface.co/Alibaba-NLP/new-impl) (BERT + RoPE + GLU).
The `gte-v1.5` series achieve state-of-the-art scores on the MTEB benchmark within the same model size category and provide competitive performance on the LoCo long-context retrieval tests (refer to [Evaluation](#evaluation)).
We also present the [`gte-Qwen1.5-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct),
a SOTA instruction-tuned multi-lingual embedding model that ranked 2nd in MTEB and 1st in C-MTEB.
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Institute for Intelligent Computing, Alibaba Group
- **Model type:** Text Embeddings
- **Paper:** Coming soon.
<!-- - **Demo [optional]:** [More Information Needed] -->
### Model list
| Models | Language | Model Size | Max Seq. Length | Dimension | MTEB-en | LoCo |
|:-----: | :-----: |:-----: |:-----: |:-----: | :-----: | :-----: |
|[`gte-Qwen1.5-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct)| Multiple | 7720 | 32768 | 4096 | 67.34 | 87.57 |
|[`gte-large-en-v1.5`](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 434 | 8192 | 1024 | 65.39 | 86.71 |
|[`gte-base-en-v1.5`](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 137 | 8192 | 768 | 64.11 | 87.44 |
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# Requires transformers>=4.36.0
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer
input_texts = [
"what is the capital of China?",
"how to implement quick sort in python?",
"Beijing",
"sorting algorithms"
]
model_path = 'Alibaba-NLP/gte-base-en-v1.5'
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=8192, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = outputs.last_hidden_state[:, 0]
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```
**It is recommended to install xformers and enable unpadding for acceleration, refer to [enable-unpadding-and-xformers](https://huggingface.co/Alibaba-NLP/new-impl#recommendation-enable-unpadding-and-acceleration-with-xformers).**
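As a rough illustration (not an official recipe), enabling these options might look like the sketch below; the `unpad_inputs` and `use_memory_efficient_attention` arguments are assumptions taken from the linked implementation page, so verify them against the remote code before relying on them.
```python
# Hypothetical sketch: loading the model with unpadding and xformers-backed attention.
# The `unpad_inputs` and `use_memory_efficient_attention` flag names are assumptions
# based on the linked implementation page; check the remote code before relying on them.
import torch
from transformers import AutoModel, AutoTokenizer

model_path = 'Alibaba-NLP/gte-base-en-v1.5'
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,
    unpad_inputs=True,                    # skip computation on padding tokens
    use_memory_efficient_attention=True,  # requires xformers to be installed
    torch_dtype=torch.float16,
).to('cuda')
```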
Use with `sentence-transformers`:
```python
# Requires sentence_transformers>=2.7.0
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = ['That is a happy person', 'That is a very happy person']
model = SentenceTransformer('Alibaba-NLP/gte-base-en-v1.5', trust_remote_code=True)
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
Use with `transformers.js`:
```js
// npm i @xenova/transformers
import { pipeline, dot } from '@xenova/transformers';
// Create feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'Alibaba-NLP/gte-base-en-v1.5', {
quantized: false, // Comment out this line to use the quantized version
});
// Generate sentence embeddings
const sentences = [
"what is the capital of China?",
"how to implement quick sort in python?",
"Beijing",
"sorting algorithms"
]
const output = await extractor(sentences, { normalize: true, pooling: 'cls' });
// Compute similarity scores
const [source_embeddings, ...document_embeddings ] = output.tolist();
const similarities = document_embeddings.map(x => 100 * dot(source_embeddings, x));
console.log(similarities); // [34.504930869007296, 64.03973265120138, 19.520042686034362]
```
## Training Details
### Training Data
- Masked language modeling (MLM): `c4-en`
- Weak-supervised contrastive (WSC) pre-training: [GTE](https://arxiv.org/pdf/2308.03281.pdf) pre-training data
- Supervised contrastive fine-tuning: [GTE](https://arxiv.org/pdf/2308.03281.pdf) fine-tuning data
### Training Procedure
To enable the backbone model to support a context length of 8192, we adopted a multi-stage training strategy.
The model first undergoes preliminary MLM pre-training on shorter lengths.
Then, we resample the data, reducing the proportion of short texts, and continue the MLM pre-training.
The entire training process is as follows:
- MLM-2048: lr 5e-4, mlm_probability 0.3, batch_size 4096, num_steps 70000, rope_base 10000
- MLM-8192: lr 5e-5, mlm_probability 0.3, batch_size 1024, num_steps 20000, rope_base 500000
- WSC: max_len 512, lr 2e-4, batch_size 32768, num_steps 100000
- Fine-tuning: TODO
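The supervised contrastive stage is not spelled out above. As a generic illustration of how contrastive fine-tuning of embedding models usually works — in-batch negatives scored with a temperature-scaled InfoNCE loss — a minimal sketch is shown below; it is not the exact recipe used for `gte-v1.5`.
```python
# Generic in-batch-negative InfoNCE loss, a common shape for contrastive fine-tuning
# of embedding models. Temperature, batch construction, and hard-negative mining for
# gte-v1.5 are not specified here; this is illustrative only.
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor, doc_emb: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    # Row i of doc_emb is assumed to be the positive passage for row i of query_emb;
    # every other row in the batch serves as a negative.
    query_emb = F.normalize(query_emb, p=2, dim=1)
    doc_emb = F.normalize(doc_emb, p=2, dim=1)
    logits = query_emb @ doc_emb.T / temperature                  # (batch, batch) similarities
    labels = torch.arange(logits.size(0), device=logits.device)   # positives on the diagonal
    return F.cross_entropy(logits, labels)
```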
## Evaluation
### MTEB
The results of other models are retrieved from the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
The gte evaluation setting: `mteb==1.2.0`, fp16 automatic mixed precision, `max_length=8192`, and an NTK scaling factor of 2 (equivalent to doubling `rope_base`); a minimal invocation sketch follows the results table.
| Model Name | Param Size (M) | Dimension | Sequence Length | Average (56) | Class. (12) | Clust. (11) | Pair Class. (3) | Reran. (4) | Retr. (15) | STS (10) | Summ. (1) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**gte-large-en-v1.5**](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 434 | 1024 | 8192 | **65.39** | 77.75 | 47.95 | 84.63 | 58.50 | 57.91 | 81.43 | 30.91 |
| [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 335 | 1024 | 512 | 64.68 | 75.64 | 46.71 | 87.2 | 60.11 | 54.39 | 85 | 32.71 |
| [multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) | 560 | 1024 | 514 | 64.41 | 77.56 | 47.1 | 86.19 | 58.58 | 52.47 | 84.78 | 30.39 |
| [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)| 335 | 1024 | 512 | 64.23 | 75.97 | 46.08 | 87.12 | 60.03 | 54.29 | 83.11 | 31.61 |
| [**gte-base-en-v1.5**](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 137 | 768 | 8192 | **64.11** | 77.17 | 46.82 | 85.33 | 57.66 | 54.09 | 81.97 | 31.17 |
| [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)| 109 | 768 | 512 | 63.55 | 75.53 | 45.77 | 86.55 | 58.86 | 53.25 | 82.4 | 31.07 |
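As a minimal invocation sketch (task selection and output paths are placeholders, and the NTK/rope_base adjustment is omitted), an evaluation run with the `mteb` package might look like this:
```python
# Illustrative MTEB run with the mteb package (API as of mteb==1.2.0); the task list
# and output folder are placeholders, and the rope_base/NTK adjustment is not shown.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('Alibaba-NLP/gte-base-en-v1.5', trust_remote_code=True)
model.max_seq_length = 8192  # evaluate with the full 8192-token context

evaluation = MTEB(tasks=["Banking77Classification", "SciFact"])
evaluation.run(model, output_folder="results/gte-base-en-v1.5")
```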
### LoCo
| Model Name | Dimension | Sequence Length | Average (5) | QsmsumRetrieval | SummScreenRetrieval | QasperAbastractRetrieval | QasperTitleRetrieval | GovReportRetrieval |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [gte-qwen1.5-7b](https://huggingface.co/Alibaba-NLP/gte-qwen1.5-7b) | 4096 | 32768 | 87.57 | 49.37 | 93.10 | 99.67 | 97.54 | 98.21 |
| [gte-large-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-v1.5) |1024 | 8192 | 86.71 | 44.55 | 92.61 | 99.82 | 97.81 | 98.74 |
| [gte-base-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-v1.5) | 768 | 8192 | 87.44 | 49.91 | 91.78 | 99.82 | 97.13 | 98.58 |
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}
```
|
facebook/opt-350m | facebook | "2023-09-15T13:09:50Z" | 224,502 | 117 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"opt",
"text-generation",
"en",
"arxiv:2205.01068",
"arxiv:2005.14165",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-05-11T08:25:39Z" | ---
language: en
inference: false
tags:
- text-generation
license: other
commercial: false
---
# OPT : Open Pre-trained Transformer Language Models
OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.
**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.
## Intro
To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068):
> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.
> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.
## Model description
OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).
## Intended uses & limitations
The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
### How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-350m")
>>> generator("What are we having for dinner?")
[{'generated_text': "What are we having for dinner?\nI'm having a steak and a salad.\nI'm"}]
```
By default, generation is deterministic. In order to use top-k sampling, please set `do_sample` to `True`.
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True)
>>> generator("What are we having for dinner?")
[{'generated_text': "What are we having for dinner?\n\nWith spring fast approaching, it’s only appropriate"}]
```
### Limitations and bias
As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5)
>>> generator("The woman worked as a")
[{'generated_text': "The woman works as a substitute teacher for kids who have missed school. She's the teacher herself,"},
{'generated_text': 'The woman works as a security guard for another company and does an average of around $13/hour'},
{'generated_text': 'The woman works as a receptionist, she could at the least wait a week or two for her'},
{'generated_text': 'The woman works as a manager/intern/career development coach/advisor at a nursing home'},
{'generated_text': 'The woman works as a maid and has to clean the house but you can tell her to do it'}]
```
compared to:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5)
>>> generator("The man worked as a")
[{'generated_text': 'The man works as a security guard for the National Football League franchise. He has been a part of'},
{'generated_text': 'The man works as a security guard for another company and does an excellent job.\nI remember when'},
{'generated_text': 'The man works as a "secret agent" but at the same time he\'s working to protect the'},
{'generated_text': 'The man works as a manager/operator/servant for a grocery store and does a lot of'},
{'generated_text': 'The man works as a bouncer near the scene of the accident - how he could do that is'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:
- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included.
- Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021)
- CCNewsV2 containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b)
The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.
The dataset might contain offensive content as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.
### Collection process
The dataset was collected from the internet, and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*
## Training procedure
### Preprocessing
The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
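As a small illustrative check (not part of the original card), the tokenizer can be inspected directly:
```python
# Illustrative check of the GPT2-style byte-level BPE tokenizer shipped with OPT.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
print(len(tokenizer))                      # size of the vocabulary the model was trained with
print(tokenizer("Hello world").input_ids)  # BPE token ids (a BOS token is prepended)
```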
The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training.
### BibTeX entry and citation info
```bibtex
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
avsolatorio/GIST-all-MiniLM-L6-v2 | avsolatorio | "2024-04-24T23:15:05Z" | 224,411 | 5 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"feature-extraction",
"mteb",
"sentence-similarity",
"en",
"arxiv:2402.16829",
"arxiv:2212.09741",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-02-03T05:28:49Z" | ---
language:
- en
library_name: sentence-transformers
license: mit
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- mteb
- sentence-similarity
- sentence-transformers
model-index:
- name: GIST-all-MiniLM-L6-v2
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 72.8955223880597
- type: ap
value: 35.447605103320775
- type: f1
value: 66.82951715365854
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 87.19474999999998
- type: ap
value: 83.09577890808514
- type: f1
value: 87.13833121762009
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 42.556000000000004
- type: f1
value: 42.236256693772276
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.884999999999998
- type: map_at_10
value: 42.364000000000004
- type: map_at_100
value: 43.382
- type: map_at_1000
value: 43.391000000000005
- type: map_at_3
value: 37.162
- type: map_at_5
value: 40.139
- type: mrr_at_1
value: 26.884999999999998
- type: mrr_at_10
value: 42.193999999999996
- type: mrr_at_100
value: 43.211
- type: mrr_at_1000
value: 43.221
- type: mrr_at_3
value: 36.949
- type: mrr_at_5
value: 40.004
- type: ndcg_at_1
value: 26.884999999999998
- type: ndcg_at_10
value: 51.254999999999995
- type: ndcg_at_100
value: 55.481
- type: ndcg_at_1000
value: 55.68300000000001
- type: ndcg_at_3
value: 40.565
- type: ndcg_at_5
value: 45.882
- type: precision_at_1
value: 26.884999999999998
- type: precision_at_10
value: 7.9799999999999995
- type: precision_at_100
value: 0.98
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.808999999999997
- type: precision_at_5
value: 12.645999999999999
- type: recall_at_1
value: 26.884999999999998
- type: recall_at_10
value: 79.801
- type: recall_at_100
value: 98.009
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 50.427
- type: recall_at_5
value: 63.229
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.31044837358167
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 35.44751738734691
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.96517580629869
- type: mrr
value: 76.30051004704744
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.97262600499639
- type: cos_sim_spearman
value: 81.25787561220484
- type: euclidean_pearson
value: 64.96260261677082
- type: euclidean_spearman
value: 64.17616109254686
- type: manhattan_pearson
value: 65.05620628102835
- type: manhattan_spearman
value: 64.71171546419122
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.2435064935065
- type: f1
value: 84.2334859253828
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.38358435972693
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.093619653843124
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.016999999999996
- type: map_at_10
value: 47.019
- type: map_at_100
value: 48.634
- type: map_at_1000
value: 48.757
- type: map_at_3
value: 43.372
- type: map_at_5
value: 45.314
- type: mrr_at_1
value: 43.491
- type: mrr_at_10
value: 53.284
- type: mrr_at_100
value: 54.038
- type: mrr_at_1000
value: 54.071000000000005
- type: mrr_at_3
value: 51.001
- type: mrr_at_5
value: 52.282
- type: ndcg_at_1
value: 43.491
- type: ndcg_at_10
value: 53.498999999999995
- type: ndcg_at_100
value: 58.733999999999995
- type: ndcg_at_1000
value: 60.307
- type: ndcg_at_3
value: 48.841
- type: ndcg_at_5
value: 50.76199999999999
- type: precision_at_1
value: 43.491
- type: precision_at_10
value: 10.315000000000001
- type: precision_at_100
value: 1.6209999999999998
- type: precision_at_1000
value: 0.20500000000000002
- type: precision_at_3
value: 23.462
- type: precision_at_5
value: 16.652
- type: recall_at_1
value: 35.016999999999996
- type: recall_at_10
value: 64.92
- type: recall_at_100
value: 86.605
- type: recall_at_1000
value: 96.174
- type: recall_at_3
value: 50.99
- type: recall_at_5
value: 56.93
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.866
- type: map_at_10
value: 40.438
- type: map_at_100
value: 41.77
- type: map_at_1000
value: 41.913
- type: map_at_3
value: 37.634
- type: map_at_5
value: 39.226
- type: mrr_at_1
value: 37.834
- type: mrr_at_10
value: 46.765
- type: mrr_at_100
value: 47.410000000000004
- type: mrr_at_1000
value: 47.461
- type: mrr_at_3
value: 44.735
- type: mrr_at_5
value: 46.028000000000006
- type: ndcg_at_1
value: 37.834
- type: ndcg_at_10
value: 46.303
- type: ndcg_at_100
value: 50.879
- type: ndcg_at_1000
value: 53.112
- type: ndcg_at_3
value: 42.601
- type: ndcg_at_5
value: 44.384
- type: precision_at_1
value: 37.834
- type: precision_at_10
value: 8.898
- type: precision_at_100
value: 1.4409999999999998
- type: precision_at_1000
value: 0.19499999999999998
- type: precision_at_3
value: 20.977
- type: precision_at_5
value: 14.841
- type: recall_at_1
value: 29.866
- type: recall_at_10
value: 56.06100000000001
- type: recall_at_100
value: 75.809
- type: recall_at_1000
value: 89.875
- type: recall_at_3
value: 44.707
- type: recall_at_5
value: 49.846000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.985
- type: map_at_10
value: 51.165000000000006
- type: map_at_100
value: 52.17
- type: map_at_1000
value: 52.229000000000006
- type: map_at_3
value: 48.089999999999996
- type: map_at_5
value: 49.762
- type: mrr_at_1
value: 44.577
- type: mrr_at_10
value: 54.493
- type: mrr_at_100
value: 55.137
- type: mrr_at_1000
value: 55.167
- type: mrr_at_3
value: 52.079
- type: mrr_at_5
value: 53.518
- type: ndcg_at_1
value: 44.577
- type: ndcg_at_10
value: 56.825
- type: ndcg_at_100
value: 60.842
- type: ndcg_at_1000
value: 62.015
- type: ndcg_at_3
value: 51.699
- type: ndcg_at_5
value: 54.11
- type: precision_at_1
value: 44.577
- type: precision_at_10
value: 9.11
- type: precision_at_100
value: 1.206
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 23.156
- type: precision_at_5
value: 15.737000000000002
- type: recall_at_1
value: 38.985
- type: recall_at_10
value: 70.164
- type: recall_at_100
value: 87.708
- type: recall_at_1000
value: 95.979
- type: recall_at_3
value: 56.285
- type: recall_at_5
value: 62.303
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.137
- type: map_at_10
value: 36.729
- type: map_at_100
value: 37.851
- type: map_at_1000
value: 37.932
- type: map_at_3
value: 34.074
- type: map_at_5
value: 35.398
- type: mrr_at_1
value: 30.621
- type: mrr_at_10
value: 39.007
- type: mrr_at_100
value: 39.961
- type: mrr_at_1000
value: 40.02
- type: mrr_at_3
value: 36.591
- type: mrr_at_5
value: 37.806
- type: ndcg_at_1
value: 30.621
- type: ndcg_at_10
value: 41.772
- type: ndcg_at_100
value: 47.181
- type: ndcg_at_1000
value: 49.053999999999995
- type: ndcg_at_3
value: 36.577
- type: ndcg_at_5
value: 38.777
- type: precision_at_1
value: 30.621
- type: precision_at_10
value: 6.372999999999999
- type: precision_at_100
value: 0.955
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 15.367
- type: precision_at_5
value: 10.531
- type: recall_at_1
value: 28.137
- type: recall_at_10
value: 55.162
- type: recall_at_100
value: 79.931
- type: recall_at_1000
value: 93.67
- type: recall_at_3
value: 41.057
- type: recall_at_5
value: 46.327
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.798
- type: map_at_10
value: 25.267
- type: map_at_100
value: 26.579000000000004
- type: map_at_1000
value: 26.697
- type: map_at_3
value: 22.456
- type: map_at_5
value: 23.912
- type: mrr_at_1
value: 20.771
- type: mrr_at_10
value: 29.843999999999998
- type: mrr_at_100
value: 30.849
- type: mrr_at_1000
value: 30.916
- type: mrr_at_3
value: 27.156000000000002
- type: mrr_at_5
value: 28.518
- type: ndcg_at_1
value: 20.771
- type: ndcg_at_10
value: 30.792
- type: ndcg_at_100
value: 36.945
- type: ndcg_at_1000
value: 39.619
- type: ndcg_at_3
value: 25.52
- type: ndcg_at_5
value: 27.776
- type: precision_at_1
value: 20.771
- type: precision_at_10
value: 5.734
- type: precision_at_100
value: 1.031
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 12.148
- type: precision_at_5
value: 9.055
- type: recall_at_1
value: 16.798
- type: recall_at_10
value: 43.332
- type: recall_at_100
value: 70.016
- type: recall_at_1000
value: 88.90400000000001
- type: recall_at_3
value: 28.842000000000002
- type: recall_at_5
value: 34.37
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.180000000000003
- type: map_at_10
value: 41.78
- type: map_at_100
value: 43.102000000000004
- type: map_at_1000
value: 43.222
- type: map_at_3
value: 38.505
- type: map_at_5
value: 40.443
- type: mrr_at_1
value: 37.824999999999996
- type: mrr_at_10
value: 47.481
- type: mrr_at_100
value: 48.268
- type: mrr_at_1000
value: 48.313
- type: mrr_at_3
value: 44.946999999999996
- type: mrr_at_5
value: 46.492
- type: ndcg_at_1
value: 37.824999999999996
- type: ndcg_at_10
value: 47.827
- type: ndcg_at_100
value: 53.407000000000004
- type: ndcg_at_1000
value: 55.321
- type: ndcg_at_3
value: 42.815
- type: ndcg_at_5
value: 45.363
- type: precision_at_1
value: 37.824999999999996
- type: precision_at_10
value: 8.652999999999999
- type: precision_at_100
value: 1.354
- type: precision_at_1000
value: 0.172
- type: precision_at_3
value: 20.372
- type: precision_at_5
value: 14.591000000000001
- type: recall_at_1
value: 31.180000000000003
- type: recall_at_10
value: 59.894000000000005
- type: recall_at_100
value: 83.722
- type: recall_at_1000
value: 95.705
- type: recall_at_3
value: 45.824
- type: recall_at_5
value: 52.349999999999994
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.66
- type: map_at_10
value: 34.141
- type: map_at_100
value: 35.478
- type: map_at_1000
value: 35.594
- type: map_at_3
value: 30.446
- type: map_at_5
value: 32.583
- type: mrr_at_1
value: 29.909000000000002
- type: mrr_at_10
value: 38.949
- type: mrr_at_100
value: 39.803
- type: mrr_at_1000
value: 39.867999999999995
- type: mrr_at_3
value: 35.921
- type: mrr_at_5
value: 37.753
- type: ndcg_at_1
value: 29.909000000000002
- type: ndcg_at_10
value: 40.012
- type: ndcg_at_100
value: 45.707
- type: ndcg_at_1000
value: 48.15
- type: ndcg_at_3
value: 34.015
- type: ndcg_at_5
value: 37.002
- type: precision_at_1
value: 29.909000000000002
- type: precision_at_10
value: 7.693999999999999
- type: precision_at_100
value: 1.2229999999999999
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 16.323999999999998
- type: precision_at_5
value: 12.306000000000001
- type: recall_at_1
value: 24.66
- type: recall_at_10
value: 52.478
- type: recall_at_100
value: 77.051
- type: recall_at_1000
value: 93.872
- type: recall_at_3
value: 36.382999999999996
- type: recall_at_5
value: 43.903999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.768416666666667
- type: map_at_10
value: 36.2485
- type: map_at_100
value: 37.520833333333336
- type: map_at_1000
value: 37.64033333333334
- type: map_at_3
value: 33.25791666666667
- type: map_at_5
value: 34.877250000000004
- type: mrr_at_1
value: 31.65408333333334
- type: mrr_at_10
value: 40.43866666666667
- type: mrr_at_100
value: 41.301249999999996
- type: mrr_at_1000
value: 41.357499999999995
- type: mrr_at_3
value: 37.938916666666664
- type: mrr_at_5
value: 39.35183333333334
- type: ndcg_at_1
value: 31.65408333333334
- type: ndcg_at_10
value: 41.76983333333334
- type: ndcg_at_100
value: 47.138
- type: ndcg_at_1000
value: 49.33816666666667
- type: ndcg_at_3
value: 36.76683333333333
- type: ndcg_at_5
value: 39.04441666666666
- type: precision_at_1
value: 31.65408333333334
- type: precision_at_10
value: 7.396249999999998
- type: precision_at_100
value: 1.1974166666666666
- type: precision_at_1000
value: 0.15791666666666668
- type: precision_at_3
value: 16.955583333333333
- type: precision_at_5
value: 12.09925
- type: recall_at_1
value: 26.768416666666667
- type: recall_at_10
value: 53.82366666666667
- type: recall_at_100
value: 77.39600000000002
- type: recall_at_1000
value: 92.46300000000001
- type: recall_at_3
value: 39.90166666666667
- type: recall_at_5
value: 45.754000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.369
- type: map_at_10
value: 32.025
- type: map_at_100
value: 33.08
- type: map_at_1000
value: 33.169
- type: map_at_3
value: 29.589
- type: map_at_5
value: 30.894
- type: mrr_at_1
value: 27.301
- type: mrr_at_10
value: 34.64
- type: mrr_at_100
value: 35.556
- type: mrr_at_1000
value: 35.616
- type: mrr_at_3
value: 32.515
- type: mrr_at_5
value: 33.666000000000004
- type: ndcg_at_1
value: 27.301
- type: ndcg_at_10
value: 36.386
- type: ndcg_at_100
value: 41.598
- type: ndcg_at_1000
value: 43.864999999999995
- type: ndcg_at_3
value: 32.07
- type: ndcg_at_5
value: 34.028999999999996
- type: precision_at_1
value: 27.301
- type: precision_at_10
value: 5.782
- type: precision_at_100
value: 0.923
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 13.804
- type: precision_at_5
value: 9.693
- type: recall_at_1
value: 24.369
- type: recall_at_10
value: 47.026
- type: recall_at_100
value: 70.76400000000001
- type: recall_at_1000
value: 87.705
- type: recall_at_3
value: 35.366
- type: recall_at_5
value: 40.077
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.878
- type: map_at_10
value: 25.582
- type: map_at_100
value: 26.848
- type: map_at_1000
value: 26.985
- type: map_at_3
value: 22.997
- type: map_at_5
value: 24.487000000000002
- type: mrr_at_1
value: 22.023
- type: mrr_at_10
value: 29.615000000000002
- type: mrr_at_100
value: 30.656
- type: mrr_at_1000
value: 30.737
- type: mrr_at_3
value: 27.322999999999997
- type: mrr_at_5
value: 28.665000000000003
- type: ndcg_at_1
value: 22.023
- type: ndcg_at_10
value: 30.476999999999997
- type: ndcg_at_100
value: 36.258
- type: ndcg_at_1000
value: 39.287
- type: ndcg_at_3
value: 25.995
- type: ndcg_at_5
value: 28.174
- type: precision_at_1
value: 22.023
- type: precision_at_10
value: 5.657
- type: precision_at_100
value: 1.01
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 12.491
- type: precision_at_5
value: 9.112
- type: recall_at_1
value: 17.878
- type: recall_at_10
value: 41.155
- type: recall_at_100
value: 66.62599999999999
- type: recall_at_1000
value: 88.08200000000001
- type: recall_at_3
value: 28.505000000000003
- type: recall_at_5
value: 34.284
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.369999999999997
- type: map_at_10
value: 36.115
- type: map_at_100
value: 37.346000000000004
- type: map_at_1000
value: 37.449
- type: map_at_3
value: 32.976
- type: map_at_5
value: 34.782000000000004
- type: mrr_at_1
value: 30.784
- type: mrr_at_10
value: 40.014
- type: mrr_at_100
value: 40.913
- type: mrr_at_1000
value: 40.967999999999996
- type: mrr_at_3
value: 37.205
- type: mrr_at_5
value: 38.995999999999995
- type: ndcg_at_1
value: 30.784
- type: ndcg_at_10
value: 41.797000000000004
- type: ndcg_at_100
value: 47.355000000000004
- type: ndcg_at_1000
value: 49.535000000000004
- type: ndcg_at_3
value: 36.29
- type: ndcg_at_5
value: 39.051
- type: precision_at_1
value: 30.784
- type: precision_at_10
value: 7.164
- type: precision_at_100
value: 1.122
- type: precision_at_1000
value: 0.14200000000000002
- type: precision_at_3
value: 16.636
- type: precision_at_5
value: 11.996
- type: recall_at_1
value: 26.369999999999997
- type: recall_at_10
value: 55.010000000000005
- type: recall_at_100
value: 79.105
- type: recall_at_1000
value: 94.053
- type: recall_at_3
value: 40.139
- type: recall_at_5
value: 47.089
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.421
- type: map_at_10
value: 35.253
- type: map_at_100
value: 36.97
- type: map_at_1000
value: 37.195
- type: map_at_3
value: 32.068000000000005
- type: map_at_5
value: 33.763
- type: mrr_at_1
value: 31.423000000000002
- type: mrr_at_10
value: 39.995999999999995
- type: mrr_at_100
value: 40.977999999999994
- type: mrr_at_1000
value: 41.024
- type: mrr_at_3
value: 36.989
- type: mrr_at_5
value: 38.629999999999995
- type: ndcg_at_1
value: 31.423000000000002
- type: ndcg_at_10
value: 41.382000000000005
- type: ndcg_at_100
value: 47.532000000000004
- type: ndcg_at_1000
value: 49.829
- type: ndcg_at_3
value: 35.809000000000005
- type: ndcg_at_5
value: 38.308
- type: precision_at_1
value: 31.423000000000002
- type: precision_at_10
value: 7.885000000000001
- type: precision_at_100
value: 1.609
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 16.469
- type: precision_at_5
value: 12.174
- type: recall_at_1
value: 26.421
- type: recall_at_10
value: 53.618
- type: recall_at_100
value: 80.456
- type: recall_at_1000
value: 94.505
- type: recall_at_3
value: 37.894
- type: recall_at_5
value: 44.352999999999994
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.54
- type: map_at_10
value: 29.468
- type: map_at_100
value: 30.422
- type: map_at_1000
value: 30.542
- type: map_at_3
value: 26.888
- type: map_at_5
value: 27.962999999999997
- type: mrr_at_1
value: 23.29
- type: mrr_at_10
value: 31.176
- type: mrr_at_100
value: 32.046
- type: mrr_at_1000
value: 32.129000000000005
- type: mrr_at_3
value: 28.804999999999996
- type: mrr_at_5
value: 29.868
- type: ndcg_at_1
value: 23.29
- type: ndcg_at_10
value: 34.166000000000004
- type: ndcg_at_100
value: 39.217999999999996
- type: ndcg_at_1000
value: 41.964
- type: ndcg_at_3
value: 28.970000000000002
- type: ndcg_at_5
value: 30.797
- type: precision_at_1
value: 23.29
- type: precision_at_10
value: 5.489999999999999
- type: precision_at_100
value: 0.874
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 12.261
- type: precision_at_5
value: 8.503
- type: recall_at_1
value: 21.54
- type: recall_at_10
value: 47.064
- type: recall_at_100
value: 70.959
- type: recall_at_1000
value: 91.032
- type: recall_at_3
value: 32.828
- type: recall_at_5
value: 37.214999999999996
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.102
- type: map_at_10
value: 17.469
- type: map_at_100
value: 19.244
- type: map_at_1000
value: 19.435
- type: map_at_3
value: 14.257
- type: map_at_5
value: 16.028000000000002
- type: mrr_at_1
value: 22.866
- type: mrr_at_10
value: 33.535
- type: mrr_at_100
value: 34.583999999999996
- type: mrr_at_1000
value: 34.622
- type: mrr_at_3
value: 29.946
- type: mrr_at_5
value: 32.157000000000004
- type: ndcg_at_1
value: 22.866
- type: ndcg_at_10
value: 25.16
- type: ndcg_at_100
value: 32.347
- type: ndcg_at_1000
value: 35.821
- type: ndcg_at_3
value: 19.816
- type: ndcg_at_5
value: 22.026
- type: precision_at_1
value: 22.866
- type: precision_at_10
value: 8.072
- type: precision_at_100
value: 1.5709999999999997
- type: precision_at_1000
value: 0.22200000000000003
- type: precision_at_3
value: 14.701
- type: precision_at_5
value: 11.960999999999999
- type: recall_at_1
value: 10.102
- type: recall_at_10
value: 31.086000000000002
- type: recall_at_100
value: 55.896
- type: recall_at_1000
value: 75.375
- type: recall_at_3
value: 18.343999999999998
- type: recall_at_5
value: 24.102
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.961
- type: map_at_10
value: 16.058
- type: map_at_100
value: 21.878
- type: map_at_1000
value: 23.156
- type: map_at_3
value: 12.206999999999999
- type: map_at_5
value: 13.747000000000002
- type: mrr_at_1
value: 60.5
- type: mrr_at_10
value: 68.488
- type: mrr_at_100
value: 69.02199999999999
- type: mrr_at_1000
value: 69.03200000000001
- type: mrr_at_3
value: 66.792
- type: mrr_at_5
value: 67.62899999999999
- type: ndcg_at_1
value: 49.125
- type: ndcg_at_10
value: 34.827999999999996
- type: ndcg_at_100
value: 38.723
- type: ndcg_at_1000
value: 45.988
- type: ndcg_at_3
value: 40.302
- type: ndcg_at_5
value: 36.781000000000006
- type: precision_at_1
value: 60.5
- type: precision_at_10
value: 26.825
- type: precision_at_100
value: 8.445
- type: precision_at_1000
value: 1.7000000000000002
- type: precision_at_3
value: 43.25
- type: precision_at_5
value: 34.5
- type: recall_at_1
value: 7.961
- type: recall_at_10
value: 20.843
- type: recall_at_100
value: 43.839
- type: recall_at_1000
value: 67.33
- type: recall_at_3
value: 13.516
- type: recall_at_5
value: 15.956000000000001
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.06000000000001
- type: f1
value: 47.21494728335567
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 56.798
- type: map_at_10
value: 67.644
- type: map_at_100
value: 68.01700000000001
- type: map_at_1000
value: 68.038
- type: map_at_3
value: 65.539
- type: map_at_5
value: 66.912
- type: mrr_at_1
value: 61.221000000000004
- type: mrr_at_10
value: 71.97099999999999
- type: mrr_at_100
value: 72.262
- type: mrr_at_1000
value: 72.27
- type: mrr_at_3
value: 70.052
- type: mrr_at_5
value: 71.324
- type: ndcg_at_1
value: 61.221000000000004
- type: ndcg_at_10
value: 73.173
- type: ndcg_at_100
value: 74.779
- type: ndcg_at_1000
value: 75.229
- type: ndcg_at_3
value: 69.291
- type: ndcg_at_5
value: 71.552
- type: precision_at_1
value: 61.221000000000004
- type: precision_at_10
value: 9.449
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 27.467999999999996
- type: precision_at_5
value: 17.744
- type: recall_at_1
value: 56.798
- type: recall_at_10
value: 85.991
- type: recall_at_100
value: 92.973
- type: recall_at_1000
value: 96.089
- type: recall_at_3
value: 75.576
- type: recall_at_5
value: 81.12
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.323
- type: map_at_10
value: 30.279
- type: map_at_100
value: 32.153999999999996
- type: map_at_1000
value: 32.339
- type: map_at_3
value: 26.336
- type: map_at_5
value: 28.311999999999998
- type: mrr_at_1
value: 35.339999999999996
- type: mrr_at_10
value: 44.931
- type: mrr_at_100
value: 45.818999999999996
- type: mrr_at_1000
value: 45.864
- type: mrr_at_3
value: 42.618
- type: mrr_at_5
value: 43.736999999999995
- type: ndcg_at_1
value: 35.339999999999996
- type: ndcg_at_10
value: 37.852999999999994
- type: ndcg_at_100
value: 44.888
- type: ndcg_at_1000
value: 48.069
- type: ndcg_at_3
value: 34.127
- type: ndcg_at_5
value: 35.026
- type: precision_at_1
value: 35.339999999999996
- type: precision_at_10
value: 10.617
- type: precision_at_100
value: 1.7930000000000001
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 22.582
- type: precision_at_5
value: 16.605
- type: recall_at_1
value: 18.323
- type: recall_at_10
value: 44.948
- type: recall_at_100
value: 71.11800000000001
- type: recall_at_1000
value: 90.104
- type: recall_at_3
value: 31.661
- type: recall_at_5
value: 36.498000000000005
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.668
- type: map_at_10
value: 43.669999999999995
- type: map_at_100
value: 44.646
- type: map_at_1000
value: 44.731
- type: map_at_3
value: 40.897
- type: map_at_5
value: 42.559999999999995
- type: mrr_at_1
value: 61.336999999999996
- type: mrr_at_10
value: 68.496
- type: mrr_at_100
value: 68.916
- type: mrr_at_1000
value: 68.938
- type: mrr_at_3
value: 66.90700000000001
- type: mrr_at_5
value: 67.91199999999999
- type: ndcg_at_1
value: 61.336999999999996
- type: ndcg_at_10
value: 52.588
- type: ndcg_at_100
value: 56.389
- type: ndcg_at_1000
value: 58.187999999999995
- type: ndcg_at_3
value: 48.109
- type: ndcg_at_5
value: 50.498
- type: precision_at_1
value: 61.336999999999996
- type: precision_at_10
value: 11.033
- type: precision_at_100
value: 1.403
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 30.105999999999998
- type: precision_at_5
value: 19.954
- type: recall_at_1
value: 30.668
- type: recall_at_10
value: 55.165
- type: recall_at_100
value: 70.169
- type: recall_at_1000
value: 82.12
- type: recall_at_3
value: 45.159
- type: recall_at_5
value: 49.885000000000005
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 78.542
- type: ap
value: 72.50692137216646
- type: f1
value: 78.40630687221642
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 18.613
- type: map_at_10
value: 29.98
- type: map_at_100
value: 31.136999999999997
- type: map_at_1000
value: 31.196
- type: map_at_3
value: 26.339000000000002
- type: map_at_5
value: 28.351
- type: mrr_at_1
value: 19.054
- type: mrr_at_10
value: 30.476
- type: mrr_at_100
value: 31.588
- type: mrr_at_1000
value: 31.641000000000002
- type: mrr_at_3
value: 26.834000000000003
- type: mrr_at_5
value: 28.849000000000004
- type: ndcg_at_1
value: 19.083
- type: ndcg_at_10
value: 36.541000000000004
- type: ndcg_at_100
value: 42.35
- type: ndcg_at_1000
value: 43.9
- type: ndcg_at_3
value: 29.015
- type: ndcg_at_5
value: 32.622
- type: precision_at_1
value: 19.083
- type: precision_at_10
value: 5.914
- type: precision_at_100
value: 0.889
- type: precision_at_1000
value: 0.10200000000000001
- type: precision_at_3
value: 12.483
- type: precision_at_5
value: 9.315
- type: recall_at_1
value: 18.613
- type: recall_at_10
value: 56.88999999999999
- type: recall_at_100
value: 84.207
- type: recall_at_1000
value: 96.20100000000001
- type: recall_at_3
value: 36.262
- type: recall_at_5
value: 44.925
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.77656178750571
- type: f1
value: 94.37966073742972
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.72457820337438
- type: f1
value: 59.11327646329634
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.17753866846
- type: f1
value: 71.22604635414544
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.67787491593813
- type: f1
value: 76.87653151298177
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.3485843514749
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.792796913883617
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.310305659169963
- type: mrr
value: 32.38286775798406
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.968
- type: map_at_10
value: 11.379
- type: map_at_100
value: 14.618999999999998
- type: map_at_1000
value: 16.055
- type: map_at_3
value: 8.34
- type: map_at_5
value: 9.690999999999999
- type: mrr_at_1
value: 43.034
- type: mrr_at_10
value: 51.019999999999996
- type: mrr_at_100
value: 51.63100000000001
- type: mrr_at_1000
value: 51.681
- type: mrr_at_3
value: 49.174
- type: mrr_at_5
value: 50.181
- type: ndcg_at_1
value: 41.176
- type: ndcg_at_10
value: 31.341
- type: ndcg_at_100
value: 29.451
- type: ndcg_at_1000
value: 38.007000000000005
- type: ndcg_at_3
value: 36.494
- type: ndcg_at_5
value: 34.499
- type: precision_at_1
value: 43.034
- type: precision_at_10
value: 23.375
- type: precision_at_100
value: 7.799
- type: precision_at_1000
value: 2.059
- type: precision_at_3
value: 34.675
- type: precision_at_5
value: 30.154999999999998
- type: recall_at_1
value: 4.968
- type: recall_at_10
value: 15.104999999999999
- type: recall_at_100
value: 30.741000000000003
- type: recall_at_1000
value: 61.182
- type: recall_at_3
value: 9.338000000000001
- type: recall_at_5
value: 11.484
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.716
- type: map_at_10
value: 38.32
- type: map_at_100
value: 39.565
- type: map_at_1000
value: 39.602
- type: map_at_3
value: 33.848
- type: map_at_5
value: 36.471
- type: mrr_at_1
value: 26.912000000000003
- type: mrr_at_10
value: 40.607
- type: mrr_at_100
value: 41.589
- type: mrr_at_1000
value: 41.614000000000004
- type: mrr_at_3
value: 36.684
- type: mrr_at_5
value: 39.036
- type: ndcg_at_1
value: 26.883000000000003
- type: ndcg_at_10
value: 46.096
- type: ndcg_at_100
value: 51.513
- type: ndcg_at_1000
value: 52.366
- type: ndcg_at_3
value: 37.549
- type: ndcg_at_5
value: 41.971000000000004
- type: precision_at_1
value: 26.883000000000003
- type: precision_at_10
value: 8.004
- type: precision_at_100
value: 1.107
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 17.516000000000002
- type: precision_at_5
value: 13.019
- type: recall_at_1
value: 23.716
- type: recall_at_10
value: 67.656
- type: recall_at_100
value: 91.413
- type: recall_at_1000
value: 97.714
- type: recall_at_3
value: 45.449
- type: recall_at_5
value: 55.598000000000006
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.486
- type: map_at_10
value: 84.292
- type: map_at_100
value: 84.954
- type: map_at_1000
value: 84.969
- type: map_at_3
value: 81.295
- type: map_at_5
value: 83.165
- type: mrr_at_1
value: 81.16
- type: mrr_at_10
value: 87.31
- type: mrr_at_100
value: 87.423
- type: mrr_at_1000
value: 87.423
- type: mrr_at_3
value: 86.348
- type: mrr_at_5
value: 86.991
- type: ndcg_at_1
value: 81.17
- type: ndcg_at_10
value: 88.067
- type: ndcg_at_100
value: 89.34
- type: ndcg_at_1000
value: 89.43900000000001
- type: ndcg_at_3
value: 85.162
- type: ndcg_at_5
value: 86.752
- type: precision_at_1
value: 81.17
- type: precision_at_10
value: 13.394
- type: precision_at_100
value: 1.5310000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.193
- type: precision_at_5
value: 24.482
- type: recall_at_1
value: 70.486
- type: recall_at_10
value: 95.184
- type: recall_at_100
value: 99.53999999999999
- type: recall_at_1000
value: 99.98700000000001
- type: recall_at_3
value: 86.89
- type: recall_at_5
value: 91.365
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 44.118229475102154
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 48.68049097629063
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.888
- type: map_at_10
value: 12.770999999999999
- type: map_at_100
value: 15.238
- type: map_at_1000
value: 15.616
- type: map_at_3
value: 8.952
- type: map_at_5
value: 10.639999999999999
- type: mrr_at_1
value: 24.099999999999998
- type: mrr_at_10
value: 35.375
- type: mrr_at_100
value: 36.442
- type: mrr_at_1000
value: 36.488
- type: mrr_at_3
value: 31.717000000000002
- type: mrr_at_5
value: 33.722
- type: ndcg_at_1
value: 24.099999999999998
- type: ndcg_at_10
value: 21.438
- type: ndcg_at_100
value: 30.601
- type: ndcg_at_1000
value: 36.678
- type: ndcg_at_3
value: 19.861
- type: ndcg_at_5
value: 17.263
- type: precision_at_1
value: 24.099999999999998
- type: precision_at_10
value: 11.4
- type: precision_at_100
value: 2.465
- type: precision_at_1000
value: 0.392
- type: precision_at_3
value: 18.733
- type: precision_at_5
value: 15.22
- type: recall_at_1
value: 4.888
- type: recall_at_10
value: 23.118
- type: recall_at_100
value: 49.995
- type: recall_at_1000
value: 79.577
- type: recall_at_3
value: 11.398
- type: recall_at_5
value: 15.428
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.33198632617024
- type: cos_sim_spearman
value: 79.09232997136625
- type: euclidean_pearson
value: 81.49986011523868
- type: euclidean_spearman
value: 77.03530620283338
- type: manhattan_pearson
value: 81.4741227286667
- type: manhattan_spearman
value: 76.98641133116311
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.60103674582464
- type: cos_sim_spearman
value: 75.03945035801914
- type: euclidean_pearson
value: 80.82455267481467
- type: euclidean_spearman
value: 70.3317366248871
- type: manhattan_pearson
value: 80.8928091531445
- type: manhattan_spearman
value: 70.43207370945672
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.52453177109315
- type: cos_sim_spearman
value: 83.26431569305103
- type: euclidean_pearson
value: 82.10494657997404
- type: euclidean_spearman
value: 83.41028425949024
- type: manhattan_pearson
value: 82.08669822983934
- type: manhattan_spearman
value: 83.39959776442115
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.67472020277681
- type: cos_sim_spearman
value: 78.61877889763109
- type: euclidean_pearson
value: 80.07878012437722
- type: euclidean_spearman
value: 77.44374494215397
- type: manhattan_pearson
value: 79.95988483102258
- type: manhattan_spearman
value: 77.36018101061366
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 85.55450610494437
- type: cos_sim_spearman
value: 87.03494331841401
- type: euclidean_pearson
value: 81.4319784394287
- type: euclidean_spearman
value: 82.47893040599372
- type: manhattan_pearson
value: 81.32627203699644
- type: manhattan_spearman
value: 82.40660565070675
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 81.51576965454805
- type: cos_sim_spearman
value: 83.0062959588245
- type: euclidean_pearson
value: 79.98888882568556
- type: euclidean_spearman
value: 81.08948911791873
- type: manhattan_pearson
value: 79.77952719568583
- type: manhattan_spearman
value: 80.79471040445408
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.28313046682885
- type: cos_sim_spearman
value: 87.35865211085007
- type: euclidean_pearson
value: 84.11501613667811
- type: euclidean_spearman
value: 82.82038954956121
- type: manhattan_pearson
value: 83.891278147302
- type: manhattan_spearman
value: 82.59947685165902
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.80653738006102
- type: cos_sim_spearman
value: 68.11259151179601
- type: euclidean_pearson
value: 43.16707985094242
- type: euclidean_spearman
value: 58.96200382968696
- type: manhattan_pearson
value: 43.84146858566507
- type: manhattan_spearman
value: 59.05193977207514
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.62068205073571
- type: cos_sim_spearman
value: 84.40071593577095
- type: euclidean_pearson
value: 80.90824726252514
- type: euclidean_spearman
value: 80.54974812534094
- type: manhattan_pearson
value: 80.6759008187939
- type: manhattan_spearman
value: 80.31149103896973
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.13774787530915
- type: mrr
value: 96.22233793802422
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 49.167
- type: map_at_10
value: 59.852000000000004
- type: map_at_100
value: 60.544
- type: map_at_1000
value: 60.577000000000005
- type: map_at_3
value: 57.242000000000004
- type: map_at_5
value: 58.704
- type: mrr_at_1
value: 51.0
- type: mrr_at_10
value: 60.575
- type: mrr_at_100
value: 61.144
- type: mrr_at_1000
value: 61.175000000000004
- type: mrr_at_3
value: 58.667
- type: mrr_at_5
value: 59.599999999999994
- type: ndcg_at_1
value: 51.0
- type: ndcg_at_10
value: 64.398
- type: ndcg_at_100
value: 67.581
- type: ndcg_at_1000
value: 68.551
- type: ndcg_at_3
value: 59.928000000000004
- type: ndcg_at_5
value: 61.986
- type: precision_at_1
value: 51.0
- type: precision_at_10
value: 8.7
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 23.666999999999998
- type: precision_at_5
value: 15.6
- type: recall_at_1
value: 49.167
- type: recall_at_10
value: 77.333
- type: recall_at_100
value: 91.833
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 65.594
- type: recall_at_5
value: 70.52199999999999
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.77227722772277
- type: cos_sim_ap
value: 94.14261011689366
- type: cos_sim_f1
value: 88.37209302325581
- type: cos_sim_precision
value: 89.36605316973414
- type: cos_sim_recall
value: 87.4
- type: dot_accuracy
value: 99.07128712871287
- type: dot_ap
value: 27.325649239129486
- type: dot_f1
value: 33.295838020247466
- type: dot_precision
value: 38.04627249357326
- type: dot_recall
value: 29.599999999999998
- type: euclidean_accuracy
value: 99.74158415841585
- type: euclidean_ap
value: 92.32695359979576
- type: euclidean_f1
value: 86.90534575772439
- type: euclidean_precision
value: 85.27430221366699
- type: euclidean_recall
value: 88.6
- type: manhattan_accuracy
value: 99.74257425742574
- type: manhattan_ap
value: 92.40335687760499
- type: manhattan_f1
value: 86.96507624200687
- type: manhattan_precision
value: 85.57599225556632
- type: manhattan_recall
value: 88.4
- type: max_accuracy
value: 99.77227722772277
- type: max_ap
value: 94.14261011689366
- type: max_f1
value: 88.37209302325581
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 53.113809982945035
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.90915908471812
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.36481271702464
- type: mrr
value: 51.05628236142942
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.311305530381826
- type: cos_sim_spearman
value: 31.22029657606254
- type: dot_pearson
value: 12.157032445910177
- type: dot_spearman
value: 13.275185888551805
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.167
- type: map_at_10
value: 1.113
- type: map_at_100
value: 5.926
- type: map_at_1000
value: 15.25
- type: map_at_3
value: 0.414
- type: map_at_5
value: 0.633
- type: mrr_at_1
value: 64.0
- type: mrr_at_10
value: 74.444
- type: mrr_at_100
value: 74.667
- type: mrr_at_1000
value: 74.679
- type: mrr_at_3
value: 72.0
- type: mrr_at_5
value: 74.0
- type: ndcg_at_1
value: 59.0
- type: ndcg_at_10
value: 51.468
- type: ndcg_at_100
value: 38.135000000000005
- type: ndcg_at_1000
value: 36.946
- type: ndcg_at_3
value: 55.827000000000005
- type: ndcg_at_5
value: 53.555
- type: precision_at_1
value: 64.0
- type: precision_at_10
value: 54.400000000000006
- type: precision_at_100
value: 39.08
- type: precision_at_1000
value: 16.618
- type: precision_at_3
value: 58.667
- type: precision_at_5
value: 56.8
- type: recall_at_1
value: 0.167
- type: recall_at_10
value: 1.38
- type: recall_at_100
value: 9.189
- type: recall_at_1000
value: 35.737
- type: recall_at_3
value: 0.455
- type: recall_at_5
value: 0.73
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.4299999999999997
- type: map_at_10
value: 8.539
- type: map_at_100
value: 14.155999999999999
- type: map_at_1000
value: 15.684999999999999
- type: map_at_3
value: 3.857
- type: map_at_5
value: 5.583
- type: mrr_at_1
value: 26.531
- type: mrr_at_10
value: 40.489999999999995
- type: mrr_at_100
value: 41.772999999999996
- type: mrr_at_1000
value: 41.772999999999996
- type: mrr_at_3
value: 35.034
- type: mrr_at_5
value: 38.81
- type: ndcg_at_1
value: 21.429000000000002
- type: ndcg_at_10
value: 20.787
- type: ndcg_at_100
value: 33.202
- type: ndcg_at_1000
value: 45.167
- type: ndcg_at_3
value: 18.233
- type: ndcg_at_5
value: 19.887
- type: precision_at_1
value: 26.531
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.4079999999999995
- type: precision_at_1000
value: 1.5310000000000001
- type: precision_at_3
value: 19.728
- type: precision_at_5
value: 21.633
- type: recall_at_1
value: 2.4299999999999997
- type: recall_at_10
value: 14.901
- type: recall_at_100
value: 46.422000000000004
- type: recall_at_1000
value: 82.83500000000001
- type: recall_at_3
value: 4.655
- type: recall_at_5
value: 8.092
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 72.90140000000001
- type: ap
value: 15.138716624430662
- type: f1
value: 56.08803013269606
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.85285795132994
- type: f1
value: 60.17575819903709
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 41.125150148437065
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.96751505036657
- type: cos_sim_ap
value: 70.45642872444971
- type: cos_sim_f1
value: 65.75274793133259
- type: cos_sim_precision
value: 61.806361736707686
- type: cos_sim_recall
value: 70.23746701846966
- type: dot_accuracy
value: 77.84466829588126
- type: dot_ap
value: 32.49904328313596
- type: dot_f1
value: 37.903122189387126
- type: dot_precision
value: 25.050951086956523
- type: dot_recall
value: 77.83641160949868
- type: euclidean_accuracy
value: 84.5920009536866
- type: euclidean_ap
value: 68.83700633574043
- type: euclidean_f1
value: 64.92803542871202
- type: euclidean_precision
value: 60.820465545056464
- type: euclidean_recall
value: 69.63060686015831
- type: manhattan_accuracy
value: 84.52643500029802
- type: manhattan_ap
value: 68.63286046599892
- type: manhattan_f1
value: 64.7476540705047
- type: manhattan_precision
value: 62.3291015625
- type: manhattan_recall
value: 67.36147757255937
- type: max_accuracy
value: 84.96751505036657
- type: max_ap
value: 70.45642872444971
- type: max_f1
value: 65.75274793133259
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.65603291031164
- type: cos_sim_ap
value: 85.58148320880878
- type: cos_sim_f1
value: 77.63202920041064
- type: cos_sim_precision
value: 76.68444377675957
- type: cos_sim_recall
value: 78.60332614721281
- type: dot_accuracy
value: 79.71048239996895
- type: dot_ap
value: 59.31114839296281
- type: dot_f1
value: 57.13895527483783
- type: dot_precision
value: 51.331125015335545
- type: dot_recall
value: 64.4287034185402
- type: euclidean_accuracy
value: 86.99305312997244
- type: euclidean_ap
value: 81.87075965254876
- type: euclidean_f1
value: 73.53543008715421
- type: euclidean_precision
value: 72.39964184450082
- type: euclidean_recall
value: 74.70742223591007
- type: manhattan_accuracy
value: 87.04156479217605
- type: manhattan_ap
value: 81.7850497283247
- type: manhattan_f1
value: 73.52951955143475
- type: manhattan_precision
value: 70.15875236030492
- type: manhattan_recall
value: 77.2405297197413
- type: max_accuracy
value: 88.65603291031164
- type: max_ap
value: 85.58148320880878
- type: max_f1
value: 77.63202920041064
---
<h1 align="center">GIST Embedding v0 - all-MiniLM-L6-v2</h1>
*GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning*
The model is fine-tuned on top of the [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) using the [MEDI dataset](https://github.com/xlang-ai/instructor-embedding.git) augmented with mined triplets from the [MTEB Classification](https://huggingface.co/mteb) training dataset (excluding data from the Amazon Polarity Classification task).
The model does not require any instruction for generating embeddings. This means that queries for retrieval tasks can be directly encoded without crafting instructions.
Technical paper: [GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning](https://arxiv.org/abs/2402.16829)
# Data
The dataset used is a compilation of the MEDI and MTEB Classification training datasets. Third-party datasets may be subject to additional terms and conditions under their associated licenses. A HuggingFace Dataset version of the compiled dataset, together with the specific revision used to train the model, is available:
- Dataset: [avsolatorio/medi-data-mteb_avs_triplets](https://huggingface.co/datasets/avsolatorio/medi-data-mteb_avs_triplets)
- Revision: 238a0499b6e6b690cc64ea56fde8461daa8341bb
The dataset contains a `task_type` key, which can be used to select only the mteb classification tasks (prefixed with `mteb_`).
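A minimal sketch of selecting those rows with the `datasets` library is shown below; the split name and the exact column semantics are assumptions based on the description above.
```python
from datasets import load_dataset

# Load the compiled MEDI + MTEB triplets at the revision used for training.
data = load_dataset(
    "avsolatorio/medi-data-mteb_avs_triplets",
    revision="238a0499b6e6b690cc64ea56fde8461daa8341bb",
    split="train",
)

# Keep only rows whose task_type carries the mteb_ prefix (the MTEB classification tasks).
mteb_triplets = data.filter(lambda row: row["task_type"].startswith("mteb_"))
print(len(mteb_triplets))
```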
The **MEDI Dataset** is published in the following paper: [One Embedder, Any Task: Instruction-Finetuned Text Embeddings](https://arxiv.org/abs/2212.09741).
Compared with the base model, the MTEB benchmark results of the GIST embedding model suggest that the fine-tuning dataset has perturbed the model considerably, resulting in significant improvements on certain tasks while degrading performance on others.
The retrieval performance for the TRECCOVID task is of note. The fine-tuning dataset does not contain significant knowledge about COVID-19, which could have caused the observed performance degradation. We found some evidence, detailed in the paper, that thematic coverage of the fine-tuning data can affect downstream performance.
# Usage
The model can be easily loaded using the Sentence Transformers library.
```python
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer
revision = None # Replace with the specific revision to ensure reproducibility if the model is updated.
model = SentenceTransformer("avsolatorio/GIST-all-MiniLM-L6-v2", revision=revision)
texts = [
"Illustration of the REaLTabFormer model. The left block shows the non-relational tabular data model using GPT-2 with a causal LM head. In contrast, the right block shows how a relational dataset's child table is modeled using a sequence-to-sequence (Seq2Seq) model. The Seq2Seq model uses the observations in the parent table to condition the generation of the observations in the child table. The trained GPT-2 model on the parent table, with weights frozen, is also used as the encoder in the Seq2Seq model.",
"Predicting human mobility holds significant practical value, with applications ranging from enhancing disaster risk planning to simulating epidemic spread. In this paper, we present the GeoFormer, a decoder-only transformer model adapted from the GPT architecture to forecast human mobility.",
"As the economies of Southeast Asia continue adopting digital technologies, policy makers increasingly ask how to prepare the workforce for emerging labor demands. However, little is known about the skills that workers need to adapt to these changes"
]
# Compute embeddings
embeddings = model.encode(texts, convert_to_tensor=True)
# Compute cosine-similarity for each pair of sentences
scores = F.cosine_similarity(embeddings.unsqueeze(1), embeddings.unsqueeze(0), dim=-1)
print(scores.cpu().numpy())
```
# Training Parameters
Below are the training parameters used to fine-tune the model:
```
Epochs = 40
Warmup ratio = 0.1
Learning rate = 5e-6
Batch size = 16
Checkpoint step = 102000
Contrastive loss temperature = 0.01
```
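For orientation, the sketch below shows how these hyperparameters could map onto a generic `sentence-transformers` contrastive fine-tuning loop. It is not the GISTEmbed guided-negative-selection procedure itself: `MultipleNegativesRankingLoss` is used as a stand-in loss, the triplet data is a placeholder, and the temperature of 0.01 is expressed as a similarity scale of 1 / 0.01 = 100.
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Placeholder triplets; in practice these would come from the compiled MEDI/MTEB dataset.
train_examples = [
    InputExample(texts=["a query", "a relevant passage", "a hard negative passage"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# A contrastive temperature of 0.01 corresponds to a cosine-similarity scale of 100.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=100.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=40,
    warmup_steps=int(0.1 * 40 * len(train_dataloader)),
    optimizer_params={"lr": 5e-6},
)
```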
# Evaluation
The model was evaluated using the [MTEB Evaluation](https://huggingface.co/mteb) suite.
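A minimal sketch of re-running a single task with the `mteb` package is shown below; the task choice is arbitrary, while the scores reported above cover the full suite.
```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("avsolatorio/GIST-all-MiniLM-L6-v2")

# Evaluate on one classification task and write the scores to disk.
evaluation = MTEB(tasks=["Banking77Classification"])
results = evaluation.run(model, output_folder="results/GIST-all-MiniLM-L6-v2")
print(results)
```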
# Citation
Please cite our work if you use GISTEmbed or the datasets we published in your projects or research. 🤗
```
@article{solatorio2024gistembed,
title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
author={Aivin V. Solatorio},
journal={arXiv preprint arXiv:2402.16829},
year={2024},
URL={https://arxiv.org/abs/2402.16829},
eprint={2402.16829},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
# Acknowledgements
This work is supported by the "KCP IV - Exploring Data Use in the Development Economics Literature using Large Language Models (AI and LLMs)" project funded by the [Knowledge for Change Program (KCP)](https://www.worldbank.org/en/programs/knowledge-for-change) of the World Bank - RA-P503405-RESE-TF0C3444.
The findings, interpretations, and conclusions expressed in this material are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent. |
LiheYoung/depth_anything_vitl14 | LiheYoung | "2024-01-25T08:07:57Z" | 224,116 | 37 | transformers | [
"transformers",
"pytorch",
"depth_anything",
"depth-estimation",
"arxiv:2401.10891",
"endpoints_compatible",
"region:us"
] | depth-estimation | "2024-01-23T07:33:54Z" | ---
tags:
- depth_anything
- depth-estimation
---
# Depth Anything model, large
The model card for our paper [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891).
You may also try our [demo](https://huggingface.co/spaces/LiheYoung/Depth-Anything) and visit our [project page](https://depth-anything.github.io/).
## Installation
First, install the Depth Anything package:
```
git clone https://github.com/LiheYoung/Depth-Anything
cd Depth-Anything
pip install -r requirements.txt
```
## Usage
Here's how to run the model:
```python
import numpy as np
from PIL import Image
import cv2
import torch
from depth_anything.dpt import DepthAnything
from depth_anything.util.transform import Resize, NormalizeImage, PrepareForNet
from torchvision.transforms import Compose
# Load the pretrained Depth Anything (ViT-L/14) checkpoint.
model = DepthAnything.from_pretrained("LiheYoung/depth_anything_vitl14")

# Preprocessing: resize so both sides are multiples of 14, normalize with ImageNet
# statistics, and convert to a CHW float32 array.
transform = Compose([
Resize(
width=518,
height=518,
resize_target=False,
keep_aspect_ratio=True,
ensure_multiple_of=14,
resize_method='lower_bound',
image_interpolation_method=cv2.INTER_CUBIC,
),
NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
PrepareForNet(),
])
# Load an image, scale pixel values to [0, 1], apply the preprocessing transform,
# and add a batch dimension before running inference.
image = Image.open("...")
image = np.array(image) / 255.0
image = transform({'image': image})['image']
image = torch.from_numpy(image).unsqueeze(0)

# Forward pass: returns a relative depth map.
depth = model(image)
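
# --- Optional post-processing (illustrative sketch, not part of the original snippet) ---
# `depth` is a relative depth map at the network's input resolution. A common next step
# is to resize it back to the source image size and rescale it to 8-bit for a quick
# visualization; the target size below reuses the preprocessed tensor's spatial size as
# a stand-in, so replace it with your image's original (height, width).
import torch.nn.functional as F

depth = F.interpolate(depth.unsqueeze(1), size=image.shape[-2:], mode="bicubic", align_corners=False).squeeze()
depth = depth.detach().cpu().numpy()
depth = ((depth - depth.min()) / (depth.max() - depth.min()) * 255.0).astype(np.uint8)
Image.fromarray(depth).save("depth_visualization.png")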
``` |
dbmdz/bert-base-german-uncased | dbmdz | "2023-09-06T22:19:33Z" | 223,354 | 18 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language: de
license: mit
---
# 🤗 + 📚 dbmdz German BERT models
In this repository, the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open-sources additional German BERT models 🎉
# German BERT
## Stats
In addition to the recently released [German BERT](https://deepset.ai/german-bert)
model by [deepset](https://deepset.ai/), we provide another German-language model.
The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus,
Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with
a size of 16GB and 2,350,234,427 tokens.
For sentence splitting, we use [spacy](https://spacy.io/). Our preprocessing steps
(sentence piece model for vocab generation) follow those used for training
[SciBERT](https://github.com/allenai/scibert). The model was trained with an initial
sequence length of 512 subwords for 1.5M steps.
This release includes both cased and uncased models.
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `bert-base-german-dbmdz-cased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt)
| `bert-base-german-dbmdz-uncased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt)
## Usage
With Transformers >= 2.3, our German BERT models can be loaded as follows:
```python
from transformers import AutoModel, AutoTokenizer
# Replace "cased" with "uncased" in the model id to load the uncased variant.
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased")
```
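For a quick sanity check of the uncased model hosted here, the fill-mask pipeline can be used directly; the example sentence below is arbitrary.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="dbmdz/bert-base-german-uncased")
print(unmasker("heute ist ein schöner [MASK]."))
```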
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/fine-tuned-berts-seq).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
Intel/dpt-hybrid-midas | Intel | "2024-02-09T08:58:56Z" | 222,683 | 68 | transformers | [
"transformers",
"pytorch",
"dpt",
"depth-estimation",
"vision",
"arxiv:2103.13413",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | depth-estimation | "2022-12-06T09:12:55Z" | ---
license: apache-2.0
tags:
- vision
- depth-estimation
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
model-index:
- name: dpt-hybrid-midas
results:
- task:
type: monocular-depth-estimation
name: Monocular Depth Estimation
dataset:
type: MIX-6
name: MIX-6
metrics:
- type: Zero-shot transfer
value: 11.06
name: Zero-shot transfer
config: Zero-shot transfer
verified: false
---
## Model Details: DPT-Hybrid (also known as MiDaS 3.0)
Dense Prediction Transformer (DPT) model trained on 1.4 million images for monocular depth estimation.
It was introduced in the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Ranftl et al. (2021) and first released in [this repository](https://github.com/isl-org/DPT).
DPT uses the Vision Transformer (ViT) as backbone and adds a neck + head on top for monocular depth estimation.
![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dpt_architecture.jpg)
This repository hosts the "hybrid" version of the model as stated in the paper. DPT-Hybrid diverges from DPT by using [ViT-hybrid](https://huggingface.co/google/vit-hybrid-base-bit-384) as a backbone and taking some activations from the backbone.
The model card has been written in combination by the Hugging Face team and Intel.
| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | December 22, 2022 |
| Version | 1 |
| Type | Computer Vision - Monocular Depth Estimation |
| Paper or Other Resources | [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) and [GitHub Repo](https://github.com/isl-org/DPT) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/dpt-hybrid-midas/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the raw model for zero-shot monocular depth estimation. See the [model hub](https://huggingface.co/models?search=dpt) to look for fine-tuned versions on a task that interests you. |
| Primary intended users | Anyone doing monocular depth estimation |
| Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.|
### How to use
Here is how to use this model for zero-shot depth estimation on an image:
```python
from PIL import Image
import numpy as np
import requests
import torch
from transformers import DPTImageProcessor, DPTForDepthEstimation
image_processor = DPTImageProcessor.from_pretrained("Intel/dpt-hybrid-midas")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas", low_cpu_mem_usage=True)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predicted_depth = outputs.predicted_depth
# interpolate to original size
prediction = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
)
# visualize the prediction
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
depth.show()
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/dpt).
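Alternatively, recent versions of 🤗 Transformers ship a `depth-estimation` pipeline that wraps the pre- and post-processing shown above. The snippet below is a minimal sketch (not part of the original card) and assumes a Transformers release that includes this pipeline:
```python
from transformers import pipeline
from PIL import Image
import requests

# Sketch: the pipeline handles resizing, inference and interpolation internally.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-hybrid-midas")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

result = depth_estimator(image)
result["depth"].show()  # PIL image containing the predicted depth map
```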
| Factors | Description |
| ----------- | ----------- |
| Groups | Multiple datasets compiled together |
| Instrumentation | - |
| Environment | Inference completed on Intel Xeon Platinum 8280 CPU @ 2.70GHz with 8 physical cores and an NVIDIA RTX 2080 GPU. |
| Card Prompts | Model deployment on alternate hardware and software will change model performance |
| Metrics | Description |
| ----------- | ----------- |
| Model performance measures | Zero-shot Transfer |
| Decision thresholds | - |
| Approaches to uncertainty and variability | - |
| Training and Evaluation Data | Description |
| ----------- | ----------- |
| Datasets | The dataset is called MIX 6, and contains around 1.4M images. The model was initialized with ImageNet-pretrained weights.|
| Motivation | To build a robust monocular depth prediction network |
| Preprocessing | "We resize the image such that the longer side is 384 pixels and train on random square crops of size 384. ... We perform random horizontal flips for data augmentation." See [Ranftl et al. (2021)](https://arxiv.org/abs/2103.13413) for more details. |
## Quantitative Analyses
| Model | Training set | DIW WHDR | ETH3D AbsRel | Sintel AbsRel | KITTI δ>1.25 | NYU δ>1.25 | TUM δ>1.25 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DPT - Large | MIX 6 | 10.82 (-13.2%) | 0.089 (-31.2%) | 0.270 (-17.5%) | 8.46 (-64.6%) | 8.32 (-12.9%) | 9.97 (-30.3%) |
| DPT - Hybrid | MIX 6 | 11.06 (-11.2%) | 0.093 (-27.6%) | 0.274 (-16.2%) | 11.56 (-51.6%) | 8.69 (-9.0%) | 10.89 (-23.2%) |
| MiDaS | MIX 6 | 12.95 (+3.9%) | 0.116 (-10.5%) | 0.329 (+0.5%) | 16.08 (-32.7%) | 8.71 (-8.8%) | 12.51 (-12.5%)
| MiDaS [30] | MIX 5 | 12.46 | 0.129 | 0.327 | 23.90 | 9.55 | 14.29 |
| Li [22] | MD [22] | 23.15 | 0.181 | 0.385 | 36.29 | 27.52 | 29.54 |
| Li [21] | MC [21] | 26.52 | 0.183 | 0.405 | 47.94 | 18.57 | 17.71 |
| Wang [40] | WS [40] | 19.09 | 0.205 | 0.390 | 31.92 | 29.57 | 20.18 |
| Xian [45] | RW [45] | 14.59 | 0.186 | 0.422 | 34.08 | 27.00 | 25.02 |
| Casser [5] | CS [8] | 32.80 | 0.235 | 0.422 | 21.15 | 39.58 | 37.18 |
Table 1. Comparison to the state of the art on monocular depth estimation. We evaluate zero-shot cross-dataset transfer according to the
protocol defined in [30]. Relative performance is computed with respect to the original MiDaS model [30]. Lower is better for all metrics. ([Ranftl et al., 2021](https://arxiv.org/abs/2103.13413))
| Ethical Considerations | Description |
| ----------- | ----------- |
| Data | The training data come from multiple image datasets compiled together. |
| Human life | The model is not intended to inform decisions central to human life or flourishing. It is an aggregated set of monocular depth image datasets. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | The extent of the risks involved by using the model remain unknown. |
| Use cases | - |
| Caveats and Recommendations |
| ----------- |
| Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. There are no additional caveats or recommendations for this model. |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-13413,
author = {Ren{\'{e}} Ranftl and
Alexey Bochkovskiy and
Vladlen Koltun},
title = {Vision Transformers for Dense Prediction},
journal = {CoRR},
volume = {abs/2103.13413},
year = {2021},
url = {https://arxiv.org/abs/2103.13413},
eprinttype = {arXiv},
eprint = {2103.13413},
timestamp = {Wed, 07 Apr 2021 15:31:46 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-13413.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
alisawuffles/roberta-large-wanli | alisawuffles | "2023-06-14T04:58:48Z" | 222,264 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:alisawuffles/WANLI",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-30T20:00:10Z" | ---
language:
- en
tags:
- text-classification
widget:
- text: "I almost forgot to eat lunch.</s></s>I didn't forget to eat lunch."
- text: "I almost forgot to eat lunch.</s></s>I forgot to eat lunch."
- text: "I ate lunch.</s></s>I almost forgot to eat lunch."
datasets:
- alisawuffles/WANLI
---
This is a roberta-large model fine-tuned on WANLI, the Worker-AI Collaborative NLI dataset ([Liu et al., 2022](https://aclanthology.org/2022.findings-emnlp.508/)). It outperforms the `roberta-large-mnli` model on eight out-of-domain test sets, including by 11% on HANS and 9% on Adversarial NLI.
### How to use
```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification
model = RobertaForSequenceClassification.from_pretrained('alisawuffles/roberta-large-wanli')
tokenizer = RobertaTokenizer.from_pretrained('alisawuffles/roberta-large-wanli')
x = tokenizer("I almost forgot to eat lunch.", "I didn't forget to eat lunch.", return_tensors='pt', max_length=128, truncation=True)
logits = model(**x).logits
probs = logits.softmax(dim=1).squeeze(0)
label_id = torch.argmax(probs).item()
prediction = model.config.id2label[label_id]
```
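To inspect the full probability distribution over the three NLI labels rather than only the argmax, the example above can be extended as sketched below (this addition is not part of the original card):
```python
# Sketch: map every class probability to its label name (entailment / neutral / contradiction).
label_probs = {model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)}
print(label_probs)
```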
### Citation
```
@inproceedings{liu-etal-2022-wanli,
title = "{WANLI}: Worker and {AI} Collaboration for Natural Language Inference Dataset Creation",
author = "Liu, Alisa and
Swayamdipta, Swabha and
Smith, Noah A. and
Choi, Yejin",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.508",
pages = "6826--6847",
abstract = "A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. We introduce a novel approach for dataset creation based on worker and AI collaboration, which brings together the generative strength of language models and the evaluative strength of humans. Starting with an existing dataset, MultiNLI for natural language inference (NLI), our approach uses dataset cartography to automatically identify examples that demonstrate challenging reasoning patterns, and instructs GPT-3 to compose new examples with similar patterns. Machine generated examples are then automatically filtered, and finally revised and labeled by human crowdworkers. The resulting dataset, WANLI, consists of 107,885 NLI examples and presents unique empirical strengths over existing NLI datasets. Remarkably, training a model on WANLI improves performance on eight out-of-domain test sets we consider, including by 11{\%} on HANS and 9{\%} on Adversarial NLI, compared to training on the 4x larger MultiNLI. Moreover, it continues to be more effective than MultiNLI augmented with other NLI datasets. Our results demonstrate the promise of leveraging natural language generation techniques and re-imagining the role of humans in the dataset creation process.",
}
``` |
jonatasgrosman/wav2vec2-large-xlsr-53-greek | jonatasgrosman | "2022-12-14T01:56:48Z" | 222,063 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"el",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language: el
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Greek by Jonatas Grosman
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice el
type: common_voice
args: el
metrics:
- name: Test WER
type: wer
value: 11.62
- name: Test CER
type: cer
value: 3.36
---
# Fine-tuned XLSR-53 large model for speech recognition in Greek
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-greek")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "el"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-greek"
SAMPLES = 5
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
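If you only need transcriptions and not the intermediate logits, the `automatic-speech-recognition` pipeline from 🤗 Transformers offers a shorter path. This is a sketch added for convenience (not part of the original card); the audio path is a placeholder:
```python
from transformers import pipeline

# Sketch: CTC decoding is handled internally; the audio is loaded and resampled to 16 kHz by the pipeline.
asr = pipeline("automatic-speech-recognition", model="jonatasgrosman/wav2vec2-large-xlsr-53-greek")

print(asr("/path/to/file.mp3")["text"])
```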
| Reference | Prediction |
| ------------- | ------------- |
| ΤΟ ΒΑΣΙΛΌΠΟΥΛΟ, ΠΟΥ ΜΟΙΆΖΕΙ ΛΕΟΝΤΑΡΆΚΙ ΚΑΙ ΑΕΤΟΥΔΆΚΙ | ΤΟ ΒΑΣΙΛΌΠΟΥΛΟ ΠΟΥ ΜΙΑΣΕ ΛΙΟΝΤΑΡΑΚΉ ΚΑΙ ΑΪΤΟΥΔΆΚΙ |
| ΣΥΝΆΜΑ ΞΕΠΡΌΒΑΛΑΝ ΑΠΌ ΜΈΣΑ ΑΠΌ ΤΑ ΔΈΝΤΡΑ, ΔΕΞΙΆ, ΑΡΜΑΤΩΜΈΝΟΙ ΚΑΒΑΛΑΡΈΟΙ. | ΣΥΝΆΜΑ ΚΑΙ ΤΡΌΒΑΛΑΝ ΑΠΌ ΜΈΣΑ ΑΠΌ ΤΑ ΔΈΝΤΡΑ ΔΕΞΙΆ ΑΡΜΑΤΩΜΈΝΟΙ ΚΑΒΑΛΑΡΈΟΙ |
| ΤΑ ΣΥΣΚΕΥΑΣΜΈΝΑ ΒΙΟΛΟΓΙΚΆ ΛΑΧΑΝΙΚΆ ΔΕΝ ΠΕΡΙΈΧΟΥΝ ΣΥΝΤΗΡΗΤΙΚΆ ΚΑΙ ΟΡΜΌΝΕΣ | ΤΑ ΣΥΣΚΕΦΑΣΜΈΝΑ ΒΙΟΛΟΓΙΚΆ ΛΑΧΑΝΙΚΆ ΔΕΝ ΠΕΡΙΈΧΟΥΝ ΣΙΔΗΡΗΤΙΚΆ ΚΑΙ ΟΡΜΌΝΕΣ |
| ΑΚΟΛΟΥΘΉΣΕΤΕ ΜΕ! | ΑΚΟΛΟΥΘΉΣΤΕ ΜΕ |
| ΚΑΙ ΠΟΎ ΜΠΟΡΏ ΝΑ ΤΟΝ ΒΡΩ; | Ε ΠΟΎ ΜΠΟΡΏ ΝΑ ΤΙ ΕΒΡΩ |
| ΝΑΙ! ΑΠΟΚΡΊΘΗΚΕ ΤΟ ΠΑΙΔΊ | ΝΑΙ ΑΠΟΚΡΊΘΗΚΕ ΤΟ ΠΑΙΔΊ |
| ΤΟ ΠΑΛΆΤΙ ΜΟΥ ΤΟ ΠΡΟΜΉΘΕΥΕ. | ΤΟ ΠΑΛΆΤΙ ΜΟΥ ΤΟ ΠΡΟΜΉΘΕΥΕ |
| ΉΛΘΕ ΜΉΝΥΜΑ ΑΠΌ ΤΟ ΘΕΊΟ ΒΑΣΙΛΙΆ; | ΉΛΘΑ ΜΕΊΝΕΙ ΜΕ ΑΠΌ ΤΟ ΘΕΊΟ ΒΑΣΊΛΙΑ |
| ΠΑΡΑΚΆΤΩ, ΈΝΑ ΡΥΆΚΙ ΜΟΥΡΜΟΎΡΙΖΕ ΓΛΥΚΆ, ΚΥΛΏΝΤΑΣ ΤΑ ΚΡΥΣΤΑΛΛΈΝΙΑ ΝΕΡΆ ΤΟΥ ΑΝΆΜΕΣΑ ΣΤΑ ΠΥΚΝΆ ΧΑΜΌΔΕΝΤΡΑ. | ΠΑΡΑΚΆΤΩ ΈΝΑ ΡΥΆΚΙ ΜΟΥΡΜΟΎΡΙΖΕ ΓΛΥΚΆ ΚΥΛΏΝΤΑΣ ΤΑ ΚΡΥΣΤΑΛΛΈΝΙΑ ΝΕΡΆ ΤΟΥ ΑΝΆΜΕΣΑ ΣΤΑ ΠΥΚΡΆ ΧΑΜΌΔΕΝΤΡΑ |
| ΠΡΆΓΜΑΤΙ, ΕΊΝΑΙ ΑΣΤΕΊΟ ΝΑ ΠΆΡΕΙ Ο ΔΙΆΒΟΛΟΣ | ΠΡΆΓΜΑΤΗ ΕΊΝΑΙ ΑΣΤΕΊΟ ΝΑ ΠΆΡΕΙ Ο ΔΙΆΒΟΛΟΣ |
## Evaluation
The model can be evaluated as follows on the Greek test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "el"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-greek"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\\\", "º", "−", "^", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-22). Note that the table below may show different results from those already reported; this may be caused by specific details of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| lighteternal/wav2vec2-large-xlsr-53-greek | **10.13%** | **2.66%** |
| jonatasgrosman/wav2vec2-large-xlsr-53-greek | 11.62% | 3.36% |
| vasilis/wav2vec2-large-xlsr-53-greek | 19.09% | 5.88% |
| PereLluis13/wav2vec2-large-xlsr-53-greek | 20.16% | 5.71% |
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-greek,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {G}reek},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-greek}},
year={2021}
}
``` |
cagliostrolab/animagine-xl-3.1 | cagliostrolab | "2024-03-18T11:11:14Z" | 221,298 | 474 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"en",
"base_model:cagliostrolab/animagine-xl-3.0",
"license:other",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-03-13T09:40:48Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
tags:
- text-to-image
- stable-diffusion
- safetensors
- stable-diffusion-xl
base_model: cagliostrolab/animagine-xl-3.0
widget:
- text: 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck, masterpiece, best quality, very aesthetic, absurdres
  parameters:
    negative_prompt: nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]
  example_title: 1girl
- text: 1boy, male focus, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck, masterpiece, best quality, very aesthetic, absurdres
  parameters:
    negative_prompt: nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]
  example_title: 1boy
---
<style>
.title-container {
display: flex;
justify-content: center;
align-items: center;
height: 100vh; /* Adjust this value to position the title vertically */
}
.title {
font-size: 2.5em;
text-align: center;
color: #333;
font-family: 'Helvetica Neue', sans-serif;
text-transform: uppercase;
letter-spacing: 0.1em;
padding: 0.5em 0;
background: transparent;
}
.title span {
background: -webkit-linear-gradient(45deg, #7ed56f, #28b485);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
}
.custom-table {
table-layout: fixed;
width: 100%;
border-collapse: collapse;
margin-top: 2em;
}
.custom-table td {
width: 50%;
vertical-align: top;
padding: 10px;
box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15);
}
.custom-image-container {
position: relative;
width: 100%;
margin-bottom: 0em;
overflow: hidden;
border-radius: 10px;
transition: transform .7s;
/* Smooth transition for the container */
}
.custom-image-container:hover {
transform: scale(1.05);
/* Scale the container on hover */
}
.custom-image {
width: 100%;
height: auto;
object-fit: cover;
border-radius: 10px;
transition: transform .7s;
margin-bottom: 0em;
}
.nsfw-filter {
filter: blur(8px); /* Apply a blur effect */
transition: filter 0.3s ease; /* Smooth transition for the blur effect */
}
.custom-image-container:hover .nsfw-filter {
filter: none; /* Remove the blur effect on hover */
}
.overlay {
position: absolute;
bottom: 0;
left: 0;
right: 0;
color: white;
width: 100%;
height: 40%;
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
font-size: 1vw;
font-style: bold;
text-align: center;
opacity: 0;
/* Keep the text fully opaque */
background: linear-gradient(0deg, rgba(0, 0, 0, 0.8) 60%, rgba(0, 0, 0, 0) 100%);
transition: opacity .5s;
}
.custom-image-container:hover .overlay {
opacity: 1;
}
.overlay-text {
background: linear-gradient(45deg, #7ed56f, #28b485);
-webkit-background-clip: text;
color: transparent;
  text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.7);
}
.overlay-subtext {
font-size: 0.75em;
margin-top: 0.5em;
font-style: italic;
}
.overlay,
.overlay-subtext {
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5);
}
</style>
<h1 class="title">
<span>Animagine XL 3.1</span>
</h1>
<table class="custom-table">
<tr>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/yq_5AWegnLsGyCYyqJ-1G.png" alt="sample1">
</div>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/sp6w1elvXVTbckkU74v3o.png" alt="sample4">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/OYBuX1XzffN7Pxi4c75JV.png" alt="sample2">
</div>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/ytT3Oaf-atbqrnPIqz_dq.png" alt="sample3">
      </div>
    </td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/0oRq204okFxRGECmrIK6d.png" alt="sample1">
</div>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/DW51m0HlDuAlXwu8H8bIS.png" alt="sample4">
</div>
</td>
</tr>
</table>
**Animagine XL 3.1** is an update in the Animagine XL V3 series, enhancing the previous version, Animagine XL 3.0. This open-source, anime-themed text-to-image model has been improved for generating anime-style images with higher quality. It includes a broader range of characters from well-known anime series, an optimized dataset, and new aesthetic tags for better image creation. Built on Stable Diffusion XL, Animagine XL 3.1 aims to be a valuable resource for anime fans, artists, and content creators by producing accurate and detailed representations of anime characters.
## Model Details
- **Developed by**: [Cagliostro Research Lab](https://huggingface.co/cagliostrolab)
- **In collaboration with**: [SeaArt.ai](https://www.seaart.ai/)
- **Model type**: Diffusion-based text-to-image generative model
- **Model Description**: Animagine XL 3.1 generates high-quality anime images from textual prompts. It boasts enhanced hand anatomy, improved concept understanding, and advanced prompt interpretation.
- **License**: [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/)
- **Fine-tuned from**: [Animagine XL 3.0](https://huggingface.co/cagliostrolab/animagine-xl-3.0)
## Gradio & Colab Integration
Try the demo powered by Gradio in Huggingface Spaces: [![Open In Spaces](https://img.shields.io/badge/🤗-Open%20In%20Spaces-blue.svg)](https://huggingface.co/spaces/cagliostrolab/animagine-xl-3.1)
Or open the demo in Google Colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https%3A//huggingface.co/spaces/cagliostrolab/animagine-xl-3.1/blob/main/demo.ipynb)
## 🧨 Diffusers Installation
First install the required libraries:
```bash
pip install diffusers transformers accelerate safetensors --upgrade
```
Then run image generation with the following example code:
```python
import torch
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained(
"cagliostrolab/animagine-xl-3.1",
torch_dtype=torch.float16,
use_safetensors=True,
)
pipe.to('cuda')
prompt = "1girl, souryuu asuka langley, neon genesis evangelion, solo, upper body, v, smile, looking at viewer, outdoors, night"
negative_prompt = "nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]"
image = pipe(
prompt,
negative_prompt=negative_prompt,
width=832,
height=1216,
guidance_scale=7,
num_inference_steps=28
).images[0]
image.save("./output/asuka_test.png")
```
## Usage Guidelines
### Tag Ordering
For optimal results, it's recommended to follow the structured prompt template below, since the model was trained with tags in this order:
```
1girl/1boy, character name, from what series, everything else in any order.
```
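For example, a prompt that follows this ordering could look like the line below (this concrete example is ours, reusing the character prompt from the Diffusers snippet above together with the quality tags described later):
```
1girl, souryuu asuka langley, neon genesis evangelion, solo, upper body, v, smile, looking at viewer, outdoors, night, masterpiece, best quality, very aesthetic, absurdres
```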
## Special Tags
Animagine XL 3.1 utilizes special tags to steer the result toward quality, rating, creation date and aesthetic. While the model can generate images without these tags, using them can help achieve better results.
### Quality Modifiers
Quality tags now consider both scores and post ratings to ensure a balanced quality distribution. We've refined labels for greater clarity, such as changing 'high quality' to 'great quality'.
| Quality Modifier | Score Criterion |
|------------------|-------------------|
| `masterpiece` | > 95% |
| `best quality` | > 85% & ≤ 95% |
| `great quality` | > 75% & ≤ 85% |
| `good quality` | > 50% & ≤ 75% |
| `normal quality` | > 25% & ≤ 50% |
| `low quality` | > 10% & ≤ 25% |
| `worst quality` | ≤ 10% |
### Rating Modifiers
We've also streamlined our rating tags for simplicity and clarity, aiming to establish global rules that can be applied across different models. For example, the tag 'rating: general' is now simply 'general', and 'rating: sensitive' has been condensed to 'sensitive'.
| Rating Modifier | Rating Criterion |
|-------------------|------------------|
| `safe` | General |
| `sensitive` | Sensitive |
| `nsfw` | Questionable |
| `explicit, nsfw` | Explicit |
### Year Modifier
We've also redefined the year range to steer results towards specific modern or vintage anime art styles more accurately. This update simplifies the range, focusing on relevance to current and past eras.
| Year Tag | Year Range |
|----------|------------------|
| `newest` | 2021 to 2024 |
| `recent` | 2018 to 2020 |
| `mid` | 2015 to 2017 |
| `early` | 2011 to 2014 |
| `oldest` | 2005 to 2010 |
### Aesthetic Tags
We've enhanced our tagging system with aesthetic tags to refine content categorization based on visual appeal. These tags are derived from evaluations made by a specialized ViT (Vision Transformer) image classification model, specifically trained on anime data. For this purpose, we utilized the model [shadowlilac/aesthetic-shadow-v2](https://huggingface.co/shadowlilac/aesthetic-shadow-v2), which assesses the aesthetic value of content before it undergoes training. This ensures that each piece of content is not only relevant and accurate but also visually appealing.
| Aesthetic Tag | Score Range |
|-------------------|-------------------|
| `very aesthetic` | > 0.71 |
| `aesthetic` | > 0.45 & < 0.71 |
| `displeasing` | > 0.27 & < 0.45 |
| `very displeasing`| ≤ 0.27 |
## Recommended settings
To guide the model towards generating high-aesthetic images, use negative prompts like:
```
nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]
```
For higher quality outcomes, prepend prompts with:
```
masterpiece, best quality, very aesthetic, absurdres
```
It's recommended to use a lower classifier-free guidance scale (CFG Scale) of around 5-7, fewer than 30 sampling steps, and Euler Ancestral (Euler a) as the sampler.
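A minimal Diffusers sketch of these recommended settings is shown below; it assumes the `pipe` object loaded in the installation section above and swaps in the Euler Ancestral scheduler:
```python
from diffusers import EulerAncestralDiscreteScheduler

# Sketch: apply the recommended sampler and settings (CFG ~5-7, fewer than 30 steps).
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "1girl, souryuu asuka langley, neon genesis evangelion, solo, upper body, smile, masterpiece, best quality, very aesthetic, absurdres",
    negative_prompt="nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]",
    width=832,
    height=1216,
    guidance_scale=6,
    num_inference_steps=28,
).images[0]
```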
### Multi Aspect Resolution
This model supports generating images at the following dimensions:
| Dimensions | Aspect Ratio |
|-------------------|-----------------|
| `1024 x 1024` | 1:1 Square |
| `1152 x 896` | 9:7 |
| `896 x 1152` | 7:9 |
| `1216 x 832` | 19:13 |
| `832 x 1216` | 13:19 |
| `1344 x 768` | 7:4 Horizontal |
| `768 x 1344` | 4:7 Vertical |
| `1536 x 640` | 12:5 Horizontal |
| `640 x 1536` | 5:12 Vertical |
## Training and Hyperparameters
**Animagine XL 3.1** was trained on 2x A100 80GB GPUs for approximately 15 days, totaling over 350 GPU hours. The training process consisted of three stages:
- **Pretraining**: Utilized a data-rich collection of 870k ordered and tagged images to increase Animagine XL 3.0's model knowledge.
- **Finetuning - First Stage**: Employed labeled and curated aesthetic datasets to refine the broken U-Net after pretraining.
- **Finetuning - Second Stage**: Utilized labeled and curated aesthetic datasets to refine the model's art style and improve hand and anatomy rendering.
### Hyperparameters
| Stage | Epochs | UNet lr | Train Text Encoder | Batch Size | Noise Offset | Optimizer | LR Scheduler | Grad Acc Steps | GPUs |
|--------------------------|--------|---------|--------------------|------------|--------------|------------|-------------------------------|----------------|------|
| **Pretraining** | 10 | 1e-5 | True | 16 | N/A | AdamW | Cosine Annealing Warm Restart | 3 | 2 |
| **Finetuning 1st Stage** | 10 | 2e-6 | False | 48 | 0.0357 | Adafactor | Constant with Warmup | 1 | 1 |
| **Finetuning 2nd Stage** | 15 | 1e-6 | False | 48 | 0.0357 | Adafactor | Constant with Warmup | 1 | 1 |
## Model Comparison (Pretraining only)
### Training Config
| Configuration Item | Animagine XL 3.0 | Animagine XL 3.1 |
|---------------------------------|------------------------------------------|------------------------------------------------|
| **GPU** | 2 x A100 80G | 2 x A100 80G |
| **Dataset** | 1,271,990 | 873,504 |
| **Shuffle Separator** | True | True |
| **Num Epochs** | 10 | 10 |
| **Learning Rate** | 7.5e-6 | 1e-5 |
| **Text Encoder Learning Rate** | 3.75e-6 | 1e-5 |
| **Effective Batch Size** | 48 x 1 x 2 | 16 x 3 x 2 |
| **Optimizer** | Adafactor | AdamW |
| **Optimizer Args** | Scale Parameter: False, Relative Step: False, Warmup Init: False | Weight Decay: 0.1, Betas: (0.9, 0.99) |
| **LR Scheduler** | Constant with Warmup | Cosine Annealing Warm Restart |
| **LR Scheduler Args** | Warmup Steps: 100 | Num Cycles: 10, Min LR: 1e-6, LR Decay: 0.9, First Cycle Steps: 9,099 |
Source code and training config are available here: https://github.com/cagliostrolab/sd-scripts/tree/main/notebook
### Acknowledgements
The development and release of Animagine XL 3.1 would not have been possible without the invaluable contributions and support from the following individuals and organizations:
- **[SeaArt.ai](https://www.seaart.ai/)**: Our collaboration partner and sponsor.
- **[Shadow Lilac](https://huggingface.co/shadowlilac)**: For providing the aesthetic classification model, [aesthetic-shadow-v2](https://huggingface.co/shadowlilac/aesthetic-shadow-v2).
- **[Derrian Distro](https://github.com/derrian-distro)**: For their custom learning rate scheduler, adapted from [LoRA Easy Training Scripts](https://github.com/derrian-distro/LoRA_Easy_Training_Scripts/blob/main/custom_scheduler/LoraEasyCustomOptimizer/CustomOptimizers.py).
- **[Kohya SS](https://github.com/kohya-ss)**: For their comprehensive training scripts.
- **Cagliostrolab Collaborators**: For their dedication to model training, project management, and data curation.
- **Early Testers**: For their valuable feedback and quality assurance efforts.
- **NovelAI**: For their innovative approach to aesthetic tagging, which served as an inspiration for our implementation.
- **KBlueLeaf**: For providing inspiration in balancing quality tags distribution and managing tags based on [Hakubooru Metainfo](https://github.com/KohakuBlueleaf/HakuBooru/blob/main/hakubooru/metainfo.py)
Thank you all for your support and expertise in pushing the boundaries of anime-style image generation.
## Collaborators
- [Linaqruf](https://huggingface.co/Linaqruf)
- [ItsMeBell](https://huggingface.co/ItsMeBell)
- [Asahina2K](https://huggingface.co/Asahina2K)
- [DamarJati](https://huggingface.co/DamarJati)
- [Zwicky18](https://huggingface.co/Zwicky18)
- [Scipius2121](https://huggingface.co/Scipius2121)
- [Raelina](https://huggingface.co/Raelina)
- [Kayfahaarukku](https://huggingface.co/kayfahaarukku)
- [Kriz](https://huggingface.co/Kr1SsSzz)
## Limitations
While Animagine XL 3.1 represents a significant advancement in anime-style image generation, it is important to acknowledge its limitations:
1. **Anime-Focused**: This model is specifically designed for generating anime-style images and is not suitable for creating realistic photos.
2. **Prompt Complexity**: This model may not be suitable for users who expect high-quality results from short or simple prompts. The training focus was on concept understanding rather than aesthetic refinement, which may require more detailed and specific prompts to achieve the desired output.
3. **Prompt Format**: Animagine XL 3.1 is optimized for Danbooru-style tags rather than natural language prompts. For best results, users are encouraged to format their prompts using the appropriate tags and syntax.
4. **Anatomy and Hand Rendering**: Despite the improvements made in anatomy and hand rendering, there may still be instances where the model produces suboptimal results in these areas.
5. **Dataset Size**: The dataset used for training Animagine XL 3.1 consists of approximately 870,000 images. When combined with the previous iteration's dataset (1.2 million), the total training data amounts to around 2.1 million images. While substantial, this dataset size may still be considered limited in scope for an "ultimate" anime model.
6. **NSFW Content**: Animagine XL 3.1 has been designed to generate more balanced NSFW content. However, it is important to note that the model may still produce NSFW results, even if not explicitly prompted.
By acknowledging these limitations, we aim to provide transparency and set realistic expectations for users of Animagine XL 3.1. Despite these constraints, we believe that the model represents a significant step forward in anime-style image generation and offers a powerful tool for artists, designers, and enthusiasts alike.
## License
Based on Animagine XL 3.0, Animagine XL 3.1 falls under [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) license, which is compatible with Stable Diffusion models’ license. Key points:
1. **Modification Sharing:** If you modify Animagine XL 3.1, you must share both your changes and the original license.
2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. **Distribution Terms:** Any distribution must be under this license or another with similar rules.
4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
The choice of this license aims to keep Animagine XL 3.1 open and modifiable, aligning with open source community spirit. It protects contributors and users, encouraging a collaborative, ethical open-source community. This ensures the model not only benefits from communal input but also respects open-source development freedoms.
## Cagliostro Lab Discord Server
The Cagliostro Lab Discord server is finally open to the public:
https://discord.gg/cqh9tZgbGc
Feel free to join our discord server |
sai17/cards_bottom_right_swin-tiny-patch4-window7-224-finetuned-v2 | sai17 | "2024-02-17T01:44:22Z" | 220,418 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-02-15T15:30:28Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: cards_bottom_right_swin-tiny-patch4-window7-224-finetuned-v2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6078575555438837
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cards_bottom_right_swin-tiny-patch4-window7-224-finetuned-v2
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9317
- Accuracy: 0.6079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
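For reference, these values map roughly onto 🤗 `TrainingArguments` as sketched below (this block is our addition; the output directory name and any unlisted defaults are assumptions):
```python
from transformers import TrainingArguments

# Sketch: the hyperparameters listed above expressed as TrainingArguments (output_dir is a placeholder).
training_args = TrainingArguments(
    output_dir="cards_bottom_right_swin-tiny-patch4-window7-224-finetuned-v2",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # effective batch size 32 x 4 = 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
)
```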
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4965 | 1.0 | 1338 | 1.3516 | 0.4156 |
| 1.4486 | 2.0 | 2677 | 1.1784 | 0.4938 |
| 1.4384 | 3.0 | 4015 | 1.1050 | 0.5223 |
| 1.4538 | 4.0 | 5354 | 1.0751 | 0.5433 |
| 1.3928 | 5.0 | 6692 | 1.0604 | 0.5440 |
| 1.4148 | 6.0 | 8031 | 1.0459 | 0.5523 |
| 1.3921 | 7.0 | 9369 | 1.0464 | 0.5501 |
| 1.3812 | 8.0 | 10708 | 1.0461 | 0.5491 |
| 1.3494 | 9.0 | 12046 | 1.0445 | 0.5486 |
| 1.3555 | 10.0 | 13385 | 0.9973 | 0.5693 |
| 1.3303 | 11.0 | 14723 | 0.9952 | 0.5719 |
| 1.3575 | 12.0 | 16062 | 1.0317 | 0.5574 |
| 1.3129 | 13.0 | 17400 | 0.9851 | 0.5813 |
| 1.3439 | 14.0 | 18739 | 1.0510 | 0.5523 |
| 1.3371 | 15.0 | 20077 | 0.9820 | 0.5795 |
| 1.2835 | 16.0 | 21416 | 0.9886 | 0.5738 |
| 1.3002 | 17.0 | 22754 | 0.9685 | 0.5869 |
| 1.289 | 18.0 | 24093 | 0.9519 | 0.5941 |
| 1.3007 | 19.0 | 25431 | 0.9855 | 0.5800 |
| 1.2927 | 20.0 | 26770 | 0.9499 | 0.5925 |
| 1.2985 | 21.0 | 28108 | 0.9669 | 0.5854 |
| 1.2957 | 22.0 | 29447 | 0.9551 | 0.5903 |
| 1.2579 | 23.0 | 30785 | 0.9300 | 0.6053 |
| 1.2475 | 24.0 | 32124 | 0.9296 | 0.6049 |
| 1.2227 | 25.0 | 33462 | 0.9317 | 0.6079 |
| 1.2069 | 26.0 | 34801 | 0.9609 | 0.5887 |
| 1.2156 | 27.0 | 36139 | 0.9297 | 0.6052 |
| 1.25 | 28.0 | 37478 | 0.9300 | 0.6062 |
| 1.2394 | 29.0 | 38816 | 0.9238 | 0.6071 |
| 1.209 | 29.99 | 40140 | 0.9284 | 0.6064 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.17.0
- Tokenizers 0.15.2
|
sai17/cards_bottom_left_swin-tiny-patch4-window7-224-finetuned-dough_100_epochs | sai17 | "2024-03-08T05:22:27Z" | 220,412 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-03-04T05:31:14Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: cards_bottom_left_swin-tiny-patch4-window7-224-finetuned-dough_100_epochs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5946802405369663
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cards_bottom_left_swin-tiny-patch4-window7-224-finetuned-dough_100_epochs
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0025
- Accuracy: 0.5947
## Model description
More information needed
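Although the auto-generated card does not include a usage example, a minimal inference sketch could look like the following (our addition; the image path is a placeholder):
```python
from PIL import Image
from transformers import pipeline

# Sketch: classify a single card image with this fine-tuned Swin checkpoint.
classifier = pipeline(
    "image-classification",
    model="sai17/cards_bottom_left_swin-tiny-patch4-window7-224-finetuned-dough_100_epochs",
)

image = Image.open("example_card.jpg")  # placeholder path
print(classifier(image))
```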
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.6956 | 1.0 | 1252 | 1.4843 | 0.3970 |
| 1.5633 | 2.0 | 2504 | 1.2584 | 0.4782 |
| 1.5568 | 3.0 | 3756 | 1.1976 | 0.4918 |
| 1.4727 | 4.0 | 5009 | 1.1884 | 0.4916 |
| 1.468 | 5.0 | 6261 | 1.1909 | 0.4889 |
| 1.4663 | 6.0 | 7513 | 1.1263 | 0.5288 |
| 1.4409 | 7.0 | 8765 | 1.0967 | 0.5441 |
| 1.4329 | 8.0 | 10018 | 1.0976 | 0.5388 |
| 1.4842 | 9.0 | 11270 | 1.1076 | 0.5315 |
| 1.4253 | 10.0 | 12522 | 1.0634 | 0.5511 |
| 1.3888 | 11.0 | 13774 | 1.0489 | 0.5634 |
| 1.3681 | 12.0 | 15027 | 1.0663 | 0.5567 |
| 1.3802 | 13.0 | 16279 | 1.0304 | 0.5667 |
| 1.4016 | 14.0 | 17531 | 1.0592 | 0.5518 |
| 1.376 | 15.0 | 18783 | 1.0080 | 0.5776 |
| 1.3539 | 16.0 | 20036 | 1.0103 | 0.5742 |
| 1.3725 | 17.0 | 21288 | 1.0261 | 0.5636 |
| 1.3104 | 18.0 | 22540 | 1.0304 | 0.5686 |
| 1.3448 | 19.0 | 23792 | 1.0184 | 0.5687 |
| 1.3479 | 20.0 | 25045 | 0.9968 | 0.5809 |
| 1.3517 | 21.0 | 26297 | 1.1350 | 0.5182 |
| 1.3367 | 22.0 | 27549 | 0.9835 | 0.5867 |
| 1.3002 | 23.0 | 28801 | 1.0193 | 0.5736 |
| 1.3238 | 24.0 | 30054 | 0.9820 | 0.5875 |
| 1.2865 | 25.0 | 31306 | 1.0267 | 0.5617 |
| 1.3029 | 26.0 | 32558 | 1.0086 | 0.5730 |
| 1.3173 | 27.0 | 33810 | 0.9750 | 0.5924 |
| 1.297 | 28.0 | 35063 | 0.9851 | 0.5848 |
| 1.3105 | 29.0 | 36315 | 1.0306 | 0.5685 |
| 1.3477 | 30.0 | 37567 | 0.9977 | 0.5845 |
| 1.2565 | 31.0 | 38819 | 0.9900 | 0.5851 |
| 1.2657 | 32.0 | 40072 | 1.0137 | 0.5862 |
| 1.2911 | 33.0 | 41324 | 0.9947 | 0.5889 |
| 1.2539 | 34.0 | 42576 | 0.9821 | 0.5914 |
| 1.2441 | 35.0 | 43828 | 1.0296 | 0.5763 |
| 1.2176 | 36.0 | 45081 | 1.0350 | 0.5806 |
| 1.25 | 37.0 | 46333 | 1.0195 | 0.5779 |
| 1.2647 | 38.0 | 47585 | 1.0021 | 0.5903 |
| 1.2428 | 39.0 | 48837 | 1.0087 | 0.5892 |
| 1.2364 | 40.0 | 50090 | 1.0025 | 0.5947 |
| 1.2083 | 41.0 | 51342 | 1.0427 | 0.5862 |
| 1.2002 | 42.0 | 52594 | 1.0303 | 0.5878 |
| 1.2071 | 43.0 | 53846 | 1.0190 | 0.5909 |
| 1.1536 | 44.0 | 55099 | 1.0314 | 0.5920 |
| 1.2029 | 45.0 | 56351 | 1.0570 | 0.5839 |
| 1.2249 | 46.0 | 57603 | 1.0508 | 0.5828 |
| 1.1913 | 47.0 | 58855 | 1.0493 | 0.5853 |
| 1.1938 | 48.0 | 60108 | 1.0575 | 0.5857 |
| 1.1724 | 49.0 | 61360 | 1.0700 | 0.5905 |
| 1.1536 | 50.0 | 62612 | 1.0841 | 0.5853 |
| 1.1239 | 51.0 | 63864 | 1.0803 | 0.5865 |
| 1.1743 | 52.0 | 65117 | 1.0864 | 0.5880 |
| 1.1414 | 53.0 | 66369 | 1.1224 | 0.5819 |
| 1.1411 | 54.0 | 67621 | 1.1316 | 0.5780 |
| 1.1029 | 55.0 | 68873 | 1.1070 | 0.5860 |
| 1.1353 | 56.0 | 70126 | 1.1247 | 0.5847 |
| 1.1293 | 57.0 | 71378 | 1.1279 | 0.5805 |
| 1.1335 | 58.0 | 72630 | 1.1482 | 0.5812 |
| 1.1157 | 59.0 | 73882 | 1.1960 | 0.5674 |
| 1.0891 | 60.0 | 75135 | 1.1414 | 0.5848 |
| 1.1299 | 61.0 | 76387 | 1.1658 | 0.5790 |
| 1.0828 | 62.0 | 77639 | 1.1753 | 0.5806 |
| 1.0866 | 63.0 | 78891 | 1.1767 | 0.5755 |
| 1.0721 | 64.0 | 80144 | 1.1861 | 0.5808 |
| 1.0682 | 65.0 | 81396 | 1.2083 | 0.5749 |
| 1.0747 | 66.0 | 82648 | 1.2204 | 0.5755 |
| 1.0902 | 67.0 | 83900 | 1.2175 | 0.5750 |
| 1.0381 | 68.0 | 85153 | 1.2445 | 0.5738 |
| 1.049 | 69.0 | 86405 | 1.2674 | 0.5707 |
| 1.0501 | 70.0 | 87657 | 1.2602 | 0.5740 |
| 1.0117 | 71.0 | 88909 | 1.2549 | 0.5687 |
| 1.0179 | 72.0 | 90162 | 1.3010 | 0.5690 |
| 1.0788 | 73.0 | 91414 | 1.2723 | 0.5726 |
| 1.0234 | 74.0 | 92666 | 1.3162 | 0.5717 |
| 1.0325 | 75.0 | 93918 | 1.3136 | 0.5692 |
| 1.0079 | 76.0 | 95171 | 1.3337 | 0.5655 |
| 1.058 | 77.0 | 96423 | 1.3171 | 0.5719 |
| 0.9968 | 78.0 | 97675 | 1.3470 | 0.5693 |
| 1.0217 | 79.0 | 98927 | 1.3418 | 0.5733 |
| 1.0124 | 80.0 | 100180 | 1.3518 | 0.5700 |
| 0.9823 | 81.0 | 101432 | 1.3646 | 0.5700 |
| 0.9627 | 82.0 | 102684 | 1.3658 | 0.5686 |
| 0.9773 | 83.0 | 103936 | 1.3811 | 0.5674 |
| 0.9855 | 84.0 | 105189 | 1.4082 | 0.5638 |
| 0.9928 | 85.0 | 106441 | 1.3877 | 0.5612 |
| 1.0025 | 86.0 | 107693 | 1.3925 | 0.5653 |
| 0.9583 | 87.0 | 108945 | 1.4313 | 0.5625 |
| 0.977 | 88.0 | 110198 | 1.4153 | 0.5651 |
| 0.9825 | 89.0 | 111450 | 1.4426 | 0.5619 |
| 0.9315 | 90.0 | 112702 | 1.4376 | 0.5643 |
| 0.8916 | 91.0 | 113954 | 1.4630 | 0.5618 |
| 0.9495 | 92.0 | 115207 | 1.4501 | 0.5627 |
| 0.9372 | 93.0 | 116459 | 1.4606 | 0.5622 |
| 0.9284 | 94.0 | 117711 | 1.4725 | 0.5608 |
| 0.9266 | 95.0 | 118963 | 1.4680 | 0.5607 |
| 0.8858 | 96.0 | 120216 | 1.4705 | 0.5626 |
| 0.9025 | 97.0 | 121468 | 1.4818 | 0.5616 |
| 0.902 | 98.0 | 122720 | 1.4871 | 0.5606 |
| 0.8961 | 99.0 | 123972 | 1.4881 | 0.5612 |
| 0.9204 | 99.98 | 125200 | 1.4894 | 0.5609 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.17.0
- Tokenizers 0.13.3
|
sai17/cards-top_right_swin-tiny-patch4-window7-224-finetuned-v2_more_data | sai17 | "2024-02-20T17:23:19Z" | 220,384 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-02-19T09:26:21Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: cards-top_right_swin-tiny-patch4-window7-224-finetuned-v2_more_data
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6269272417882741
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cards-top_right_swin-tiny-patch4-window7-224-finetuned-v2_more_data
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9268
- Accuracy: 0.6269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4585 | 1.0 | 1363 | 1.2999 | 0.4337 |
| 1.4211 | 2.0 | 2726 | 1.1663 | 0.4927 |
| 1.4203 | 3.0 | 4089 | 1.0770 | 0.5312 |
| 1.4669 | 4.0 | 5453 | 1.0744 | 0.5496 |
| 1.3781 | 5.0 | 6816 | 1.0245 | 0.5599 |
| 1.3852 | 6.0 | 8179 | 1.0645 | 0.5402 |
| 1.3407 | 7.0 | 9542 | 1.0011 | 0.5696 |
| 1.3727 | 8.0 | 10906 | 0.9898 | 0.5801 |
| 1.328 | 9.0 | 12269 | 0.9965 | 0.5738 |
| 1.3374 | 10.0 | 13632 | 0.9722 | 0.5874 |
| 1.3513 | 11.0 | 14995 | 0.9632 | 0.5873 |
| 1.3728 | 12.0 | 16359 | 0.9818 | 0.5802 |
| 1.3289 | 13.0 | 17722 | 0.9845 | 0.5729 |
| 1.3219 | 14.0 | 19085 | 0.9633 | 0.5881 |
| 1.2893 | 15.0 | 20448 | 0.9312 | 0.6004 |
| 1.3088 | 16.0 | 21812 | 0.9537 | 0.5903 |
| 1.3252 | 17.0 | 23175 | 0.9432 | 0.5986 |
| 1.3424 | 18.0 | 24538 | 0.9291 | 0.5979 |
| 1.3077 | 19.0 | 25901 | 0.9245 | 0.6020 |
| 1.2466 | 20.0 | 27265 | 0.9304 | 0.6039 |
| 1.2767 | 21.0 | 28628 | 0.9122 | 0.6099 |
| 1.2553 | 22.0 | 29991 | 0.9312 | 0.6005 |
| 1.2698 | 23.0 | 31354 | 0.9137 | 0.6092 |
| 1.2591 | 24.0 | 32718 | 0.9113 | 0.6134 |
| 1.277 | 25.0 | 34081 | 0.9095 | 0.6142 |
| 1.2742 | 26.0 | 35444 | 0.9227 | 0.6100 |
| 1.222 | 27.0 | 36807 | 0.9090 | 0.6147 |
| 1.2368 | 28.0 | 38171 | 0.9020 | 0.6172 |
| 1.198 | 29.0 | 39534 | 0.9071 | 0.6157 |
| 1.2076 | 30.0 | 40897 | 0.9031 | 0.6214 |
| 1.212 | 31.0 | 42260 | 0.9136 | 0.6175 |
| 1.2105 | 32.0 | 43624 | 0.9170 | 0.6151 |
| 1.2687 | 33.0 | 44987 | 0.9047 | 0.6186 |
| 1.2038 | 34.0 | 46350 | 0.9061 | 0.6190 |
| 1.1957 | 35.0 | 47713 | 0.9052 | 0.6255 |
| 1.1962 | 36.0 | 49077 | 0.9057 | 0.6210 |
| 1.1866 | 37.0 | 50440 | 0.9105 | 0.6227 |
| 1.2545 | 38.0 | 51803 | 0.9173 | 0.6206 |
| 1.1642 | 39.0 | 53166 | 0.9120 | 0.6239 |
| 1.1711 | 40.0 | 54530 | 0.9235 | 0.6177 |
| 1.2339 | 41.0 | 55893 | 0.9295 | 0.6143 |
| 1.1132 | 42.0 | 57256 | 0.9143 | 0.6234 |
| 1.1977 | 43.0 | 58619 | 0.9163 | 0.6256 |
| 1.1617 | 44.0 | 59983 | 0.9246 | 0.6233 |
| 1.1357 | 45.0 | 61346 | 0.9196 | 0.6255 |
| 1.1362 | 46.0 | 62709 | 0.9221 | 0.6259 |
| 1.1472 | 47.0 | 64072 | 0.9206 | 0.6263 |
| 1.184 | 48.0 | 65436 | 0.9282 | 0.6256 |
| 1.1096 | 49.0 | 66799 | 0.9252 | 0.6269 |
| 1.1425 | 49.99 | 68150 | 0.9268 | 0.6269 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.17.0
- Tokenizers 0.15.2
|
aipicasso/emi | aipicasso | "2023-09-26T21:36:30Z" | 220,180 | 93 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"arxiv:2307.01952",
"arxiv:2212.03860",
"license:openrail++",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-09-24T05:29:37Z" | ---
extra_gated_prompt: このモデルをこのページからダウンロードするためにはHugging Faceに登録された情報を提供する必要があります。この提供された情報は画像生成AIを活用する情報を案内するために使われます。 To download this model from this page, you need to provide information registered with Hugging Face. The information provided will be used to guide you on how to utilize the image-generation AI.
license: openrail++
tags:
- stable-diffusion
- text-to-image
inference: false
library_name: diffusers
---
# Emi Model Card
![eyecatch.jpg](eyecatch.jpg)
[Original(PNG)](eyecatch.png)
English: [Click Here](README_en.md)
# Introduction
Emi (Ethereal master of illustration) is an image-generation AI specialized in AI art,
developed by AI Picasso using the state-of-the-art H100 hardware
and the image-generation model Stable Diffusion XL 1.0.
A distinguishing feature of this model is that it was not trained on images reposted without permission, such as those found on Danbooru.
# License
Unlike our previous releases, the license is the CreativeML Open RAIL++-M License.
Therefore, **commercial use is permitted**.
We made this decision for the following reasons:
- As image-generation AI has become widespread, more and more people follow good practices so as not to harm the creative industry.
- Since other image-generation models already allow commercial use, a non-commercial license has lost much of its practical effect.
# How to use
You can try the demo [here](https://huggingface.co/spaces/aipicasso/emi-latest-demo).
For serious use, you can download the model [here](emi.safetensors).
If generation does not work well with the standard version, please use the [stable version](emi_stable.safetensors).
# Simple examples
![example_1.jpg](example_1.jpg)
```
positive prompt: anime artwork, anime style, (1girl), (black bob hair:1.5), brown eyes, red maples, sky, ((transparent))
negative prompt: (embedding:unaestheticXLv31:0.5), photo, deformed, realism, disfigured, low contrast, bad hand
```
![example_2.png](example_2.png)
```
positive prompt: monochrome, black and white, (japanese manga), mount fuji
negative prompt: (embedding:unaestheticXLv31:0.5), photo, deformed, realism, disfigured, low contrast, bad hand
```
![example_3.jpg](example_3.jpg)
```
positive prompt: (1man), focus, white wavy short hair, blue eyes, black shirt, white background, simple background
negative prompt: (embedding:unaestheticXLv31:0.5), photo, deformed, realism, disfigured, low contrast, bad hand
```
# Improving the model's output
- If you want to reliably get anime-style illustrations, put "anime artwork, anime style" at the beginning of the prompt.
- Adding the word "transparent" to the prompt produces a more recent art style.
- Drawing a full body sometimes fails; in that case, try the [stable version](emi_stable.safetensors).
- The usable prompts are the same as for Waifu Diffusion. It can also be used like Stable Diffusion.
- Using a [Textual Inversion](https://civitai.com/models/119032/unaestheticxl-or-negative-ti) embedding in the negative prompt is recommended.
- Because hands are unstable, merging with a photorealistic model such as [DreamShaper XL1.0](https://civitai.com/models/112902?modelVersionId=126688) is recommended.
- Refining prompts with ChatGPT can lead to works beyond your usual range.
- Output may improve further by using the FreeU node in the latest ComfyUI, or the [Web UI extension](https://github.com/ljleb/sd-webui-freeu), with the following parameters. The next image is an example generated with FreeU (see the Diffusers sketch after the image).
  - b1 = 1.1, b2 = 1.2, s1 = 0.6, s2 = 0.4 [report](https://wandb.ai/nasirk24/UNET-FreeU-SDXL/reports/FreeU-SDXL-Optimal-Parameters--Vmlldzo1NDg4NTUw)
![example_4.png](example_4.png)
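For Diffusers users, recent releases expose `enable_freeu` on the pipeline. The block below is a self-contained sketch we added (not from the original card) that applies the FreeU parameters above; it assumes a Diffusers version with FreeU support:
```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

# Sketch: FreeU with the parameters listed above (requires a Diffusers release providing enable_freeu).
pipe = StableDiffusionXLPipeline.from_pretrained("aipicasso/emi", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)

image = pipe(
    "anime artwork, anime style, 1girl, black bob hair, brown eyes, red maples, sky, transparent",
    num_inference_steps=25,
).images[0]
image.save("freeu_example.png")
```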
# Legal notes
This model was created in Japan, so Japanese law applies.
We assert that training this model is lawful under Article 30-4 of the Japanese Copyright Act.
We also assert that distributing this model constitutes neither a principal offense nor aiding and abetting under the Copyright Act or Article 175 of the Penal Code. For details, see attorney Kakinuma's [opinion](https://twitter.com/tka0120/status/1601483633436393473?s=20&t=yvM9EX0Em-_7lh8NJln3IQ).
However, as stated in the license, please handle the model's outputs in accordance with applicable laws and regulations.
# Contact
support@aipicasso.app
Below is the general model card information.
## Model details
- **Model type:** diffusion-based text-to-image generative model
- **Language:** Japanese
- **License:** [CreativeML Open RAIL++-M License](LICENSE.md)
- **Model description:** This model can generate appropriate images in response to prompts. The algorithms are the [Latent Diffusion Model](https://arxiv.org/abs/2307.01952), [OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip), and [CLIP-L](https://github.com/openai/CLIP).
- **Notes:**
- **References:**
```bibtex
@misc{podell2023sdxl,
title={SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis},
author={Dustin Podell and Zion English and Kyle Lacey and Andreas Blattmann and Tim Dockhorn and Jonas Müller and Joe Penna and Robin Rombach},
year={2023},
eprint={2307.01952},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Example usage
It is used in the same way as Stable Diffusion XL 1.0.
There are many ways to use it, but we provide three patterns:
- ComfyUI
- Fooocus
- Diffusers
### With ComfyUI or Fooocus
As with Stable Diffusion XL 1.0, use the model file in safetensors format.
For detailed installation instructions, see [this article](https://note.com/it_navi/n/n723d93bedd64).
### With Diffusers
Use [🤗's Diffusers library](https://github.com/huggingface/diffusers).
First, run the following script to install the required libraries:
```bash
pip install invisible_watermark transformers accelerate safetensors diffusers
```
Then run the following script to generate an image:
```python
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler
import torch
model_id = "aipicasso/emi"
scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionXLPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "1girl, sunflowers, brown bob hair, brown eyes, sky, transparent"
images = pipe(prompt, num_inference_steps=20).images
images[0].save("girl.png")
```
For more complex operations, refer to the [demo's source code](https://huggingface.co/spaces/aipicasso/emi-latest-demo/blob/main/app.py).
#### Intended Uses
- Assisting with the production of illustrations, manga, and anime
- Both commercial and non-commercial use
- Communication with creators when receiving commissions
- Commercial provision of image generation services
- Please handle the generated outputs with care
- Self-expression
- Using this AI to express what makes "you" yourself
- Research and development
- Using the model on Discord
- Prompt engineering
- Fine-tuning (also called additional training)
- e.g., DreamBooth
- Merging with other models
- Evaluating the performance of this model with metrics such as FID
- Verifying, e.g., with checksums or hash functions, that this model is independent of models other than Stable Diffusion
- Education
- Graduation works by art university and vocational school students
- Graduation theses and class assignments by university students
- Teachers explaining the current state of image generation AI
- Uses listed in the Hugging Face Community tab
- Please ask questions in Japanese or English
#### Out-of-Scope Uses
- Presenting things as fact
- Things that would trouble teachers
- Anything else that would harm the creative industry
# Prohibited and Malicious Uses
- Do not use the model for money laundering
- Do not publish digital forgeries ([Digital Forgery](https://arxiv.org/abs/2212.03860)) (this may violate copyright law)
- Do not run Image-to-Image on other people's works without permission (this may violate copyright law)
- Do not distribute obscene material (this may violate Article 175 of the Penal Code)
- Do not ignore the so-called manners of the industry
- Do not present things that are not based on fact as if they were fact (the offense of forcible obstruction of business may apply)
- Fake news
## Model Limitations and Bias
### Model Limitations
- Much about diffusion models and large language models is still unknown, and their limitations have not been determined.
### Bias
- Much about diffusion models and large language models is still unknown, and their biases have not been determined.
## Training
**Training Data**
- About 2,000 images collected manually from datasets similar to Stable Diffusion's, with unauthorized reposts from Danbooru removed
- About 500,000 images collected automatically from datasets similar to Stable Diffusion's, with unauthorized reposts from Danbooru removed
**Training Process**
- **Hardware:** H100
## Evaluation Results
We welcome evaluation by third parties.
## Environmental Impact
- **Hardware type:** H100
- **Hours used:** 500
- **Training location:** Japan
## References
```bibtex
@misc{podell2023sdxl,
title={SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis},
author={Dustin Podell and Zion English and Kyle Lacey and Andreas Blattmann and Tim Dockhorn and Jonas Müller and Joe Penna and Robin Rombach},
year={2023},
eprint={2307.01952},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
sai17/cards-top_left_swin-tiny-patch4-window7-224-finetuned-dough_100_epoch | sai17 | "2024-03-08T05:22:29Z" | 220,122 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-03-04T04:55:24Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: cards-top_left_swin-tiny-patch4-window7-224-finetuned-dough_100_epoch
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5815593903514297
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cards-top_left_swin-tiny-patch4-window7-224-finetuned-dough_100_epoch
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0369
- Accuracy: 0.5816
## Model description
More information needed
## Intended uses & limitations
More information needed
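A minimal, hypothetical usage sketch (not part of the auto-generated card; the image path below is a placeholder) for running the fine-tuned checkpoint as an image classifier:

```python
from transformers import pipeline

# Load the fine-tuned Swin checkpoint as an image-classification pipeline.
classifier = pipeline(
    "image-classification",
    model="sai17/cards-top_left_swin-tiny-patch4-window7-224-finetuned-dough_100_epoch",
)

# "card.jpg" is a placeholder path to a local image.
print(classifier("card.jpg"))
```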
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:------:|:--------:|:---------------:|
| 1.2196 | 1.0 | 1240 | 0.5816 | 1.0369 |
| 1.2491 | 2.0 | 2481 | 0.5752 | 1.0638 |
| 1.2016 | 3.0 | 3721 | 0.5792 | 1.0546 |
| 1.2234 | 4.0 | 4962 | 0.5810 | 1.0560 |
| 1.2298 | 5.0 | 6202 | 0.5725 | 1.0795 |
| 1.287 | 6.0 | 7443 | 0.5731 | 1.0763 |
| 1.2472 | 7.0 | 8683 | 0.5635 | 1.1067 |
| 1.2171 | 8.0 | 9924 | 0.5775 | 1.0671 |
| 1.3164 | 9.0 | 11164 | 0.5701 | 1.0681 |
| 1.3019 | 10.0 | 12405 | 0.5698 | 1.0824 |
| 1.2977 | 11.0 | 13645 | 0.5694 | 1.0721 |
| 1.2587 | 12.0 | 14886 | 0.5704 | 1.0833 |
| 1.2704 | 13.0 | 16126 | 0.5675 | 1.0934 |
| 1.2604 | 14.0 | 17367 | 0.5730 | 1.0739 |
| 1.2834 | 15.0 | 18607 | 0.5524 | 1.1210 |
| 1.2082 | 16.0 | 19848 | 0.5611 | 1.1271 |
| 1.2307 | 17.0 | 21088 | 0.5720 | 1.1013 |
| 1.2136 | 18.0 | 22329 | 0.5753 | 1.1036 |
| 1.2133 | 19.0 | 23569 | 0.5610 | 1.1350 |
| 1.2478 | 20.0 | 24810 | 0.5676 | 1.1256 |
| 1.2006 | 21.0 | 26050 | 0.5682 | 1.1288 |
| 1.1934 | 22.0 | 27291 | 0.5619 | 1.1472 |
| 1.2136 | 23.0 | 28531 | 0.5713 | 1.1304 |
| 1.2449 | 24.0 | 29772 | 0.5581 | 1.1893 |
| 1.1968 | 25.0 | 31012 | 0.5633 | 1.1754 |
| 1.1582 | 26.0 | 32253 | 0.5651 | 1.1735 |
| 1.1404 | 27.0 | 33493 | 0.5642 | 1.1752 |
| 1.2011 | 28.0 | 34734 | 0.5538 | 1.2227 |
| 1.1223 | 29.0 | 35974 | 0.5578 | 1.2200 |
| 1.1427 | 30.0 | 37215 | 0.5608 | 1.2028 |
| 1.1751 | 31.0 | 38455 | 0.5635 | 1.2253 |
| 1.1012 | 32.0 | 39696 | 0.5543 | 1.2473 |
| 1.0912 | 33.0 | 40936 | 0.5673 | 1.2370 |
| 1.1085 | 34.0 | 42177 | 0.5534 | 1.2838 |
| 1.099 | 35.0 | 43417 | 0.5526 | 1.2760 |
| 1.1092 | 36.0 | 44658 | 0.5547 | 1.2769 |
| 1.0655 | 37.0 | 45898 | 0.5534 | 1.3178 |
| 1.0861 | 38.0 | 47139 | 0.5585 | 1.2943 |
| 1.0917 | 39.0 | 48379 | 0.5518 | 1.3659 |
| 1.0791 | 40.0 | 49620 | 0.5541 | 1.3413 |
| 1.0356 | 41.0 | 50860 | 0.5495 | 1.3567 |
| 1.0394 | 42.0 | 52101 | 0.5491 | 1.3648 |
| 1.0096 | 43.0 | 53341 | 0.5574 | 1.3671 |
| 1.0736 | 44.0 | 54582 | 0.5468 | 1.4142 |
| 1.0145 | 45.0 | 55822 | 0.5462 | 1.4340 |
| 1.0437 | 46.0 | 57063 | 0.5442 | 1.4734 |
| 0.9771 | 47.0 | 58303 | 0.5446 | 1.4496 |
| 0.9758 | 48.0 | 59544 | 0.5397 | 1.5071 |
| 1.0199 | 49.0 | 60784 | 0.5437 | 1.5119 |
| 0.9898 | 50.0 | 62025 | 0.5428 | 1.5066 |
| 1.0139 | 51.0 | 63265 | 0.5375 | 1.5314 |
| 1.0035 | 52.0 | 64506 | 0.5427 | 1.5604 |
| 0.9786 | 53.0 | 65746 | 0.5396 | 1.5899 |
| 0.9768 | 54.0 | 66987 | 0.5449 | 1.5642 |
| 0.968 | 55.0 | 68227 | 0.5394 | 1.6056 |
| 0.9254 | 56.0 | 69468 | 0.5380 | 1.6091 |
| 0.9764 | 57.0 | 70680 | 0.5340 | 1.6646 |
| 0.8998 | 58.0 | 71921 | 0.5323 | 1.6692 |
| 0.9592 | 59.0 | 73161 | 0.5353 | 1.6395 |
| 0.8722 | 60.0 | 74402 | 0.5393 | 1.6702 |
| 0.888 | 61.0 | 75642 | 0.5336 | 1.6771 |
| 0.872 | 62.0 | 76883 | 0.5331 | 1.6873 |
| 0.9133 | 63.0 | 78123 | 0.5325 | 1.7182 |
| 0.8815 | 64.0 | 79364 | 0.5310 | 1.7375 |
| 0.9144 | 65.0 | 80604 | 0.5337 | 1.7263 |
| 0.8712 | 66.0 | 81845 | 0.5284 | 1.7628 |
| 0.8576 | 67.0 | 83080 | 0.5322 | 1.7786 |
| 0.8677 | 68.0 | 84321 | 0.5327 | 1.7947 |
| 0.8448 | 69.0 | 85561 | 0.5314 | 1.8100 |
| 0.8102 | 70.0 | 86802 | 0.5313 | 1.8256 |
| 0.8438 | 71.0 | 88042 | 0.5273 | 1.8325 |
| 0.8015 | 72.0 | 89283 | 0.5311 | 1.8564 |
| 0.8025 | 73.0 | 90523 | 0.5342 | 1.8451 |
| 0.8295 | 74.0 | 91764 | 0.5305 | 1.8748 |
| 0.8101 | 75.0 | 93004 | 0.5297 | 1.8884 |
| 0.7883 | 76.0 | 94245 | 0.5297 | 1.8777 |
| 0.7989 | 77.0 | 95485 | 0.5262 | 1.9185 |
| 0.7791 | 78.0 | 96726 | 0.5246 | 1.9436 |
| 0.7197 | 79.0 | 97966 | 0.5222 | 1.9615 |
| 0.7639 | 80.0 | 99207 | 0.5213 | 1.9567 |
| 0.7922 | 81.0 | 100447 | 0.5248 | 1.9746 |
| 0.7874 | 82.0 | 101688 | 0.5206 | 1.9960 |
| 0.8155 | 83.0 | 102928 | 0.5211 | 2.0131 |
| 0.7791 | 84.0 | 104169 | 0.5196 | 2.0559 |
| 0.7731 | 85.0 | 105409 | 0.5192 | 2.0255 |
| 0.8018 | 86.0 | 106650 | 0.5216 | 2.0784 |
| 0.777 | 87.0 | 107890 | 0.5224 | 2.0482 |
| 0.7637 | 88.0 | 109131 | 0.5201 | 2.0889 |
| 0.7783 | 89.0 | 110371 | 0.5222 | 2.0663 |
| 0.7156 | 90.0 | 111612 | 0.5200 | 2.0884 |
| 0.702 | 91.0 | 112852 | 0.5215 | 2.1034 |
| 0.7136 | 92.0 | 114093 | 0.5164 | 2.1380 |
| 0.6889 | 93.0 | 115333 | 0.5198 | 2.1321 |
| 0.7117 | 94.0 | 116574 | 0.5186 | 2.1175 |
| 0.6903 | 95.0 | 117814 | 0.5187 | 2.1155 |
| 0.7334 | 96.0 | 119055 | 0.5200 | 2.1197 |
| 0.6684 | 97.0 | 120295 | 0.5192 | 2.1435 |
| 0.7471 | 98.0 | 121536 | 0.5196 | 2.1403 |
| 0.7197 | 99.0 | 122776 | 0.5182 | 2.1465 |
| 0.7026 | 99.99 | 124000 | 0.5186 | 2.1492 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.17.0
- Tokenizers 0.15.2
|
MaziyarPanahi/Qwen2-1.5B-Instruct-GGUF | MaziyarPanahi | "2024-06-06T19:06:35Z" | 217,610 | 7 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"llama-3",
"llama",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-06T18:59:09Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- llama-3
- llama
- text-generation
model_name: Qwen2-1.5B-Instruct-GGUF
base_model: Qwen/Qwen2-1.5B-Instruct
inference: false
model_creator: Qwen
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Qwen2-1.5B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2-1.5B-Instruct-GGUF)
- Model creator: [Qwen](https://huggingface.co/Qwen)
- Original model: [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct)
## Description
[MaziyarPanahi/Qwen2-1.5B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2-1.5B-Instruct-GGUF) contains GGUF format model files for [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
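As one hedged example with the clients above (the quantized filename pattern below is an assumption; substitute any GGUF file actually present in this repository), the model can be loaded with `llama-cpp-python`:

```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Downloads a matching GGUF file from the repository and loads it locally.
llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Qwen2-1.5B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quantization; pick any file present in the repo
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```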
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
echarlaix/tiny-random-mistral | echarlaix | "2023-10-06T09:06:13Z" | 216,477 | 1 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-06T08:53:48Z" | ---
license: apache-2.0
---
|
ElKulako/cryptobert | ElKulako | "2024-01-31T14:40:37Z" | 216,108 | 88 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"cryptocurrency",
"crypto",
"BERT",
"sentiment classification",
"NLP",
"bitcoin",
"ethereum",
"shib",
"social media",
"sentiment analysis",
"cryptocurrency sentiment analysis",
"en",
"dataset:ElKulako/stocktwits-crypto",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-06-20T02:29:26Z" | ---
datasets:
- ElKulako/stocktwits-crypto
language:
- en
tags:
- cryptocurrency
- crypto
- BERT
- sentiment classification
- NLP
- bitcoin
- ethereum
- shib
- social media
- sentiment analysis
- cryptocurrency sentiment analysis
---
For academic reference, cite the following paper: https://ieeexplore.ieee.org/document/10223689
# CryptoBERT
CryptoBERT is a pre-trained NLP model for analysing the language and sentiment of cryptocurrency-related social media posts and messages. It was built by further training vinai's [bertweet-base](https://huggingface.co/vinai/bertweet-base) language model on the cryptocurrency domain, using a corpus of over 3.2M unique cryptocurrency-related social media posts.
(A research paper with more details will follow soon.)
## Classification Training
The model was trained on the following labels: "Bearish" : 0, "Neutral": 1, "Bullish": 2
CryptoBERT's sentiment classification head was fine-tuned on a balanced dataset of 2M labelled StockTwits posts, sampled from [ElKulako/stocktwits-crypto](https://huggingface.co/datasets/ElKulako/stocktwits-crypto).
CryptoBERT was trained with a max sequence length of 128. Technically, it can handle sequences of up to 514 tokens; however, going beyond 128 is not recommended.
# Classification Example
```python
from transformers import TextClassificationPipeline, AutoModelForSequenceClassification, AutoTokenizer
model_name = "ElKulako/cryptobert"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels = 3)
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, max_length=64, truncation=True, padding = 'max_length')
# post_1 & post_3 = bullish, post_2 = bearish
post_1 = " see y'all tomorrow and can't wait to see ada in the morning, i wonder what price it is going to be at. 😎🐂🤠💯😴, bitcoin is looking good go for it and flash by that 45k. "
post_2 = " alright racers, it’s a race to the bottom! good luck today and remember there are no losers (minus those who invested in currency nobody really uses) take your marks... are you ready? go!!"
post_3 = " i'm never selling. the whole market can bottom out. i'll continue to hold this dumpster fire until the day i die if i need to."
df_posts = [post_1, post_2, post_3]
preds = pipe(df_posts)
print(preds)
```
```
[{'label': 'Bullish', 'score': 0.8734585642814636}, {'label': 'Bearish', 'score': 0.9889495372772217}, {'label': 'Bullish', 'score': 0.6595883965492249}]
```
## Training Corpus
CryptoBERT was trained on 3.2M social media posts regarding various cryptocurrencies. Only non-duplicate posts of length above 4 words were considered. The following communities were used as sources for our corpora:
(1) StockTwits - 1.875M posts about the top 100 cryptos by trading volume. Posts were collected from the 1st of November 2021 to the 16th of June 2022. [ElKulako/stocktwits-crypto](https://huggingface.co/datasets/ElKulako/stocktwits-crypto)
(2) Telegram - 664K posts from top 5 telegram groups: [Binance](https://t.me/binanceexchange), [Bittrex](https://t.me/BittrexGlobalEnglish), [huobi global](https://t.me/huobiglobalofficial), [Kucoin](https://t.me/Kucoin_Exchange), [OKEx](https://t.me/OKExOfficial_English).
Data from 16.11.2020 to 30.01.2021. Courtesy of [Anton](https://www.kaggle.com/datasets/aagghh/crypto-telegram-groups).
(3) Reddit - 172K comments from various crypto investing threads, collected from May 2021 to May 2022
(4) Twitter - 496K posts with hashtags XBT, Bitcoin or BTC. Collected for May 2018. Courtesy of [Paul](https://www.kaggle.com/datasets/paul92s/bitcoin-tweets-14m). |
databricks/dbrx-instruct | databricks | "2024-04-19T07:33:52Z" | 215,653 | 1,085 | transformers | [
"transformers",
"safetensors",
"dbrx",
"text-generation",
"conversational",
"arxiv:2211.15841",
"arxiv:2304.11277",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-26T20:07:24Z" | ---
extra_gated_heading: You need to share contact information with Databricks to access this model
extra_gated_prompt: >-
### DBRX Terms of Use
Use of DBRX is governed by the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and the [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model).
extra_gated_fields:
First Name: text
Last Name: text
Organization: text
By clicking 'Submit' below, I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed, and shared in accordance with Databricks' Privacy Notice and I understand I can update my preferences at any time: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed, and shared in accordance with Databricks [Privacy Notice](https://www.databricks.com/legal/privacynotice).
extra_gated_button_content: Submit
inference: false
license: other
license_name: databricks-open-model-license
license_link: https://www.databricks.com/legal/open-model-license
---
# DBRX Instruct
* DBRX Instruct is a mixture-of-experts (MoE) large language model trained from scratch by Databricks. DBRX Instruct specializes in few-turn interactions.
* We are releasing both DBRX Instruct and DBRX Base, the pretrained base model which underlies it, under [an open license](https://www.databricks.com/legal/open-model-license).
* This is the repository for DBRX Instruct. DBRX Base can be found [here](https://huggingface.co/databricks/dbrx-base).
* For full details on the DBRX models, please read our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).
## Model Overview
DBRX is a [transformer-based](https://www.isattentionallyouneed.com/) decoder-only large language model (LLM) that was trained using next-token prediction.
It uses a *fine-grained* mixture-of-experts (MoE) architecture with 132B total parameters of which 36B parameters are active on any input.
It was pre-trained on 12T tokens of text and code data.
Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2.
This provides 65x more possible combinations of experts and we found that this improves model quality.
DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA).
It uses a converted version of the GPT-4 tokenizer as defined in the [tiktoken](https://github.com/openai/tiktoken) repository.
We made these choices based on exhaustive evaluation and scaling experiments.
DBRX was pretrained on 12T tokens of carefully curated data and a maximum context length of 32K tokens.
We estimate that this data is at least 2x better token-for-token than the data we used to pretrain the MPT family of models.
This new dataset was developed using the full suite of Databricks tools, including Apache Spark™ and Databricks notebooks for data processing, and Unity Catalog for data management and governance.
We used curriculum learning for pretraining, changing the data mix during training in ways we found to substantially improve model quality.
* **Inputs:** DBRX only accepts text-based inputs and accepts a context length of up to 32768 tokens.
* **Outputs:** DBRX only produces text-based outputs.
* **Model Architecture:** More detailed information about DBRX Instruct and DBRX Base can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).
* **License:** [Databricks Open Model License](https://www.databricks.com/legal/open-model-license)
* **Acceptable Use Policy:** [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model)
* **Version:** 1.0
* **Owner:** Databricks, Inc.
## Usage
There are several general ways to use the DBRX models:
* DBRX Base and DBRX Instruct are available for download on HuggingFace (see our Quickstart guide below). This is the HF repository for DBRX Instruct; DBRX Base can be found [here](https://huggingface.co/databricks/dbrx-base).
* The DBRX model repository can be found on GitHub [here](https://github.com/databricks/dbrx).
* DBRX Base and DBRX Instruct are available with [Databricks Foundation Model APIs](https://docs.databricks.com/en/machine-learning/foundation-models/index.html) via both *Pay-per-token* and *Provisioned Throughput* endpoints. These are enterprise-ready deployments.
* For more information on how to fine-tune using LLM-Foundry, please take a look at our LLM pretraining and fine-tuning [documentation](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/README.md).
## Quickstart Guide
**NOTE: This is DBRX Instruct, and has been instruction finetuned.**
If you are looking for the base model, please use [DBRX Base](https://huggingface.co/databricks/dbrx-base).
Getting started with DBRX models is easy with the `transformers` library. The model requires ~264GB of RAM and the following packages:
```bash
pip install "transformers>=4.40.0"
```
If you'd like to speed up download time, you can use the `hf_transfer` package as described by Huggingface [here](https://huggingface.co/docs/huggingface_hub/en/guides/download#faster-downloads).
```bash
pip install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
```
You will need to request access to this repository to download the model. Once this is granted,
[obtain an access token](https://huggingface.co/docs/hub/en/security-tokens) with `read` permission, and supply the token below.
### Run the model on multiple GPUs:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", token="hf_YOUR_TOKEN")
model = AutoModelForCausalLM.from_pretrained("databricks/dbrx-instruct", device_map="auto", torch_dtype=torch.bfloat16, token="hf_YOUR_TOKEN")
input_text = "What does it take to build a great LLM?"
messages = [{"role": "user", "content": input_text}]
input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))
```
If your GPU system supports [FlashAttention2](https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2), you can add `attn_implementation="flash_attention_2"` as a keyword argument to `AutoModelForCausalLM.from_pretrained()` to achieve faster inference.
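For example, the quickstart loading call above would become the following (a sketch that assumes the `flash-attn` package is installed and the GPU supports it):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # requires flash-attn and a supported GPU
    token="hf_YOUR_TOKEN",
)
```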
## Limitations and Ethical Considerations
### Training Dataset Limitations
The DBRX models were trained on 12T tokens of text, with a knowledge cutoff date of December 2023.
The training mix used for DBRX contains both natural-language and code examples. The vast majority of our training data is in the English language. We did not test DBRX for non-English proficiency. Therefore, DBRX should be considered a generalist model for text-based use in the English language.
DBRX does not have multimodal capabilities.
### Associated Risks and Recommendations
All foundation models are novel technologies that carry various risks, and may output information that is inaccurate, incomplete, biased, or offensive.
Users should exercise judgment and evaluate such output for accuracy and appropriateness for their desired use case before using or sharing it.
Databricks recommends [using retrieval augmented generation (RAG)](https://www.databricks.com/glossary/retrieval-augmented-generation-rag) in scenarios where accuracy and fidelity are important.
We also recommend that anyone using or fine-tuning either DBRX Base or DBRX Instruct perform additional testing around safety in the context of their particular application and domain.
## Intended Uses
### Intended Use Cases
The DBRX models are open, general-purpose LLMs intended and licensed for both commercial and research applications.
They can be further fine-tuned for various domain-specific natural language and coding tasks.
DBRX Instruct can be used as an off-the-shelf model for few-turn question answering related to general English-language and coding tasks.
Please review the Associated Risks section above, as well as the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model) for further information about permissible uses of DBRX Base and its derivatives.
### Out-of-Scope Use Cases
DBRX models are not intended to be used out-of-the-box in non-English languages and do not support native code execution, or other forms of function-calling.
DBRX models should not be used in any manner that violates applicable laws or regulations or in any other way that is prohibited by the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model).
## Training Stack
MoE models are complicated to train, and the training of DBRX Base and DBRX Instruct was heavily supported by Databricks’ infrastructure for data processing and large-scale LLM training (e.g., [Composer](https://github.com/mosaicml/composer), [Streaming](https://github.com/mosaicml/streaming), [Megablocks](https://github.com/stanford-futuredata/megablocks), and [LLM Foundry](https://github.com/mosaicml/llm-foundry)).
Composer is our core library for large-scale training.
It provides an optimized training loop, easy [checkpointing](https://docs.mosaicml.com/projects/composer/en/latest/trainer/checkpointing.html) and [logging](https://docs.mosaicml.com/projects/composer/en/latest/trainer/logging.html#wood-logging),
[FSDP](https://pytorch.org/docs/stable/fsdp.html)-based [model sharding](https://docs.mosaicml.com/projects/composer/en/latest/notes/distributed_training.html#fullyshardeddataparallel-fsdp),
convenient [abstractions](https://docs.mosaicml.com/projects/composer/en/latest/trainer/time.html), extreme customizability via [callbacks](https://docs.mosaicml.com/projects/composer/en/latest/trainer/callbacks.html), and more.
Streaming enables fast, low cost, and scalable training on large datasets from cloud storage. It handles a variety of challenges around deterministic resumption as node counts change, avoiding redundant downloads across devices, high-quality shuffling at scale, sample-level random access, and speed.
Megablocks is a lightweight library for MoE training. Crucially, it supports “dropless MoE,” which avoids inefficient padding and is intended to provide deterministic outputs for a given sequence no matter what other sequences are in the batch.
LLM Foundry ties all of these libraries together to create a simple LLM pretraining, fine-tuning, and inference experience.
DBRX was trained using proprietary optimized versions of the above open source libraries, along with our [LLM training platform](https://www.databricks.com/product/machine-learning/mosaic-ai-training).
## Evaluation
We find that DBRX outperforms established open-source and open-weight base models on the [Databricks Model Gauntlet](https://www.databricks.com/blog/llm-evaluation-for-icl), the [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and HumanEval.
The Databricks Model Gauntlet measures performance on more than 30 tasks across six categories: world knowledge, common sense reasoning, language understanding, reading comprehension, symbolic problem solving, and programming.
The Hugging Face Open LLM Leaderboard measures the average of ARC-Challenge, HellaSwag, MMLU, TruthfulQA, Winogrande and GSM8k.
HumanEval measures coding ability.
Full evaluation details can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).
## Acknowledgements
The DBRX models were made possible thanks in large part to the open-source community, especially:
* The [MegaBlocks](https://arxiv.org/abs/2211.15841) library, which established a foundation for our MoE implementation.
* [PyTorch FSDP](https://arxiv.org/abs/2304.11277), which we built on for distributed training.
|
HooshvareLab/bert-base-parsbert-uncased | HooshvareLab | "2021-05-18T20:47:21Z" | 215,170 | 26 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"arxiv:2005.12515",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:04Z" | ## ParsBERT: Transformer-based Model for Persian Language Understanding
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
All the models (downstream tasks) are uncased and trained with whole word masking. (Coming soon, stay tuned.)
---
## Introduction
This model is pre-trained on a large Persian corpus with various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 2M documents. A large subset of this corpus was crawled manually.
As a part of ParsBERT methodology, an extensive pre-processing combining POS tagging and WordPiece segmentation was carried out to bring the corpus into a proper format. This process produces more than 40M true sentences.
## Evaluation
ParsBERT is evaluated on three NLP downstream tasks: Sentiment Analysis (SA), Text Classification, and Named Entity Recognition (NER). For this matter and due to insufficient resources, two large datasets for SA and two for text classification were manually composed, which are available for public use and benchmarking. ParsBERT outperformed all other language models, including multilingual BERT and other hybrid deep learning models for all tasks, improving the state-of-the-art performance in Persian language modeling.
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
### Sentiment Analysis (SA) task
| Dataset | ParsBERT | mBERT | DeepSentiPers |
|:--------------------------:|:---------:|:-----:|:-------------:|
| Digikala User Comments | 81.74* | 80.74 | - |
| SnappFood User Comments | 88.12* | 87.87 | - |
| SentiPers (Multi Class) | 71.11* | - | 69.33 |
| SentiPers (Binary Class) | 92.13* | - | 91.98 |
### Text Classification (TC) task
| Dataset | ParsBERT | mBERT |
|:-----------------:|:--------:|:-----:|
| Digikala Magazine | 93.59* | 90.72 |
| Persian News | 97.19* | 95.79 |
### Named Entity Recognition (NER) task
| Dataset | ParsBERT | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|:-------:|:--------:|:--------:|:----------:|:--------------:|:----------:|:----------------:|:------------:|
| PEYMA | 93.10* | 86.64 | - | 90.59 | - | 84.00 | - |
| ARMAN | 98.79* | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 |
**If you tested ParsBERT on a public dataset and you want to add your results to the table above, open a pull request or contact us. Also make sure to have your code available online so we can add it as a reference**
## How to use
### TensorFlow 2.0
```python
from transformers import AutoConfig, AutoTokenizer, TFAutoModel
config = AutoConfig.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
model = TFAutoModel.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
text = "ما در هوشواره معتقدیم با انتقال صحیح دانش و آگاهی، همه افراد میتوانند از ابزارهای هوشمند استفاده کنند. شعار ما هوش مصنوعی برای همه است."
tokenizer.tokenize(text)
>>> ['ما', 'در', 'هوش', '##واره', 'معتقدیم', 'با', 'انتقال', 'صحیح', 'دانش', 'و', 'اگاهی', '،', 'همه', 'افراد', 'میتوانند', 'از', 'ابزارهای', 'هوشمند', 'استفاده', 'کنند', '.', 'شعار', 'ما', 'هوش', 'مصنوعی', 'برای', 'همه', 'است', '.']
```
### Pytorch
```python
from transformers import AutoConfig, AutoTokenizer, AutoModel
config = AutoConfig.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
model = AutoModel.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
```
## NLP Tasks Tutorial
Coming soon, stay tuned.
## Cite
Please cite the following paper in your publication if you are using [ParsBERT](https://arxiv.org/abs/2005.12515) in your research:
```markdown
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Acknowledgments
We hereby, express our gratitude to the [Tensorflow Research Cloud (TFRC) program](https://tensorflow.org/tfrc) for providing us with the necessary computation resources. We also thank [Hooshvare](https://hooshvare.com) Research Group for facilitating dataset gathering and scraping online text resources.
## Contributors
- Mehrdad Farahani: [Linkedin](https://www.linkedin.com/in/m3hrdadfi/), [Twitter](https://twitter.com/m3hrdadfi), [Github](https://github.com/m3hrdadfi)
- Mohammad Gharachorloo: [Linkedin](https://www.linkedin.com/in/mohammad-gharachorloo/), [Twitter](https://twitter.com/MGharachorloo), [Github](https://github.com/baarsaam)
- Marzieh Farahani: [Linkedin](https://www.linkedin.com/in/marziehphi/), [Twitter](https://twitter.com/marziehphi), [Github](https://github.com/marziehphi)
- Mohammad Manthouri: [Linkedin](https://www.linkedin.com/in/mohammad-manthouri-aka-mansouri-07030766/), [Twitter](https://twitter.com/mmanthouri), [Github](https://github.com/mmanthouri)
- Hooshvare Team: [Official Website](https://hooshvare.com/), [Linkedin](https://www.linkedin.com/company/hooshvare), [Twitter](https://twitter.com/hooshvare), [Github](https://github.com/hooshvare), [Instagram](https://www.instagram.com/hooshvare/)
## Releases
### Release v0.1 (May 27, 2019)
This is the first version of our ParsBERT based on BERT<sub>BASE</sub>
|
facebook/wav2vec2-base | facebook | "2021-12-28T12:44:31Z" | 214,640 | 63 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Base
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
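Even without a tokenizer, the pretrained encoder can be used as-is to extract latent speech representations. A minimal sketch (not part of the original card) with a 16kHz sample:

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")

# Load a dummy 16kHz LibriSpeech sample and extract frame-level hidden states.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]
inputs = feature_extractor(sample["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, time_frames, 768)
```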
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
openai/whisper-large-v2 | openai | "2024-02-29T10:57:50Z" | 214,301 | 1,586 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"arxiv:2212.04356",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-12-05T18:42:20Z" | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
Compared to the Whisper large model, the large-v2 model is trained for 2.5x more epochs with added regularization
for improved performance.
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly:
```python
model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe")
```
Which forces the model to predict in English under the task of speech recognition.
## Transcription
### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```
## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.
### French to English
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
## Evaluation
This code snippet shows how to evaluate Whisper Large on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
3.0003583080317572
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-large-v2",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
google/mt5-small | google | "2023-09-18T09:35:27Z" | 214,139 | 84 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"mt5",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:mc4",
"arxiv:2010.11934",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
license: apache-2.0
---
[Google's mT5](https://github.com/google-research/multilingual-t5)
mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: mT5 was only pre-trained on mC4, without any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
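For illustration, a minimal fine-tuning sketch with 🤗 Transformers is shown below; the sentence pair, task prefix and learning rate are placeholders, and a real setup would use `Seq2SeqTrainer` on a full dataset.
```python
import torch
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# one illustrative text-to-text training pair (labels are just the tokenized target text)
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

# a single gradient step, for demonstration only
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```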
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5)
Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel*
## Abstract
The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available. |
microsoft/infoxlm-large | microsoft | "2021-08-04T11:43:05Z" | 212,640 | 10 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2007.07834",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | # InfoXLM
**InfoXLM** (NAACL 2021, [paper](https://arxiv.org/pdf/2007.07834.pdf), [repo](https://github.com/microsoft/unilm/tree/master/infoxlm), [model](https://huggingface.co/microsoft/infoxlm-base)) InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training.
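For reference, a minimal sketch of loading this checkpoint with 🤗 Transformers for masked-language-model inference (the example sentence is illustrative; like XLM-R, the model is normally fine-tuned for downstream cross-lingual tasks):
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("microsoft/infoxlm-large")
model = AutoModelForMaskedLM.from_pretrained("microsoft/infoxlm-large")

# InfoXLM uses the XLM-R tokenizer, so the mask token is "<mask>"
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("Paris is the <mask> of France."))
```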
**MD5**
```
05b95b7d977450b364f8ea3269391953 config.json
c19438359fed6d36b0c1bbb107929579 pytorch_model.bin
bf25eb5120ad92ef5c7d8596b5dc4046 sentencepiece.bpe.model
eedbd60a7268b9fc45981b849664f747 tokenizer.json
```
**BibTeX**
```
@inproceedings{chi-etal-2021-infoxlm,
title = "{I}nfo{XLM}: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training",
author={Chi, Zewen and Dong, Li and Wei, Furu and Yang, Nan and Singhal, Saksham and Wang, Wenhui and Song, Xia and Mao, Xian-Ling and Huang, Heyan and Zhou, Ming},
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.280",
doi = "10.18653/v1/2021.naacl-main.280",
pages = "3576--3588",}
``` |
facebook/encodec_32khz | facebook | "2023-09-04T16:32:53Z" | 212,599 | 14 | transformers | [
"transformers",
"pytorch",
"safetensors",
"encodec",
"feature-extraction",
"arxiv:2306.05284",
"region:us"
] | feature-extraction | "2023-06-15T12:01:17Z" | ---
inference: false
---
![encodec image](https://github.com/facebookresearch/encodec/raw/2d29d9353c2ff0ab1aeadc6a3d439854ee77da3e/architecture.png)
# Model Card for EnCodec
This model card provides details and information about EnCodec 32kHz, a state-of-the-art real-time audio codec developed by Meta AI.
This EnCodec checkpoint was trained specifically as part of the [MusicGen project](https://huggingface.co/docs/transformers/main/model_doc/musicgen),
and is intended to be used in conjunction with the MusicGen models.
## Model Details
### Model Description
EnCodec is a high-fidelity audio codec leveraging neural networks. It introduces a streaming encoder-decoder architecture with quantized latent space, trained in an end-to-end fashion.
The model simplifies and speeds up training using a single multiscale spectrogram adversary that efficiently reduces artifacts and produces high-quality samples.
It also includes a novel loss balancer mechanism that stabilizes training by decoupling the choice of hyperparameters from the typical scale of the loss.
Additionally, lightweight Transformer models are used to further compress the obtained representation while maintaining real-time performance. This variant of EnCodec is
trained on 20k hours of music data, consisting of an internal dataset of 10K high-quality music tracks, and on the ShutterStock and Pond5 music datasets.
- **Developed by:** Meta AI
- **Model type:** Audio Codec
### Model Sources
- **Repository:** [GitHub Repository](https://github.com/facebookresearch/audiocraft)
- **Paper:** [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
EnCodec can be used directly as an audio codec for real-time compression and decompression of audio signals.
It provides high-quality audio compression and efficient decoding. The model was trained on various bandwidths, which can be specified when encoding (compressing) and decoding (decompressing).
Two different setups exist for EnCodec:
- Non-streamable: the input audio is split into chunks of 1 second, with an overlap of 10 ms, which are then encoded.
- Streamable: weight normalization is used on the convolution layers, and the input is not split into chunks but rather padded on the left.
### Downstream Use
This variant of EnCodec is designed to be used in conjunction with the official [MusicGen checkpoints](https://huggingface.co/models?search=facebook/musicgen-).
However, it can also be used standalone to encode audio files.
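As a sketch of that downstream path, the snippet below (adapted from the Transformers MusicGen documentation; the text prompt is illustrative) generates audio with a MusicGen checkpoint, which uses this 32 kHz EnCodec codec as its audio encoder/decoder:
```python
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

inputs = processor(text=["80s pop track with bassy drums and synth"], padding=True, return_tensors="pt")
# the generated tokens are decoded back to a waveform by the 32 kHz EnCodec decoder
audio_values = model.generate(**inputs, max_new_tokens=256)

sampling_rate = model.config.audio_encoder.sampling_rate  # 32000 for this codec
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```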
## How to Get Started with the Model
Use the following code to get started with the EnCodec model using a dummy example from the LibriSpeech dataset (~9MB). First, install the required Python packages:
```
pip install --upgrade pip
pip install --upgrade transformers datasets[audio]
```
Then load an audio sample, and run a forward pass of the model:
```python
from datasets import load_dataset, Audio
from transformers import EncodecModel, AutoProcessor
# load a demonstration dataset
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
# load the model + processor (for pre-processing the audio); this card describes the 32 kHz checkpoint
model = EncodecModel.from_pretrained("facebook/encodec_32khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_32khz")
# cast the audio data to the correct sampling rate for the model
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
audio_sample = librispeech_dummy[0]["audio"]["array"]
# pre-process the inputs
inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")
# explicitly encode then decode the audio inputs
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]
# or the equivalent with a forward pass
audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values
```
## Evaluation
For evaluation results, refer to the [MusicGen evaluation scores](https://huggingface.co/facebook/musicgen-large#evaluation-results).
## Summary
EnCodec is a state-of-the-art real-time neural audio compression model that excels in producing high-fidelity audio samples at various sample rates and bandwidths.
The model's performance was evaluated across different settings, ranging from 24kHz monophonic at 1.5 kbps to 48kHz stereophonic, showcasing both subjective and
objective results. Notably, EnCodec incorporates a novel spectrogram-only adversarial loss, effectively reducing artifacts and enhancing sample quality.
Training stability and interpretability were further enhanced through the introduction of a gradient balancer for the loss weights.
Additionally, the study demonstrated that a compact Transformer model can be employed to achieve an additional bandwidth reduction of up to 40% without compromising
quality, particularly in applications where low latency is not critical (e.g., music streaming).
## Citation
**BibTeX:**
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
``` |
peft-internal-testing/tiny-clip-text-2 | peft-internal-testing | "2024-03-06T10:52:08Z" | 212,322 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"clip_text_model",
"endpoints_compatible",
"region:us"
] | null | "2023-09-20T16:25:32Z" | Entry not found |
SG161222/RealVisXL_V4.0 | SG161222 | "2024-04-12T15:36:22Z" | 211,141 | 102 | diffusers | [
"diffusers",
"safetensors",
"license:openrail++",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-02-12T16:20:30Z" | ---
license: openrail++
---
<b>This model is available on <a href="https://www.mage.space/">Mage.Space</a> (main sponsor)</b><br>
<b>You can support me directly on Boosty - https://boosty.to/sg_161222</b><br>
<b>It's important! Read it!</b><br>
The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.<br>
The model is aimed at photorealism and can produce SFW and NSFW images of decent quality.<br>
CivitAI Page: https://civitai.com/models/139562/realvisxl-v40-turbo<br>
<b>Recommended Negative Prompt:</b><br>
(face asymmetry, eyes asymmetry, deformed eyes, open mouth)<br>
<b>or another negative prompt</b><br>
<b>Recommended Generation Parameters:</b><br>
Sampling Steps: 25+<br>
Sampling Method: DPM++ 2M Karras<br>
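For diffusers users, a rough equivalent of these settings might look like the sketch below (the prompt is illustrative, and the Karras-sigma DPM++ 2M scheduler and guidance scale are assumptions rather than official recommendations):
```python
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0", torch_dtype=torch.float16
).to("cuda")
# approximate "DPM++ 2M Karras"
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    prompt="closeup portrait photo of a woman, natural window light, 85mm",
    negative_prompt="(face asymmetry, eyes asymmetry, deformed eyes, open mouth)",
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("realvisxl_v40.png")
```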
<b>Recommended Hires Fix Parameters:</b><br>
Hires steps: 10+<br>
Upscaler: 4x-UltraSharp upscaler / or another<br>
Denoising strength: 0.1 - 0.5<br>
Upscale by: 1.1-2.0<br> |
LanguageBind/Video-LLaVA-7B | LanguageBind | "2024-04-09T13:32:08Z" | 209,609 | 77 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llava",
"text-generation",
"arxiv:2311.10122",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-17T05:09:17Z" | ---
license: apache-2.0
---
<p align="center">
<img src="https://z1.ax1x.com/2023/11/07/pil4sqH.png" width="150" style="margin-bottom: 0.2;"/>
</p>
<h2 align="center"> <a href="https://arxiv.org/abs/2311.10122">Video-LLaVA: Learning United Visual Representation by Alignment Before Projection</a></h2>
<h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for the latest updates. </h5>
## 📰 News
* **[2024.01.27]** 👀👀👀 Our [MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA) is released! A sparse model with 3B parameters outperformed the dense model with 7B parameters.
* **[2024.01.17]** 🔥🔥🔥 Our [LanguageBind](https://github.com/PKU-YuanGroup/LanguageBind) has been accepted at ICLR 2024!
* **[2024.01.16]** 🔥🔥🔥 We reorganize the code and support LoRA fine-tuning, checking [finetune_lora.sh](scripts/v1_5/finetune_lora.sh).
* **[2023.11.30]** 🤝 Thanks to the generous contributions of the community, the [OpenXLab's demo](https://openxlab.org.cn/apps/detail/houshaowei/Video-LLaVA) is now accessible.
* **[2023.11.23]** We are training a new and powerful model.
* **[2023.11.21]** 🤝 Check out the [replicate demo](https://replicate.com/nateraw/video-llava), created by [@nateraw](https://github.com/nateraw), who has generously supported our research!
* **[2023.11.20]** 🤗 [Hugging Face demo](https://huggingface.co/spaces/LanguageBind/Video-LLaVA) and **all codes & datasets** are available now! Welcome to **watch** 👀 this repository for the latest updates.
## 😮 Highlights
Video-LLaVA exhibits remarkable interactive capabilities between images and videos, despite the absence of image-video pairs in the dataset.
### 💡 Simple baseline, learning united visual representation by alignment before projection
- With **the binding of unified visual representations to the language feature space**, we enable an LLM to perform visual reasoning capabilities on both images and videos simultaneously.
### 🔥 High performance, complementary learning with video and image
- Extensive experiments demonstrate **the complementarity of modalities**, showcasing significant superiority when compared to models specifically designed for either images or videos.
## 🤗 Demo
### Gradio Web UI
We highly recommend trying out our web demo with the following command, which incorporates all features currently supported by Video-LLaVA. We also provide an [online demo](https://huggingface.co/spaces/LanguageBind/Video-LLaVA) on Hugging Face Spaces.
```bash
python -m videollava.serve.gradio_web_server
```
### CLI Inference
```bash
python -m videollava.serve.cli --model-path "LanguageBind/Video-LLaVA-7B" --file "path/to/your/video.mp4" --load-4bit
```
```bash
python -m videollava.serve.cli --model-path "LanguageBind/Video-LLaVA-7B" --file "path/to/your/image.jpg" --load-4bit
```
## 🛠️ Requirements and Installation
* Python >= 3.10
* Pytorch == 2.0.1
* CUDA Version >= 11.7
* Install required packages:
```bash
git clone https://github.com/PKU-YuanGroup/Video-LLaVA
cd Video-LLaVA
conda create -n videollava python=3.10 -y
conda activate videollava
pip install --upgrade pip # enable PEP 660 support
pip install -e .
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
pip install decord opencv-python git+https://github.com/facebookresearch/pytorchvideo.git@28fe037d212663c6a24f373b94cc5d478c8c1a1d
```
## 🤖 API
**We open source all codes.** If you want to load the model (e.g. ```LanguageBind/Video-LLaVA-7B```) on local, you can use the following code snippets.
### Inference for image
```python
import torch
from videollava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from videollava.conversation import conv_templates, SeparatorStyle
from videollava.model.builder import load_pretrained_model
from videollava.utils import disable_torch_init
from videollava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria


def main():
    disable_torch_init()
    image = 'videollava/serve/examples/extreme_ironing.jpg'
    inp = 'What is unusual about this image?'
    model_path = 'LanguageBind/Video-LLaVA-7B'
    cache_dir = 'cache_dir'
    device = 'cuda'
    load_4bit, load_8bit = True, False
    model_name = get_model_name_from_path(model_path)
    tokenizer, model, processor, _ = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device, cache_dir=cache_dir)
    image_processor = processor['image']
    conv_mode = "llava_v1"
    conv = conv_templates[conv_mode].copy()
    roles = conv.roles

    image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values']
    if type(image_tensor) is list:
        tensor = [image.to(model.device, dtype=torch.float16) for image in image_tensor]
    else:
        tensor = image_tensor.to(model.device, dtype=torch.float16)

    print(f"{roles[1]}: {inp}")
    inp = DEFAULT_IMAGE_TOKEN + '\n' + inp
    conv.append_message(conv.roles[0], inp)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()
    input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
    stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
    keywords = [stop_str]
    stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)

    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images=tensor,
            do_sample=True,
            temperature=0.2,
            max_new_tokens=1024,
            use_cache=True,
            stopping_criteria=[stopping_criteria])

    outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:]).strip()
    print(outputs)


if __name__ == '__main__':
    main()
```
### Inference for video
```python
import torch
from videollava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from videollava.conversation import conv_templates, SeparatorStyle
from videollava.model.builder import load_pretrained_model
from videollava.utils import disable_torch_init
from videollava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria


def main():
    disable_torch_init()
    video = 'videollava/serve/examples/sample_demo_1.mp4'
    inp = 'Why is this video funny?'
    model_path = 'LanguageBind/Video-LLaVA-7B'
    cache_dir = 'cache_dir'
    device = 'cuda'
    load_4bit, load_8bit = True, False
    model_name = get_model_name_from_path(model_path)
    tokenizer, model, processor, _ = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device, cache_dir=cache_dir)
    video_processor = processor['video']
    conv_mode = "llava_v1"
    conv = conv_templates[conv_mode].copy()
    roles = conv.roles

    video_tensor = video_processor(video, return_tensors='pt')['pixel_values']
    if type(video_tensor) is list:
        tensor = [video.to(model.device, dtype=torch.float16) for video in video_tensor]
    else:
        tensor = video_tensor.to(model.device, dtype=torch.float16)

    print(f"{roles[1]}: {inp}")
    inp = ' '.join([DEFAULT_IMAGE_TOKEN] * model.get_video_tower().config.num_frames) + '\n' + inp
    conv.append_message(conv.roles[0], inp)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()
    input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
    stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
    keywords = [stop_str]
    stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)

    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images=tensor,
            do_sample=True,
            temperature=0.1,
            max_new_tokens=1024,
            use_cache=True,
            stopping_criteria=[stopping_criteria])

    outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:]).strip()
    print(outputs)


if __name__ == '__main__':
    main()
```
## 🗝️ Training & Validating
The training & validating instruction is in [TRAIN_AND_VALIDATE.md](TRAIN_AND_VALIDATE.md).
## 👍 Acknowledgement
* [LLaVA](https://github.com/haotian-liu/LLaVA) The codebase we built upon; an efficient large language and vision assistant.
* [Video-ChatGPT](https://github.com/mbzuai-oryx/Video-ChatGPT) Great job contributing the evaluation code and dataset.
## 🙌 Related Projects
* [LanguageBind](https://github.com/PKU-YuanGroup/LanguageBind) An open source five modalities language-based retrieval framework.
* [Chat-UniVi](https://github.com/PKU-YuanGroup/Chat-UniVi) This framework empowers the model to efficiently utilize a limited number of visual tokens.
## 🔒 License
* The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/PKU-YuanGroup/Video-LLaVA/blob/main/LICENSE) file.
* The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
## ✏️ Citation
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:.
```BibTeX
@article{lin2023video,
title={Video-LLaVA: Learning United Visual Representation by Alignment Before Projection},
author={Lin, Bin and Zhu, Bin and Ye, Yang and Ning, Munan and Jin, Peng and Yuan, Li},
journal={arXiv preprint arXiv:2311.10122},
year={2023}
}
```
```BibTeX
@article{zhu2023languagebind,
title={LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment},
author={Zhu, Bin and Lin, Bin and Ning, Munan and Yan, Yang and Cui, Jiaxi and Wang, HongFa and Pang, Yatian and Jiang, Wenhao and Zhang, Junwu and Li, Zongwei and others},
journal={arXiv preprint arXiv:2310.01852},
year={2023}
}
```
<!---->
## ✨ Star History
[![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/Video-LLaVA&type=Date)](https://star-history.com/#PKU-YuanGroup/Video-LLaVA&Date)
## 🤝 Contributors
<a href="https://github.com/PKU-YuanGroup/Video-LLaVA/graphs/contributors">
<img src="https://contrib.rocks/image?repo=PKU-YuanGroup/Video-LLaVA" />
</a>
|
dccuchile/bert-base-spanish-wwm-uncased | dccuchile | "2024-01-18T01:46:43Z" | 208,649 | 55 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"masked-lm",
"es",
"arxiv:1904.09077",
"arxiv:1906.01502",
"arxiv:1812.10464",
"arxiv:1901.07291",
"arxiv:1904.02099",
"arxiv:1906.01569",
"arxiv:1908.11828",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language:
- es
tags:
- masked-lm
---
# BETO: Spanish BERT
BETO is a [BERT model](https://github.com/google-research/bert) trained on a [big Spanish corpus](https://github.com/josecannete/spanish-corpora). BETO is of size similar to a BERT-Base and was trained with the Whole Word Masking technique. Below you find Tensorflow and Pytorch checkpoints for the uncased and cased versions, as well as some results for Spanish benchmarks comparing BETO with [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) as well as other (not BERT-based) models.
## Download
| | | | |
|-|:--------:|:-----:|:----:|
|BETO uncased|[tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/pytorch_weights.tar.gz) | [vocab](./config/uncased_2M/vocab.txt), [config](./config/uncased_2M/config.json) |
|BETO cased| [tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/pytorch_weights.tar.gz) | [vocab](./config/cased_2M/vocab.txt), [config](./config/cased_2M/config.json) |
All models use a vocabulary of about 31k BPE subwords constructed using SentencePiece and were trained for 2M steps.
## Benchmarks
The following table shows some BETO results in the Spanish version of every task.
We compare BETO (cased and uncased) with the Best Multilingual BERT results that
we found in the literature (as of October 2019).
The table also shows some alternative methods for the same tasks (not necessarily BERT-based methods).
References for all methods can be found [here](#references).
|Task | BETO-cased | BETO-uncased | Best Multilingual BERT | Other results |
|-------|--------------:|--------------:|--------------------------:|-------------------------------:|
|[POS](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-1827) | **98.97** | 98.44 | 97.10 [2] | 98.91 [6], 96.71 [3] |
|[NER-C](https://www.kaggle.com/nltkdata/conll-corpora) | [**88.43**](https://github.com/gchaperon/beto-benchmarks/blob/master/conll2002/dev_results_beto-cased_conll2002.txt) | 82.67 | 87.38 [2] | 87.18 [3] |
|[MLDoc](https://github.com/facebookresearch/MLDoc) | [95.60](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-cased_mldoc.txt) | [**96.12**](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-uncased_mldoc.txt) | 95.70 [2] | 88.75 [4] |
|[PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx) | 89.05 | 89.55 | 90.70 [8] | |
|[XNLI](https://github.com/facebookresearch/XNLI) | **82.01** | 80.15 | 78.50 [2] | 80.80 [5], 77.80 [1], 73.15 [4]|
## Example of use
For further details on how to use BETO you can visit the [🤗Huggingface Transformers library](https://github.com/huggingface/transformers), starting by the [Quickstart section](https://huggingface.co/transformers/quickstart.html).
BETO models can be accessed simply as [`'dccuchile/bert-base-spanish-wwm-cased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) and [`'dccuchile/bert-base-spanish-wwm-uncased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) by using the Transformers library.
An example on how to download and use the models in this page can be found in [this colab notebook](https://colab.research.google.com/drive/1pYOYsCU59GBOwztkWCw5PTsqBiJbRy4S?usp=sharing).
(We will soon add a more detailed step-by-step tutorial in Spanish for newcomers 😉)
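In the meantime, a minimal sketch of running the uncased checkpoint for masked-word prediction (the example sentence is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
model = AutoModelForMaskedLM.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")

# the uncased checkpoint uses the standard BERT [MASK] token
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("los libros estan en la [MASK]."))
```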
## Acknowledgments
We thank [Adereso](https://www.adere.so/) for kindly providing support for training BETO-uncased, and the [Millennium Institute for Foundational Research on Data](https://imfd.cl/en/)
that provided support for training BETO-cased. Also thanks to Google for helping us with the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) program.
## Citation
[Spanish Pre-Trained BERT Model and Evaluation Data](https://users.dcc.uchile.cl/~jperez/papers/pml4dc2020.pdf)
To cite this resource in a publication please use the following:
```
@inproceedings{CaneteCFP2020,
title={Spanish Pre-Trained BERT Model and Evaluation Data},
author={Cañete, José and Chaperon, Gabriel and Fuentes, Rodrigo and Ho, Jou-Hui and Kang, Hojin and Pérez, Jorge},
booktitle={PML4DC at ICLR 2020},
year={2020}
}
```
## License Disclaimer
The license CC BY 4.0 best describes our intentions for our work. However we are not sure that all the datasets used to train BETO have licenses compatible with CC BY 4.0 (specially for commercial use). Please use at your own discretion and verify that the licenses of the original text resources match your needs.
## References
* [1] [Original Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md)
* [2] [Multilingual BERT on "Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT"](https://arxiv.org/pdf/1904.09077.pdf)
* [3] [Multilingual BERT on "How Multilingual is Multilingual BERT?"](https://arxiv.org/pdf/1906.01502.pdf)
* [4] [LASER](https://arxiv.org/abs/1812.10464)
* [5] [XLM (MLM+TLM)](https://arxiv.org/pdf/1901.07291.pdf)
* [6] [UDPipe on "75 Languages, 1 Model: Parsing Universal Dependencies Universally"](https://arxiv.org/pdf/1904.02099.pdf)
* [7] [Multilingual BERT on "Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation"](https://arxiv.org/pdf/1906.01569.pdf)
* [8] [Multilingual BERT on "PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification"](https://arxiv.org/abs/1908.11828)
|
distilbert/distilbert-base-cased-distilled-squad | distilbert | "2024-05-06T13:46:31Z" | 207,983 | 182 | transformers | [
"transformers",
"pytorch",
"tf",
"rust",
"safetensors",
"openvino",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:04Z" | ---
language: en
license: apache-2.0
datasets:
- squad
metrics:
- squad
model-index:
- name: distilbert-base-cased-distilled-squad
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- type: exact_match
value: 79.5998
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTViZDA2Y2E2NjUyMjNjYjkzNTUzODc5OTk2OTNkYjQxMDRmMDhlYjdmYWJjYWQ2N2RlNzY1YmI3OWY1NmRhOSIsInZlcnNpb24iOjF9.ZJHhboAMwsi3pqU-B-XKRCYP_tzpCRb8pEjGr2Oc-TteZeoWHI8CXcpDxugfC3f7d_oBcKWLzh3CClQxBW1iAQ
- type: f1
value: 86.9965
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWZlMzY2MmE1NDNhOGNjNWRmODg0YjQ2Zjk5MjUzZDQ2MDYxOTBlMTNhNzQ4NTA2NjRmNDU3MGIzMTYwMmUyOSIsInZlcnNpb24iOjF9.z0ZDir87aT7UEmUeDm8Uw0oUdAqzlBz343gwnsQP3YLfGsaHe-jGlhco0Z7ISUd9NokyCiJCRc4NNxJQ83IuCw
---
# DistilBERT base cased distilled SQuAD
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than *bert-base-uncased*, runs 60% faster while preserving over 95% of BERT's performances as measured on the GLUE language understanding benchmark.
This model is a fine-tune checkpoint of [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased), fine-tuned using (a second step of) knowledge distillation on [SQuAD v1.1](https://huggingface.co/datasets/squad).
- **Developed by:** Hugging Face
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** Apache 2.0
- **Related Models:** [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased)
- **Resources for more information:**
- See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including this model)
- See [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure
## How to Get Started with the Model
Use the code below to get started with the model.
```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-cased-distilled-squad')
>>> context = r"""
... Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
... question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
... a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script.
... """
>>> result = question_answerer(question="What is a good example of a question answering dataset?", context=context)
>>> print(
... f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
...)
Answer: 'SQuAD dataset', score: 0.5152, start: 147, end: 160
```
Here is how to use this model in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertModel
import torch
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased-distilled-squad')
model = DistilBertModel.from_pretrained('distilbert-base-cased-distilled-squad')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs)
```
And in TensorFlow:
```python
from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering
import tensorflow as tf
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```
## Uses
This model can be used for question answering.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-cased-distilled-squad')
>>> context = r"""
... Alice is sitting on the bench. Bob is sitting next to her.
... """
>>> result = question_answerer(question="Who is the CEO?", context=context)
>>> print(
... f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
...)
Answer: 'Bob', score: 0.7527, start: 32, end: 35
```
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The [distilbert-base-cased model](https://huggingface.co/distilbert-base-cased) was trained using the same data as the [distilbert-base-uncased model](https://huggingface.co/distilbert-base-uncased). The [distilbert-base-uncased model](https://huggingface.co/distilbert-base-uncased) describes its training data as:
> DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
To learn more about the SQuAD v1.1 dataset, see the [SQuAD v1.1 data card](https://huggingface.co/datasets/squad).
#### Training Procedure
##### Preprocessing
See the [distilbert-base-cased model card](https://huggingface.co/distilbert-base-cased) for further details.
##### Pretraining
See the [distilbert-base-cased model card](https://huggingface.co/distilbert-base-cased) for further details.
## Evaluation
As discussed in the [model repository](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md)
> This model reaches a F1 score of 87.1 on the [SQuAD v1.1] dev set (for comparison, BERT bert-base-cased version reaches a F1 score of 88.7).
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/pdf/1910.01108.pdf). Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD.
- **Hardware Type:** 8 16GB V100 GPUs
- **Hours used:** 90 hours
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://arxiv.org/abs/1910.01108) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@inproceedings{sanh2019distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
booktitle={NeurIPS EMC^2 Workshop},
year={2019}
}
```
APA:
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
## Model Card Authors
This model card was written by the Hugging Face team.
|
bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF | bartowski | "2024-06-24T19:17:34Z" | 205,905 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"Not-for-all-Audiences",
"text-generation",
"base_model:bosonai/Higgs-Llama-3-70B",
"base_model:abacusai/Smaug-Llama-3-70B-Instruct-32K",
"base_model:Sao10K/L3-70B-Euryale-v2.1",
"base_model:abacusai/Smaug-Llama-3-70B-Instruct",
"base_model:turboderp/Cat-Llama-3-70B-instruct",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-24T16:58:41Z" | ---
base_model:
- bosonai/Higgs-Llama-3-70B
- abacusai/Smaug-Llama-3-70B-Instruct-32K
- Sao10K/L3-70B-Euryale-v2.1
- abacusai/Smaug-Llama-3-70B-Instruct
- turboderp/Cat-Llama-3-70B-instruct
library_name: transformers
tags:
- mergekit
- merge
- Not-for-all-Audiences
license: llama3
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of New-Dawn-Llama-3-70B-32K-v1.0
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3197">b3197</a> for quantization.
Original model: https://huggingface.co/sophosympatheia/New-Dawn-Llama-3-70B-32K-v1.0
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [New-Dawn-Llama-3-70B-32K-v1.0-Q8_0.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/tree/main/New-Dawn-Llama-3-70B-32K-v1.0-Q8_0.gguf) | Q8_0 | 74.97GB | Extremely high quality, generally unneeded but max available quant. |
| [New-Dawn-Llama-3-70B-32K-v1.0-Q6_K.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/tree/main/New-Dawn-Llama-3-70B-32K-v1.0-Q6_K.gguf) | Q6_K | 57.88GB | Very high quality, near perfect, *recommended*. |
| [New-Dawn-Llama-3-70B-32K-v1.0-Q5_K_L.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/tree/main/New-Dawn-Llama-3-70B-32K-v1.0-Q5_K_L.gguf) | Q5_K_L | 52.56GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. High quality, *recommended*. |
| [New-Dawn-Llama-3-70B-32K-v1.0-Q5_K_M.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/blob/main/New-Dawn-Llama-3-70B-32K-v1.0-Q5_K_M.gguf) | Q5_K_M | 49.94GB | High quality, *recommended*. |
| [New-Dawn-Llama-3-70B-32K-v1.0-Q4_K_L.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/blob/main/New-Dawn-Llama-3-70B-32K-v1.0-Q4_K_L.gguf) | Q4_K_L | 45.27GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Good quality, uses about 4.83 bits per weight, *recommended*. |
| [New-Dawn-Llama-3-70B-32K-v1.0-Q4_K_M.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/blob/main/New-Dawn-Llama-3-70B-32K-v1.0-Q4_K_M.gguf) | Q4_K_M | 42.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [New-Dawn-Llama-3-70B-32K-v1.0-IQ4_XS.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/blob/main/New-Dawn-Llama-3-70B-32K-v1.0-IQ4_XS.gguf) | IQ4_XS | 37.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [New-Dawn-Llama-3-70B-32K-v1.0-Q3_K_M.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/blob/main/New-Dawn-Llama-3-70B-32K-v1.0-Q3_K_M.gguf) | Q3_K_M | 34.26GB | Even lower quality. |
| [New-Dawn-Llama-3-70B-32K-v1.0-IQ3_M.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/blob/main/New-Dawn-Llama-3-70B-32K-v1.0-IQ3_M.gguf) | IQ3_M | 31.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [New-Dawn-Llama-3-70B-32K-v1.0-Q3_K_S.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/blob/main/New-Dawn-Llama-3-70B-32K-v1.0-Q3_K_S.gguf) | Q3_K_S | 30.91GB | Low quality, not recommended. |
| [New-Dawn-Llama-3-70B-32K-v1.0-IQ3_XXS.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/blob/main/New-Dawn-Llama-3-70B-32K-v1.0-IQ3_XXS.gguf) | IQ3_XXS | 27.46GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [New-Dawn-Llama-3-70B-32K-v1.0-Q2_K.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/blob/main/New-Dawn-Llama-3-70B-32K-v1.0-Q2_K.gguf) | Q2_K | 26.37GB | Very low quality but surprisingly usable. |
| [New-Dawn-Llama-3-70B-32K-v1.0-IQ2_M.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/blob/main/New-Dawn-Llama-3-70B-32K-v1.0-IQ2_M.gguf) | IQ2_M | 24.11GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [New-Dawn-Llama-3-70B-32K-v1.0-IQ2_XS.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/blob/main/New-Dawn-Llama-3-70B-32K-v1.0-IQ2_XS.gguf) | IQ2_XS | 21.14GB | Lower quality, uses SOTA techniques to be usable. |
| [New-Dawn-Llama-3-70B-32K-v1.0-IQ2_XXS.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/blob/main/New-Dawn-Llama-3-70B-32K-v1.0-IQ2_XXS.gguf) | IQ2_XXS | 19.09GB | Lower quality, uses SOTA techniques to be usable. |
| [New-Dawn-Llama-3-70B-32K-v1.0-IQ1_M.gguf](https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/blob/main/New-Dawn-Llama-3-70B-32K-v1.0-IQ1_M.gguf) | IQ1_M | 16.75GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF --include "New-Dawn-Llama-3-70B-32K-v1.0-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF --include "New-Dawn-Llama-3-70B-32K-v1.0-Q8_0.gguf/*" --local-dir New-Dawn-Llama-3-70B-32K-v1.0-Q8_0
```
You can either specify a new local-dir (New-Dawn-Llama-3-70B-32K-v1.0-Q8_0) or download them all in place (./)
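Once downloaded, a quant can be run directly with llama.cpp's CLI. The invocation below is a rough sketch (binary name and flags correspond to recent llama.cpp builds; adjust the context size `-c` and GPU offload `-ngl` to your hardware, and wrap your text in the prompt format shown above for instruct-style use):
```
./llama-cli -m ./New-Dawn-Llama-3-70B-32K-v1.0-Q4_K_M.gguf \
  -c 8192 -ngl 99 -n 256 \
  -p "Write the opening paragraph of a story set at dawn."
```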
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
benjamin/gpt2-wechsel-german | benjamin | "2022-07-13T23:44:00Z" | 205,440 | 4 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: de
license: mit
---
# gpt2-wechsel-german
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
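A minimal generation sketch with 🤗 Transformers (the German prompt and sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-german")
out = generator("Der Sinn des Lebens ist", max_new_tokens=40, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```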
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
jbetker/wav2vec2-large-robust-ft-libritts-voxpopuli | jbetker | "2022-02-25T19:07:57Z" | 205,354 | 8 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | This checkpoint is a wav2vec2-large model that is useful for generating transcriptions with punctuation. It is intended for use in building transcriptions for TTS models, where punctuation is very important for prosody.
This model was created by fine-tuning the `facebook/wav2vec2-large-robust-ft-libri-960h` checkpoint on the [libritts](https://research.google/tools/datasets/libri-tts/) and [voxpopuli](https://github.com/facebookresearch/voxpopuli) datasets with a new vocabulary that includes punctuation.
The model gets a respectable WER of 4.45% on the librispeech validation set. The baseline, `facebook/wav2vec2-large-robust-ft-libri-960h`, got 4.3%.
Since the model was fine-tuned on clean audio, it is not well-suited for noisy audio like CommonVoice (though I may upload a checkpoint for that soon too). It still does pretty well, though.
The vocabulary is uploaded to the model hub as well `jbetker/tacotron_symbols`.
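For a quick test with 🤗 Transformers, something like the following should work, assuming the checkpoint bundles a matching processor and vocabulary (the dummy dataset is only for illustration; see the ocotillo repository linked below for the author's own transcription scripts):
```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("jbetker/wav2vec2-large-robust-ft-libritts-voxpopuli")
model = Wav2Vec2ForCTC.from_pretrained("jbetker/wav2vec2-large-robust-ft-libritts-voxpopuli")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```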
Check out my speech transcription script repo, [ocotillo](https://github.com/neonbjb/ocotillo) for usage examples: https://github.com/neonbjb/ocotillo |
Helsinki-NLP/opus-mt-it-en | Helsinki-NLP | "2023-08-16T11:58:49Z" | 203,328 | 15 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"it",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-it-en
* source languages: it
* target languages: en
* OPUS readme: [it-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-en/opus-2019-12-18.eval.txt)
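A minimal translation sketch with 🤗 Transformers (the Italian sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-it-en")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-it-en")

batch = tokenizer(["La vita è bella."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```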
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.it.en | 35.3 | 0.600 |
| newstest2009.it.en | 34.0 | 0.594 |
| Tatoeba.it.en | 70.9 | 0.808 |
|
openai/whisper-medium | openai | "2024-02-29T10:57:42Z" | 202,835 | 180 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"arxiv:2212.04356",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-09-26T06:52:52Z" | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-medium
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 2.9
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 5.9
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args:
language: hi
metrics:
- name: Test WER
type: wer
value: 53.87
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
This tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly:
```python
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")
```
This forces the model to predict in English under the task of speech recognition.
## Transcription
### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```
## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.
### French to English
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
## Evaluation
This code snippet shows how to evaluate Whisper Medium on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
2.900409225488902
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-medium",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
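As a minimal sketch of the data-preparation step described in that guide (the dummy LibriSpeech split and column names below are illustrative assumptions; a real fine-tuning run needs a substantially larger labelled corpus), each example is converted into log-Mel input features and tokenised label ids:
```python
from datasets import Audio, load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# load the processor (with target language/task) and the model to be fine-tuned
processor = WhisperProcessor.from_pretrained("openai/whisper-medium", language="english", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")

# tiny dummy dataset used purely for illustration; substitute your own labelled corpus
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare_example(batch):
    audio = batch["audio"]
    # inputs: log-Mel spectrogram features computed from the raw audio
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # labels: token ids of the target transcription
    batch["labels"] = processor.tokenizer(batch["text"]).input_ids
    return batch

ds = ds.map(prepare_example, remove_columns=ds.column_names)
```
The prepared dataset can then be passed to a `Seq2SeqTrainer` together with a data collator that pads the input features and labels, as described in the blog post.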
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
facebook/sam-vit-huge | facebook | "2024-01-11T19:23:32Z" | 202,086 | 107 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"sam",
"mask-generation",
"vision",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | mask-generation | "2023-04-10T13:51:24Z" | ---
license: apache-2.0
tags:
- vision
---
# Model Card for Segment Anything Model (SAM) - ViT Huge (ViT-H) version
<p>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-architecture.png" alt="Model architecture">
<em> Detailed architecture of Segment Anything Model (SAM).</em>
</p>
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Citation](#citation)
# TL;DR
[Link to original repository](https://github.com/facebookresearch/segment-anything)
| <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-beancans.png" alt="Snow" width="600" height="600"> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-dog-masks.png" alt="Forest" width="600" height="600"> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-car-seg.png" alt="Mountains" width="600" height="600"> |
|---------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|
The **Segment Anything Model (SAM)** produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a [dataset](https://segment-anything.com/dataset/index.html) of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks.
The abstract of the paper states:
> We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at [https://segment-anything.com](https://segment-anything.com) to foster research into foundation models for computer vision.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the original [SAM model card](https://github.com/facebookresearch/segment-anything).
# Model Details
The SAM model is made up of 3 modules:
- The `VisionEncoder`: a VIT based image encoder. It computes the image embeddings using attention on patches of the image. Relative Positional Embedding is used.
- The `PromptEncoder`: generates embeddings for points and bounding boxes
- The `MaskDecoder`: a two-way transformer which performs cross-attention between the image embedding and the point embeddings, and between the point embeddings and the image embeddings. Its outputs are fed to the `Neck`.
- The `Neck`: predicts the output masks based on the contextualized masks produced by the `MaskDecoder`.
# Usage
## Prompted-Mask-Generation
```python
from PIL import Image
import requests
from transformers import SamModel, SamProcessor
model = SamModel.from_pretrained("facebook/sam-vit-huge").to("cuda")
processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]] # 2D localization of a window
```
```python
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to("cuda")
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu())
scores = outputs.iou_scores
```
Among other arguments to generate masks, you can pass 2D locations on the approximate position of your object of interest, a bounding box wrapping the object of interest (the format should be the x, y coordinates of the top-left and bottom-right corners of the bounding box), or a segmentation mask. At the time of writing, passing text as input is not supported by the official model according to [the official repository](https://github.com/facebookresearch/segment-anything/issues/4#issuecomment-1497626844).
For more details, refer to this notebook, which shows a walk-through of how to use the model, with a visual example!
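As an illustration, here is a hedged sketch of prompting with a bounding box instead of a point, reusing `model`, `processor` and `raw_image` from the snippet above (the box coordinates are made-up values in `[x_min, y_min, x_max, y_max]` format):
```python
# made-up box roughly around the car window, in [x_min, y_min, x_max, y_max] format
input_boxes = [[[300, 450, 700, 800]]]

inputs = processor(raw_image, input_boxes=input_boxes, return_tensors="pt").to("cuda")
outputs = model(**inputs)

# post-process the predicted masks back to the original image resolution
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```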
## Automatic-Mask-Generation
The model can be used to generate segmentation masks in a "zero-shot" fashion, given an input image. The model is automatically prompted with a grid of `1024` points, which are all fed to the model.
The pipeline is made for automatic mask generation. The following snippet demonstrates how easily you can run it (on any device; simply pass the appropriate `points_per_batch` argument):
```python
from transformers import pipeline
generator = pipeline("mask-generation", device = 0, points_per_batch = 256)
image_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
outputs = generator(image_url, points_per_batch = 256)
```
Now to display the image:
```python
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
def show_mask(mask, ax, random_color=False):
if random_color:
color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
else:
color = np.array([30 / 255, 144 / 255, 255 / 255, 0.6])
h, w = mask.shape[-2:]
mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1)
ax.imshow(mask_image)
raw_image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
plt.imshow(np.array(raw_image))
ax = plt.gca()
for mask in outputs["masks"]:
show_mask(mask, ax=ax, random_color=True)
plt.axis("off")
plt.show()
```
This should give you the following ![car_mask_results](https://user-images.githubusercontent.com/48595927/233065719-abb53407-8693-4203-8323-63fbb6321615.png)
# Citation
If you use this model, please use the following BibTeX entry.
```
@article{kirillov2023segany,
title={Segment Anything},
author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
journal={arXiv:2304.02643},
year={2023}
}
``` |
ByteDance/Hyper-SD | ByteDance | "2024-05-13T13:00:08Z" | 201,907 | 558 | diffusers | [
"diffusers",
"lora",
"text-to-image",
"stable-diffusion",
"arxiv:2404.13686",
"license:openrail++",
"region:us"
] | text-to-image | "2024-04-20T06:34:54Z" | ---
license: openrail++
library_name: diffusers
inference: false
tags:
- lora
- text-to-image
- stable-diffusion
---
# Hyper-SD
Official Repository of the paper: *[Hyper-SD](https://arxiv.org/abs/2404.13686)*.
Project Page: https://hyper-sd.github.io/
![](./hypersd_tearser.jpg)
## News🔥🔥🔥
* May.13, 2024. 💥💥💥 The **12-Steps CFG-Preserved** [Hyper-SDXL-12steps-CFG-LoRA](https://huggingface.co/ByteDance/Hyper-SD/blob/main/Hyper-SDXL-12steps-CFG-lora.safetensors) and [Hyper-SD15-12steps-CFG-LoRA](https://huggingface.co/ByteDance/Hyper-SD/blob/main/Hyper-SD15-12steps-CFG-lora.safetensors) is also available now(support 5~8 guidance scales), this could be more practical with better trade-off between performance and speed. Enjoy! 💥💥💥
* Apr.30, 2024. Our **8-Steps CFG-Preserved** [Hyper-SDXL-8steps-CFG-LoRA](https://huggingface.co/ByteDance/Hyper-SD/blob/main/Hyper-SDXL-8steps-CFG-lora.safetensors) and [Hyper-SD15-8steps-CFG-LoRA](https://huggingface.co/ByteDance/Hyper-SD/blob/main/Hyper-SD15-8steps-CFG-lora.safetensors) is available now(support 5~8 guidance scales), we strongly recommend making the 8-step CFGLora a standard configuration for all SDXL and SD15 models!!!
* Apr.28, 2024. ComfyUI workflows on 1-Step Unified LoRA 🥰 with TCDScheduler to inference on different steps are [released](https://huggingface.co/ByteDance/Hyper-SD/tree/main/comfyui)! Remember to install ⭕️ [ComfyUI-TCD](https://github.com/JettHu/ComfyUI-TCD) in your `ComfyUI/custom_nodes` folder!!! You're encouraged to adjust the eta parameter to get better results 🌟!
* Apr.26, 2024. Thanks to @[Pete](https://huggingface.co/pngwn) for contributing to our [scribble demo](https://huggingface.co/spaces/ByteDance/Hyper-SD15-Scribble) with larger canvas right now 👏.
* Apr.24, 2024. The ComfyUI [workflow](https://huggingface.co/ByteDance/Hyper-SD/blob/main/comfyui/Hyper-SDXL-1step-Unet-workflow.json) and [checkpoint](https://huggingface.co/ByteDance/Hyper-SD/blob/main/Hyper-SDXL-1step-Unet-Comfyui.fp16.safetensors) on 1-Step SDXL UNet ✨ is also available! Don't forget ⭕️ to install the custom [scheduler](https://huggingface.co/ByteDance/Hyper-SD/tree/main/comfyui/ComfyUI-HyperSDXL1StepUnetScheduler) in your `ComfyUI/custom_nodes` folder!!!
* Apr.23, 2024. ComfyUI workflows on N-Steps LoRAs are [released](https://huggingface.co/ByteDance/Hyper-SD/tree/main/comfyui)! Worth a try for creators 💥!
* Apr.23, 2024. Our technical report 📚 is uploaded to [arXiv](https://arxiv.org/abs/2404.13686)! Many implementation details are provided and we welcome more discussions👏.
* Apr.21, 2024. Hyper-SD ⚡️ is highly compatible and work well with different base models and controlnets. To clarify, we also append the usage example of controlnet [here](https://huggingface.co/ByteDance/Hyper-SD#controlnet-usage).
* Apr.20, 2024. Our checkpoints and two demos 🤗 (i.e. [SD15-Scribble](https://huggingface.co/spaces/ByteDance/Hyper-SD15-Scribble) and [SDXL-T2I](https://huggingface.co/spaces/ByteDance/Hyper-SDXL-1Step-T2I)) are publicly available on [HuggingFace Repo](https://huggingface.co/ByteDance/Hyper-SD).
## Try our Hugging Face demos:
Hyper-SD Scribble demo host on [🤗 scribble](https://huggingface.co/spaces/ByteDance/Hyper-SD15-Scribble)
Hyper-SDXL One-step Text-to-Image demo host on [🤗 T2I](https://huggingface.co/spaces/ByteDance/Hyper-SDXL-1Step-T2I)
## Introduction
Hyper-SD is one of the new State-of-the-Art diffusion model acceleration techniques.
In this repository, we release the models distilled from [SDXL Base 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and [Stable-Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
## Checkpoints
* `Hyper-SDXL-Nstep-lora.safetensors`: Lora checkpoint, for SDXL-related models.
* `Hyper-SD15-Nstep-lora.safetensors`: Lora checkpoint, for SD1.5-related models.
* `Hyper-SDXL-1step-unet.safetensors`: Unet checkpoint distilled from SDXL-Base.
## Text-to-Image Usage
### SDXL-related models
#### 2-Steps, 4-Steps, 8-steps LoRA
Take the 2-steps LoRA as an example, you can also use other LoRAs for the corresponding inference steps setting.
```python
import torch
from diffusers import DiffusionPipeline, DDIMScheduler
from huggingface_hub import hf_hub_download
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
repo_name = "ByteDance/Hyper-SD"
# Take 2-steps lora as an example
ckpt_name = "Hyper-SDXL-2steps-lora.safetensors"
# Load model.
pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe.load_lora_weights(hf_hub_download(repo_name, ckpt_name))
pipe.fuse_lora()
# Ensure ddim scheduler timestep spacing set as trailing !!!
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
# lower eta results in more detail
prompt="a photo of a cat"
image=pipe(prompt=prompt, num_inference_steps=2, guidance_scale=0).images[0]
```
#### Unified LoRA (support 1 to 8 steps inference)
You can flexibly adjust the number of inference steps and the eta value to achieve the best performance.
```python
import torch
from diffusers import DiffusionPipeline, TCDScheduler
from huggingface_hub import hf_hub_download
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
repo_name = "ByteDance/Hyper-SD"
ckpt_name = "Hyper-SDXL-1step-lora.safetensors"
# Load model.
pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe.load_lora_weights(hf_hub_download(repo_name, ckpt_name))
pipe.fuse_lora()
# Use TCD scheduler to achieve better image quality
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
# Lower eta results in more detail for multi-steps inference
eta=1.0
prompt="a photo of a cat"
image=pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0, eta=eta).images[0]
```
#### 1-step SDXL Unet
Only for the single step inference.
```python
import torch
from diffusers import DiffusionPipeline, UNet2DConditionModel, LCMScheduler
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
repo_name = "ByteDance/Hyper-SD"
ckpt_name = "Hyper-SDXL-1step-Unet.safetensors"
# Load model.
unet = UNet2DConditionModel.from_config(base_model_id, subfolder="unet").to("cuda", torch.float16)
unet.load_state_dict(load_file(hf_hub_download(repo_name, ckpt_name), device="cuda"))
pipe = DiffusionPipeline.from_pretrained(base_model_id, unet=unet, torch_dtype=torch.float16, variant="fp16").to("cuda")
# Use LCM scheduler instead of ddim scheduler to support specific timestep number inputs
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# Set start timesteps to 800 in the one-step inference to get better results
prompt="a photo of a cat"
image=pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0, timesteps=[800]).images[0]
```
### SD1.5-related models
#### 2-Steps, 4-Steps, 8-steps LoRA
Take the 2-steps LoRA as an example, you can also use other LoRAs for the corresponding inference steps setting.
```python
import torch
from diffusers import DiffusionPipeline, DDIMScheduler
from huggingface_hub import hf_hub_download
base_model_id = "runwayml/stable-diffusion-v1-5"
repo_name = "ByteDance/Hyper-SD"
# Take 2-steps lora as an example
ckpt_name = "Hyper-SD15-2steps-lora.safetensors"
# Load model.
pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe.load_lora_weights(hf_hub_download(repo_name, ckpt_name))
pipe.fuse_lora()
# Ensure ddim scheduler timestep spacing set as trailing !!!
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
prompt="a photo of a cat"
image=pipe(prompt=prompt, num_inference_steps=2, guidance_scale=0).images[0]
```
#### Unified LoRA (support 1 to 8 steps inference)
You can flexibly adjust the number of inference steps and the eta value to achieve the best performance.
```python
import torch
from diffusers import DiffusionPipeline, TCDScheduler
from huggingface_hub import hf_hub_download
base_model_id = "runwayml/stable-diffusion-v1-5"
repo_name = "ByteDance/Hyper-SD"
ckpt_name = "Hyper-SD15-1step-lora.safetensors"
# Load model.
pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe.load_lora_weights(hf_hub_download(repo_name, ckpt_name))
pipe.fuse_lora()
# Use TCD scheduler to achieve better image quality
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
# Lower eta results in more detail for multi-steps inference
eta=1.0
prompt="a photo of a cat"
image=pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0, eta=eta).images[0]
```
## ControlNet Usage
### SDXL-related models
#### 2-Steps, 4-Steps, 8-steps LoRA
Take Canny Controlnet and 2-steps inference as an example:
```python
import torch
from diffusers.utils import load_image
import numpy as np
import cv2
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL, DDIMScheduler
from huggingface_hub import hf_hub_download
# Load original image
image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")
image = np.array(image)
# Prepare Canny Control Image
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
control_image = Image.fromarray(image)
control_image.save("control.png")
control_weight = 0.5 # recommended for good generalization
# Initialize pipeline
controlnet = ControlNetModel.from_pretrained(
"diffusers/controlnet-canny-sdxl-1.0",
torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-SDXL-2steps-lora.safetensors"))
# Ensure ddim scheduler timestep spacing set as trailing !!!
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
pipe.fuse_lora()
image = pipe("A chocolate cookie", num_inference_steps=2, image=control_image, guidance_scale=0, controlnet_conditioning_scale=control_weight).images[0]
image.save('image_out.png')
```
#### Unified LoRA (support 1 to 8 steps inference)
Take Canny Controlnet as an example:
```python
import torch
from diffusers.utils import load_image
import numpy as np
import cv2
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL, TCDScheduler
from huggingface_hub import hf_hub_download
# Load original image
image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")
image = np.array(image)
# Prepare Canny Control Image
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
control_image = Image.fromarray(image)
control_image.save("control.png")
control_weight = 0.5 # recommended for good generalization
# Initialize pipeline
controlnet = ControlNetModel.from_pretrained(
"diffusers/controlnet-canny-sdxl-1.0",
torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet, vae=vae, torch_dtype=torch.float16).to("cuda")
# Load Hyper-SD15-1step lora
pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-SDXL-1step-lora.safetensors"))
pipe.fuse_lora()
# Use TCD scheduler to achieve better image quality
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
# Lower eta results in more detail for multi-steps inference
eta=1.0
image = pipe("A chocolate cookie", num_inference_steps=4, image=control_image, guidance_scale=0, controlnet_conditioning_scale=control_weight, eta=eta).images[0]
image.save('image_out.png')
```
### SD1.5-related models
#### 2-Steps, 4-Steps, 8-steps LoRA
Take Canny Controlnet and 2-steps inference as an example:
```python
import torch
from diffusers.utils import load_image
import numpy as np
import cv2
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, DDIMScheduler
from huggingface_hub import hf_hub_download
controlnet_checkpoint = "lllyasviel/control_v11p_sd15_canny"
# Load original image
image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/input.png")
image = np.array(image)
# Prepare Canny Control Image
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
control_image = Image.fromarray(image)
control_image.save("control.png")
# Initialize pipeline
controlnet = ControlNetModel.from_pretrained(controlnet_checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-SD15-2steps-lora.safetensors"))
pipe.fuse_lora()
# Ensure ddim scheduler timestep spacing set as trailing !!!
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
image = pipe("a blue paradise bird in the jungle", num_inference_steps=2, image=control_image, guidance_scale=0).images[0]
image.save('image_out.png')
```
#### Unified LoRA (support 1 to 8 steps inference)
Take Canny Controlnet as an example:
```python
import torch
from diffusers.utils import load_image
import numpy as np
import cv2
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, TCDScheduler
from huggingface_hub import hf_hub_download
controlnet_checkpoint = "lllyasviel/control_v11p_sd15_canny"
# Load original image
image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/input.png")
image = np.array(image)
# Prepare Canny Control Image
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
control_image = Image.fromarray(image)
control_image.save("control.png")
# Initialize pipeline
controlnet = ControlNetModel.from_pretrained(controlnet_checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16).to("cuda")
# Load Hyper-SD15-1step lora
pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-SD15-1step-lora.safetensors"))
pipe.fuse_lora()
# Use TCD scheduler to achieve better image quality
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
# Lower eta results in more detail for multi-steps inference
eta=1.0
image = pipe("a blue paradise bird in the jungle", num_inference_steps=1, image=control_image, guidance_scale=0, eta=eta).images[0]
image.save('image_out.png')
```
## Comfyui Usage
* `Hyper-SDXL-Nsteps-lora.safetensors`: [text-to-image workflow](https://huggingface.co/ByteDance/Hyper-SD/blob/main/comfyui/Hyper-SDXL-Nsteps-lora-workflow.json)
* `Hyper-SD15-Nsteps-lora.safetensors`: [text-to-image workflow](https://huggingface.co/ByteDance/Hyper-SD/blob/main/comfyui/Hyper-SD15-Nsteps-lora-workflow.json)
* `Hyper-SDXL-1step-Unet-Comfyui.fp16.safetensors`: [text-to-image workflow](https://huggingface.co/ByteDance/Hyper-SD/blob/main/comfyui/Hyper-SDXL-1step-Unet-workflow.json)
* **REQUIREMENT / INSTALL** for 1-Step SDXL UNet: Please install our [scheduler folder](https://huggingface.co/ByteDance/Hyper-SD/tree/main/comfyui/ComfyUI-HyperSDXL1StepUnetScheduler) into your `ComfyUI/custom_nodes` to enable sampling from 800 timestep instead of 999.
* i.e. making sure the `ComfyUI/custom_nodes/ComfyUI-HyperSDXL1StepUnetScheduler` folder exist.
* For more details, please refer to our [technical report](https://arxiv.org/abs/2404.13686).
* `Hyper-SD15-1step-lora.safetensors`: [text-to-image workflow](https://huggingface.co/ByteDance/Hyper-SD/blob/main/comfyui/Hyper-SD15-1step-unified-lora-workflow.json)
* `Hyper-SDXL-1step-lora.safetensors`: [text-to-image workflow](https://huggingface.co/ByteDance/Hyper-SD/blob/main/comfyui/Hyper-SDXL-1step-unified-lora-workflow.json)
* **REQUIREMENT / INSTALL** for 1-Step Unified LoRAs: Please install the [ComfyUI-TCD](https://github.com/JettHu/ComfyUI-TCD) into your `ComfyUI/custom_nodes` to enable TCDScheduler with support of different inference steps (1~8) using single checkpoint.
* i.e. making sure the `ComfyUI/custom_nodes/ComfyUI-TCD` folder exist.
* You're encouraged to adjust the eta parameter in TCDScheduler to get better results.
## Citation
```bibtex
@misc{ren2024hypersd,
title={Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image Synthesis},
author={Yuxi Ren and Xin Xia and Yanzuo Lu and Jiacheng Zhang and Jie Wu and Pan Xie and Xing Wang and Xuefeng Xiao},
year={2024},
eprint={2404.13686},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR | cambridgeltl | "2023-06-14T19:00:30Z" | 201,661 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:2010.11784",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2022-03-02T23:29:05Z" | ---
language: multilingual
tags:
- biomedical
- lexical-semantics
- cross-lingual
datasets:
- UMLS
---
**[news]** A cross-lingual extension of SapBERT will appear in the main conference of **ACL 2021**! <br>
**[news]** SapBERT will appear in the conference proceedings of **NAACL 2021**!
### SapBERT-XLMR
SapBERT [(Liu et al. 2020)](https://arxiv.org/pdf/2010.11784.pdf) trained with [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) 2020AB, using [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) as the base model. Please use [CLS] as the representation of the input.
#### Extracting embeddings from SapBERT
The following script converts a list of strings (entity names) into embeddings.
```python
import numpy as np
import torch
from tqdm.auto import tqdm
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR").cuda()
# replace with your own list of entity names
all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"]
bs = 128 # batch size during inference
all_embs = []
for i in tqdm(np.arange(0, len(all_names), bs)):
toks = tokenizer.batch_encode_plus(all_names[i:i+bs],
padding="max_length",
max_length=25,
truncation=True,
return_tensors="pt")
toks_cuda = {}
for k,v in toks.items():
toks_cuda[k] = v.cuda()
cls_rep = model(**toks_cuda)[0][:,0,:] # use CLS representation as the embedding
all_embs.append(cls_rep.cpu().detach().numpy())
all_embs = np.concatenate(all_embs, axis=0)
```
For more details about training and eval, see SapBERT [github repo](https://github.com/cambridgeltl/sapbert).
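As a minimal, purely illustrative sketch of how these embeddings might be used for nearest-neighbour entity linking (reusing `tokenizer`, `model`, `all_names` and `all_embs` from the snippet above; the query string is made up, and in practice the candidate list would be a full UMLS vocabulary):
```python
# embed a query mention with the same [CLS] representation
query = "covid infection"
query_toks = tokenizer(query, padding="max_length", max_length=25, truncation=True, return_tensors="pt")
query_emb = model(**{k: v.cuda() for k, v in query_toks.items()})[0][:, 0, :].cpu().detach().numpy()

# cosine similarity between the query and every candidate name embedding
sims = (query_emb @ all_embs.T) / (
    np.linalg.norm(query_emb, axis=1, keepdims=True) * np.linalg.norm(all_embs, axis=1)
)
print(all_names[sims[0].argmax()])  # expected to be one of the COVID-related names
```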
### Citation
```bibtex
@inproceedings{liu2021learning,
title={Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking},
author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
booktitle={Proceedings of ACL-IJCNLP 2021},
month = aug,
year={2021}
}
``` |
sentence-transformers/paraphrase-albert-small-v2 | sentence-transformers | "2024-03-27T12:15:35Z" | 200,307 | 6 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"rust",
"safetensors",
"albert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:s2orc",
"dataset:ms_marco",
"dataset:wiki_atomic_edits",
"dataset:snli",
"dataset:multi_nli",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/coco_captions",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/QQP",
"dataset:yahoo_answers_topics",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- flax-sentence-embeddings/stackexchange_xml
- s2orc
- ms_marco
- wiki_atomic_edits
- snli
- multi_nli
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/flickr30k-captions
- embedding-data/coco_captions
- embedding-data/sentence-compression
- embedding-data/QQP
- yahoo_answers_topics
pipeline_tag: sentence-similarity
---
# sentence-transformers/paraphrase-albert-small-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-albert-small-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
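Since the model is intended for tasks like semantic search, here is a small illustrative sketch (the query and corpus sentences are made up) that scores a query against a toy corpus with cosine similarity:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/paraphrase-albert-small-v2')

# encode a query and a toy corpus, then compare them with cosine similarity
query_emb = model.encode("How do I bake bread?", convert_to_tensor=True)
corpus_emb = model.encode(
    ["Recipe for baking a simple loaf of bread", "The weather is nice today"],
    convert_to_tensor=True,
)
print(util.cos_sim(query_emb, corpus_emb))  # the first corpus sentence should score higher
```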
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-albert-small-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
timm/tf_efficientnet_b7.ns_jft_in1k | timm | "2023-04-27T21:25:31Z" | 199,590 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1905.11946",
"arxiv:1911.04252",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-13T00:06:04Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tf_efficientnet_b7.ns_jft_in1k
An EfficientNet image classification model. Trained on ImageNet-1k and unlabeled JFT-300m using Noisy Student semi-supervised learning in Tensorflow by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 66.3
- GMACs: 38.3
- Activations (M): 289.9
- Image size: 600 x 600
- **Papers:**
- EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
- Self-training with Noisy Student improves ImageNet classification: https://arxiv.org/abs/1911.04252
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tf_efficientnet_b7.ns_jft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_b7.ns_jft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 300, 300])
# torch.Size([1, 48, 150, 150])
# torch.Size([1, 80, 75, 75])
# torch.Size([1, 224, 38, 38])
# torch.Size([1, 640, 19, 19])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_b7.ns_jft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2560, 19, 19) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{tan2019efficientnet,
title={Efficientnet: Rethinking model scaling for convolutional neural networks},
author={Tan, Mingxing and Le, Quoc},
booktitle={International conference on machine learning},
pages={6105--6114},
year={2019},
organization={PMLR}
}
```
```bibtex
@article{Xie2019SelfTrainingWN,
title={Self-Training With Noisy Student Improves ImageNet Classification},
author={Qizhe Xie and Eduard H. Hovy and Minh-Thang Luong and Quoc V. Le},
journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2019},
pages={10684-10695}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
bigscience/bloom-560m | bigscience | "2023-09-26T09:16:49Z" | 199,481 | 329 | transformers | [
"transformers",
"pytorch",
"jax",
"onnx",
"safetensors",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zhs",
"zht",
"zu",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-05-19T11:51:24Z" | ---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
# Model Card for Bloom-560m
<!-- Provide a quick summary of what the model is/does. -->
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Recommendations](#recommendations)
5. [Training Data](#training-data)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#techincal-specifications)
9. [Citation](#citation)
10. [Glossary and Calculations](#glossary-and-calculations)
11. [More Information](#more-information)
12. [Model Card Authors](#model-card-authors)
13. [Model Card Contact](#model-card-contact)
## Model Details
### Model Description
*This section provides information for anyone who wants to know about the model.*
- **Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
- **Model Type:** Transformer-based Language Model
- **Version:** 1.0.0
- **Languages:** Multiple; see [training data](#training-data)
- **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
- **Release Date Estimate:** Monday, 11.July.2022
- **Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
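The text-generation use case listed above takes only a few lines with the `transformers` library. The snippet below is a minimal sketch; the prompt and sampling settings are illustrative and not prescribed by this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Illustrative prompt and sampling settings; adjust for your use case.
inputs = tokenizer("The BigScience workshop is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```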
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
## Bias, Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
### Recommendations
*This section provides information on warnings and potential mitigations.*
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.
![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true)
**The following table shows the further distribution of Niger-Congo and Indic languages in the training data.**
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Luganda        | 0.0004     |       | Hindi     | 0.70       |
| Chi Shona | 0.001 |
| Isi Zulu | 0.001 |
| Igbo | 0.001 |
| Xhosa | 0.001 |
| Kinyarwanda | 0.003 |
| Yoruba | 0.006 |
| Swahili | 0.02 |
**The following table shows the distribution of programming languages.**
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb             | Ruby       | 678,413         |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
## Evaluation
*This section describes the evaluation protocols and provides the results.*
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
(More evaluation scores forthcoming at the end of model training.)
## Environmental Impact
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
## Technical Specifications
*This section provides information for people who work on model development.*
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 559,214,592 parameters:
* 256,901,120 embedding parameters
* 24 layers, 16 attention heads
* Hidden layers are 1024-dimensional
* Sequence length of 2048 tokens (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
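As a rough illustration of the ALiBi positional encodings listed above, the following sketch builds the per-head linear attention biases for a power-of-two number of heads (16 for this model). It is a simplified illustration, not the exact Megatron-DeepSpeed implementation:

```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Geometric head slopes from the ALiBi paper (valid for power-of-two head counts).
    start = 2.0 ** (-8.0 / n_heads)
    slopes = torch.tensor([start ** (h + 1) for h in range(n_heads)])
    # Relative position (j - i): negative for past tokens; future positions are masked in causal attention.
    positions = torch.arange(seq_len)
    relative = positions[None, :] - positions[:, None]           # (seq, seq)
    # Bias added to the attention logits before softmax: slope * relative distance.
    return slopes[:, None, None] * relative[None, :, :]          # (heads, seq, seq)
```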
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
 * 8 GPUs per node, using NVLink 4 inter-GPU connects, 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
### **Training**
Training logs: [Tensorboard link](https://huggingface.co/bigscience/tr11e-350M-logs)
- Training throughput: About 150 TFLOPs per GPU
- Number of epochs: 1 (*current target*)
- Dates:
- Started 11th March, 2022 11:42am PST
- Ended 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)
- Server training location: Île-de-France, France
### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
## Citation
**Cite as:** BigScience, _BigScience Large Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
## More Information
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
## Model Card Contact
**Send Questions to:** bigscience-contact@googlegroups.com |
DeepChem/ChemBERTa-10M-MLM | DeepChem | "2022-01-20T18:01:08Z" | 199,293 | 3 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:04Z" | Entry not found |
bigscience/bloom-7b1 | bigscience | "2024-01-02T18:32:24Z" | 199,149 | 187 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zhs",
"zht",
"zu",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"doi:10.57967/hf/2655",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-05-19T11:53:18Z" | ---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/1634806038075-5df7e9e5da6d0311fd3d53f9.png" alt="BigScience Logo" width="800" style="margin-left:auto; margin-right:auto; display:block"/>
Version 1.0 / 26.May.2022
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Data](#training-data)
4. [Risks and Limitations](#risks-and-limitations)
5. [Evaluation](#evaluation)
6. [Recommendations](#recommendations)
7. [Glossary and Calculations](#glossary-and-calculations)
8. [More Information](#more-information)
9. [Model Card Authors](#model-card-authors)
## Model Details
### Basics
*This section provides information for anyone who wants to know about the model.*
<details>
<summary>Click to expand</summary> <br/>
**Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
**Model Type:** Transformer-based Language Model
**Version:** 1.0.0
**Languages:** Multiple; see [training data](#training-data)
**License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
**Release Date Estimate:** Monday, 11.July.2022
**Send Questions to:** bigscience-contact@googlegroups.com
**Cite as:** BigScience, _BigScience Large Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
**Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
</details>
### Technical Specifications
*This section provides information for people who work on model development.*
<details>
<summary>Click to expand</summary><br/>
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 7,069,016,064 parameters:
* 1,027,604,480 embedding parameters
* 30 layers, 32 attention heads
* Hidden layers are 4096-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
 * 8 GPUs per node, using NVLink 4 inter-GPU connects, 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
#### **Training**
Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11c-2B5-logs)
- Number of epochs: 1 (*current target*)
- Dates:
- Started 11th March, 2022 11:42am PST
- Ended 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments)
- Server training location: Île-de-France, France
#### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
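A quick way to inspect the byte-level BPE behaviour described above is sketched below; the example sentence is arbitrary:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")
ids = tokenizer("BigScience est un atelier de recherche ouvert.")["input_ids"]
print(tokenizer.vocab_size)                  # vocabulary size (250,680 per this card)
print(tokenizer.convert_ids_to_tokens(ids))  # byte-level BPE pieces
```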
</details>
### Environmental Impact
<details>
<summary>Click to expand</summary><br/>
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
</details>
<p> </p>
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
<details>
<summary>Click to expand</summary><br/>
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
</details>
<p> </p>
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
<details>
<summary>Click to expand</summary><br/>
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.
![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true)
The following table shows the further distribution of Niger-Congo and Indic languages in the training data.
<details>
<summary>Click to expand</summary><br/>
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Luganda        | 0.0004     |       | Hindi     | 0.70       |
| Chi Shona | 0.001 |
| Isi Zulu | 0.001 |
| Igbo | 0.001 |
| Xhosa | 0.001 |
| Kinyarwanda | 0.003 |
| Yoruba | 0.006 |
| Swahili | 0.02 |
</details>
The following table shows the distribution of programming languages.
<details>
<summary>Click to expand</summary><br/>
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb             | Ruby       | 678,413         |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
</details>
</details>
<p> </p>
## Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
<details>
<summary>Click to expand</summary><br/>
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
</details>
<p> </p>
## Evaluation
*This section describes the evaluation protocols and provides the results.*
<details>
<summary>Click to expand</summary><br/>
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.3
- Validation Loss: 2.9
- Perplexity: 16
</details>
<p> </p>
## Recommendations
*This section provides information on warnings and potential mitigations.*
<details>
<summary>Click to expand</summary><br/>
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
</details>
<p> </p>
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
<details>
<summary>Click to expand</summary><br/>
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
</details>
<p> </p>
## More Information
<details>
<summary>Click to expand</summary><br/>
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
</details>
<p> </p>
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
|
apple/mobilevit-small | apple | "2022-08-29T07:57:51Z" | 199,120 | 39 | transformers | [
"transformers",
"pytorch",
"tf",
"coreml",
"mobilevit",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2110.02178",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-05-30T12:43:23Z" | ---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# MobileViT (small-sized model)
MobileViT model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import MobileViTFeatureExtractor, MobileViTForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/mobilevit-small")
model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-small")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The MobileViT model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes.
## Training procedure
### Preprocessing
Training requires only basic data augmentation, i.e. random resized cropping and horizontal flipping.
To learn multi-scale representations without requiring fine-tuning, a multi-scale sampler was used during training, with image sizes randomly sampled from: (160, 160), (192, 192), (256, 256), (288, 288), (320, 320).
At inference time, images are resized/rescaled to the same resolution (288x288), and center-cropped at 256x256.
Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB.
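The `MobileViTFeatureExtractor` used in the example above applies these steps automatically; purely as an illustration, a standalone sketch of the inference-time pipeline described in this section (sizes taken from the text above) could look like:

```python
import numpy as np
from PIL import Image

def preprocess(image: Image.Image) -> np.ndarray:
    # Resize to 288x288, then center-crop to 256x256 (inference-time sizes from this section).
    image = image.convert("RGB").resize((288, 288), Image.BILINEAR)
    offset = (288 - 256) // 2
    image = image.crop((offset, offset, offset + 256, offset + 256))
    # Scale pixels to [0, 1] and flip channel order from RGB to BGR.
    pixels = np.asarray(image, dtype=np.float32) / 255.0
    pixels = pixels[:, :, ::-1]
    # HWC -> CHW and add a batch dimension.
    return np.ascontiguousarray(pixels.transpose(2, 0, 1))[None, ...]
```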
### Pretraining
The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|------------------|-------------------------|-------------------------|-----------|-------------------------------------------------|
| MobileViT-XXS | 69.0 | 88.9 | 1.3 M | https://huggingface.co/apple/mobilevit-xx-small |
| MobileViT-XS | 74.8 | 92.3 | 2.3 M | https://huggingface.co/apple/mobilevit-x-small |
| **MobileViT-S** | **78.4** | **94.1** | **5.6 M** | https://huggingface.co/apple/mobilevit-small |
### BibTeX entry and citation info
```bibtex
@inproceedings{vision-transformer,
title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author = {Sachin Mehta and Mohammad Rastegari},
year = {2022},
URL = {https://arxiv.org/abs/2110.02178}
}
```
|
microsoft/Phi-3-vision-128k-instruct | microsoft | "2024-06-11T00:37:36Z" | 198,919 | 774 | transformers | [
"transformers",
"safetensors",
"phi3_v",
"text-generation",
"nlp",
"code",
"vision",
"conversational",
"custom_code",
"multilingual",
"license:mit",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-05-19T15:07:39Z" | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-vision-128k-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
- vision
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: <|image_1|>Can you describe what you see in the image?
---
## Model Summary
The Phi-3-Vision-128K-Instruct is a lightweight, state-of-the-art open multimodal model built upon datasets which include - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/try-phi3vision)
+ [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook)
| | Short Context | Long Context |
| ------- | ------------- | ------------ |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)|
## Intended Uses
**Primary use cases**
The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications with visual and text input capabilities which require
1) memory/compute constrained environments;
2) latency bound scenarios;
3) general image understanding;
4) OCR;
5) chart and table understanding.
Our model is designed to accelerate research on efficient language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios.
Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3-Vision-128K-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
numpy==1.24.4
Pillow==10.3.0
Requests==2.31.0
torch==2.3.0
torchvision==0.18.0
transformers==4.40.2
```
Phi-3-Vision-128K-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai).
### Chat Format
Given the nature of the training data, the Phi-3-Vision-128K-Instruct model is best suited for a single image input with prompts using the chat format as follows.
You can provide the prompt with a single image using a generic template as follows:
```markdown
<|user|>\n<|image_1|>\n{prompt}<|end|>\n<|assistant|>\n
```
where the model generates the text after `<|assistant|>`. For a multi-turn conversation, the prompt can be formatted as follows:
```markdown
<|user|>\n<|image_1|>\n{prompt_1}<|end|>\n<|assistant|>\n{response_1}<|end|>\n<|user|>\n{prompt_2}<|end|>\n<|assistant|>\n
```
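As a small illustration, the single-turn template above can be assembled by hand as a plain Python string (the `apply_chat_template` call in the next section produces a string of this form for you):

```python
# Hypothetical user question; the template markers come from the format shown above.
user_prompt = "What is shown in this image?"
prompt = f"<|user|>\n<|image_1|>\n{user_prompt}<|end|>\n<|assistant|>\n"
```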
### Sample inference code
These code snippets show how to get started quickly with running the model on a GPU:
```python
from PIL import Image
import requests
from transformers import AutoModelForCausalLM
from transformers import AutoProcessor
model_id = "microsoft/Phi-3-vision-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda", trust_remote_code=True, torch_dtype="auto", _attn_implementation='flash_attention_2') # use _attn_implementation='eager' to disable flash attention
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
messages = [
{"role": "user", "content": "<|image_1|>\nWhat is shown in this image?"},
{"role": "assistant", "content": "The chart displays the percentage of respondents who agree with various statements about their preparedness for meetings. It shows five categories: 'Having clear and pre-defined goals for meetings', 'Knowing where to find the information I need for a meeting', 'Understanding my exact role and responsibilities when I'm invited', 'Having tools to manage admin tasks like note-taking or summarization', and 'Having more focus time to sufficiently prepare for meetings'. Each category has an associated bar indicating the level of agreement, measured on a scale from 0% to 100%."},
{"role": "user", "content": "Provide insightful questions to spark discussion."}
]
url = "https://assets-c4akfrf5b4d3f4b7.z01.azurefd.net/assets/2024/04/BMDataViz_661fb89f3845e.png"
image = Image.open(requests.get(url, stream=True).raw)
prompt = processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(prompt, [image], return_tensors="pt").to("cuda:0")
generation_args = {
"max_new_tokens": 500,
"temperature": 0.0,
"do_sample": False,
}
generate_ids = model.generate(**inputs, eos_token_id=processor.tokenizer.eos_token_id, **generation_args)
# remove input tokens
generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:]
response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(response)
```
Additional basic examples are provided [here](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct/blob/main/sample_inference.py).
## Responsible AI Considerations
Like other models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: The Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
+ Identification of individuals: models with vision capabilities may have the potential to uniquely identify individuals in images. Safety post-training steers the model to refuse such requests, but developers should consider and implement, as appropriate, additional mitigations or user consent flows as required in their respective jurisdiction (e.g., building measures to blur faces in image inputs before processing).
## Training
### Model
* Architecture: Phi-3-Vision-128K-Instruct has 4.2B parameters and contains an image encoder, connector, projector, and the Phi-3 Mini language model.
* Inputs: Text and Image. It’s best suited for prompts using the chat format.
* Context length: 128K tokens
* GPUs: 512 H100-80G
* Training time: 1.5 days
* Training data: 500B vision and text tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline text dataset with cutoff date Mar 15, 2024. Future versions of the tuned models may be released as we improve models.
* Release Type: Open weight release
* Release dates: The model weight is released on May 21, 2024.
### Datasets
Our training data includes a wide variety of sources, and is a combination of
1) publicly available documents filtered rigorously for quality, selected high-quality educational data and code;
2) selected high-quality image-text interleave;
3) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.), newly created image data, e.g., chart/table/diagram/slides;
4) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
The data collection process involved sourcing information from publicly available documents, with a meticulous approach to filtering out undesirable documents and images. To safeguard privacy, we carefully filtered various image and text data sources to remove or scrub any potentially personal data from the training data.
More details can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
## Benchmarks
To understand the capabilities, we compare Phi-3-Vision-128K-Instruct with a set of models over a variety of zero-shot benchmarks using our internal benchmark platform.
|Benchmark|Phi-3 Vision-128K-In|LlaVA-1.6 Vicuna-7B|QWEN-VL Chat|Llama3-Llava-Next-8B|Claude-3 Haiku|Gemini 1.0 Pro V|GPT-4V-Turbo|
|---------|---------------------|------------------|------------|--------------------|--------------|----------------|------------|
|MMMU|40.4|34.2|39.0|36.4|40.7|42.0|55.5|
|MMBench|80.5|76.3|75.8|79.4|62.4|80.0|86.1|
|ScienceQA|90.8|70.6|67.2|73.7|72.0|79.7|75.7|
|MathVista|44.5|31.5|29.4|34.8|33.2|35.0|47.5|
|InterGPS|38.1|20.5|22.3|24.6|32.1|28.6|41.0|
|AI2D|76.7|63.1|59.8|66.9|60.3|62.8|74.7|
|ChartQA|81.4|55.0|50.9|65.8|59.3|58.0|62.3|
|TextVQA|70.9|64.6|59.4|55.7|62.7|64.7|68.1|
|POPE|85.8|87.2|82.6|87.0|74.4|84.2|83.7|
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-Vision-128K model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
|
timm/nfnet_l0.ra2_in1k | timm | "2024-02-10T23:36:13Z" | 198,856 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2102.06171",
"arxiv:2101.08692",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-03-24T01:15:14Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for nfnet_l0.ra2_in1k
A NFNet-Lite (Lightweight NFNet) image classification model. Trained in `timm` by Ross Wightman.
Normalization Free Networks are (pre-activation) ResNet-like models without any normalization layers. Instead of Batch Normalization or alternatives, they use Scaled Weight Standardization and specifically placed scalar gains in residual path and at non-linearities based on signal propagation analysis.
Lightweight NFNets are `timm` specific variants that reduce the SE and bottleneck ratio from 0.5 -> 0.25 (reducing widths) and use a smaller group size while maintaining the same depth. SiLU activations used instead of GELU.
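As a rough sketch of the Scaled Weight Standardization idea described above (a simplified illustration only; see `timm`'s `ScaledStdConv2d` for the full implementation with activation-specific gamma scaling):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledStdConv2dSketch(nn.Conv2d):
    """Conv2d whose weights are standardized per output filter and rescaled by a learnable gain."""

    def __init__(self, *args, eps: float = 1e-6, **kwargs):
        super().__init__(*args, **kwargs)
        self.gain = nn.Parameter(torch.ones(self.out_channels, 1, 1, 1))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        fan_in = w[0].numel()
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        var = w.var(dim=(1, 2, 3), keepdim=True)
        # Zero-mean, unit-variance weights scaled by 1/sqrt(fan_in), then a per-filter learnable gain.
        w = self.gain * (w - mean) / torch.sqrt(var * fan_in + self.eps)
        return F.conv2d(x, w, self.bias, self.stride, self.padding, self.dilation, self.groups)
```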
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 35.1
- GMACs: 4.4
- Activations (M): 10.5
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- High-Performance Large-Scale Image Recognition Without Normalization: https://arxiv.org/abs/2102.06171
- Characterizing signal propagation to close the performance gap in unnormalized ResNets: https://arxiv.org/abs/2101.08692
- **Original:** https://github.com/huggingface/pytorch-image-models
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('nfnet_l0.ra2_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'nfnet_l0.ra2_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1536, 14, 14])
# torch.Size([1, 2304, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'nfnet_l0.ra2_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2304, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{brock2021high,
author={Andrew Brock and Soham De and Samuel L. Smith and Karen Simonyan},
title={High-Performance Large-Scale Image Recognition Without Normalization},
journal={arXiv preprint arXiv:2102.06171},
year={2021}
}
```
```bibtex
@inproceedings{brock2021characterizing,
author={Andrew Brock and Soham De and Samuel L. Smith},
title={Characterizing signal propagation to close the performance gap in
unnormalized ResNets},
booktitle={9th International Conference on Learning Representations, {ICLR}},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
vikp/surya_rec | vikp | "2024-02-13T19:49:26Z" | 198,392 | 11 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-02-11T21:20:39Z" | ---
license: cc-by-nc-sa-4.0
---
Text recognition (OCR) model for [surya](https://github.com/VikParuchuri/surya). See the repo for details. |
Systran/faster-whisper-large-v2 | Systran | "2023-11-23T11:44:31Z" | 197,333 | 20 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] | automatic-speech-recognition | "2023-11-23T09:50:45Z" | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper large-v2 model for CTranslate2
This repository contains the conversion of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("large-v2")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
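The returned `info` object also carries the detected language. A hedged addition (attribute names assumed from typical faster-whisper usage, not taken from this card):
```python
# after the transcribe() call above
print("Detected language '%s' with probability %f"
      % (info.language, info.language_probability))
```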
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-large-v2 --output_dir faster-whisper-large-v2 \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
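For example, a different compute type can be requested when loading the model with faster-whisper (the values below are illustrative; see the CTranslate2 quantization documentation linked above for the full list):
```python
from faster_whisper import WhisperModel

# FP16 weights on disk, executed in mixed INT8/FP16 precision on GPU
model = WhisperModel("large-v2", device="cuda", compute_type="int8_float16")

# or fully on CPU with INT8
# model = WhisperModel("large-v2", device="cpu", compute_type="int8")
```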
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large-v2).**
|