modelId: string, 4-112 chars
sha: string, 40 chars
lastModified: string, 24 chars
tags: sequence
pipeline_tag: string, 29 classes
private: bool, 1 class
author: string, 2-38 chars
config: null
id: string, 4-112 chars
downloads: float64, 0 to 36.8M
likes: float64, 0 to 712
library_name: string, 17 classes
__index_level_0__: int64, 0 to 38.5k
readme: string, 0-186k chars
google/mobilebert-uncased
1f90a6c24c7879273a291d34a849033eba2dbc0f
2021-04-19T13:32:58.000Z
[ "pytorch", "tf", "rust", "mobilebert", "pretraining", "en", "transformers", "license:apache-2.0" ]
null
false
google
null
google/mobilebert-uncased
115,430
4
transformers
200
--- language: en thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- ## MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. This checkpoint is the original MobileBert Optimized Uncased English: [uncased_L-24_H-128_B-512_A-4_F-4_OPT](https://storage.googleapis.com/cloud-tpu-checkpoints/mobilebert/uncased_L-24_H-128_B-512_A-4_F-4_OPT.tar.gz) checkpoint. ## How to use MobileBERT in `transformers` ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="google/mobilebert-uncased", tokenizer="google/mobilebert-uncased" ) print( fill_mask(f"HuggingFace is creating a {fill_mask.tokenizer.mask_token} that the community uses to solve NLP tasks.") ) ```
hf-internal-testing/tiny-random-electra
e5efa6ccdff6b9cd0c387c90ff4af92686bc0c12
2021-09-17T19:22:20.000Z
[ "pytorch", "tf", "electra", "transformers" ]
null
false
hf-internal-testing
null
hf-internal-testing/tiny-random-electra
114,761
null
transformers
201
Entry not found
hf-internal-testing/tiny-random-albert
213e37e1c19981fbcee4f5f9ebefee8db74c8db1
2021-09-17T19:23:59.000Z
[ "pytorch", "tf", "albert", "transformers" ]
null
false
hf-internal-testing
null
hf-internal-testing/tiny-random-albert
114,473
null
transformers
202
Entry not found
sentence-transformers/all-mpnet-base-v1
5db7848555eaffcf26ac367f5dc9e0711acb2106
2021-08-31T07:35:23.000Z
[ "pytorch", "mpnet", "fill-mask", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "sentence-transformers", "feature-extraction", "sentence-similarity", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/all-mpnet-base-v1
113,021
3
sentence-transformers
203
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en license: apache-2.0 --- # all-mpnet-base-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-mpnet-base-v1') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v1') model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v1') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v1) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developped this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. 
We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as advice from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 128 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to its model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair from the batch, then apply the cross-entropy loss by comparing with the true pairs; a minimal code sketch of this objective follows the data table below. #### Hyperparameters We trained our model on a TPU v3-8 for 920k steps using a batch size of 512 (64 per TPU core), a learning rate warm-up of 500 steps, and a sequence length limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset with a weighted probability, whose configuration is detailed in the `data_config.json` file. 
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence 
Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,124,818,467** |
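As referenced in the Fine-tuning section above, here is a minimal sketch of the in-batch contrastive objective (not part of the original card; the scaling factor and the random embeddings standing in for encoder outputs are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # Cosine similarity between every anchor and every positive in the batch
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)
    scores = anchor_emb @ positive_emb.T * scale
    # The true pair for row i sits on the diagonal, so labels are 0..batch_size-1
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# Toy example with random embeddings standing in for model outputs
loss = in_batch_contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
print(loss)
```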
philschmid/bart-large-cnn-samsum
78e20b3792d507739ebb9e5a417bcc87606d3293
2022-07-04T13:10:54.000Z
[ "pytorch", "bart", "text2text-generation", "en", "dataset:samsum", "transformers", "sagemaker", "summarization", "model-index", "autotrain_compatible" ]
summarization
false
philschmid
null
philschmid/bart-large-cnn-samsum
112,471
16
transformers
204
--- language: en tags: - sagemaker - bart - summarization datasets: - samsum widget: - text: "Jeff: Can I train a \U0001F917 Transformers model on Amazon SageMaker? \n\ Philipp: Sure you can use the new Hugging Face Deep Learning Container. \nJeff:\ \ ok.\nJeff: and how can I get started? \nJeff: where can I find documentation?\ \ \nPhilipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face\n" model-index: - name: bart-large-cnn-samsum results: - task: type: summarization name: Summarization dataset: name: 'SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization' type: samsum metrics: - name: Validation ROGUE-1 type: rogue-1 value: 42.621 - name: Validation ROGUE-2 type: rogue-2 value: 21.9825 - name: Validation ROGUE-L type: rogue-l value: 33.034 - name: Test ROGUE-1 type: rogue-1 value: 41.3174 - name: Test ROGUE-2 type: rogue-2 value: 20.8716 - name: Test ROGUE-L type: rogue-l value: 32.1337 - task: type: summarization name: Summarization dataset: name: samsum type: samsum config: samsum split: test metrics: - name: ROUGE-1 type: rouge value: 41.3282 verified: true - name: ROUGE-2 type: rouge value: 20.8755 verified: true - name: ROUGE-L type: rouge value: 32.1353 verified: true - name: ROUGE-LSUM type: rouge value: 38.401 verified: true - name: loss type: loss value: 1.4297215938568115 verified: true - name: gen_len type: gen_len value: 60.0757 verified: true --- ## `bart-large-cnn-samsum` This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container. For more information look at: - [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html) - [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker) - [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html) - [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html) - [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers) ## Hyperparameters ```json { "dataset_name": "samsum", "do_eval": true, "do_predict": true, "do_train": true, "fp16": true, "learning_rate": 5e-05, "model_name_or_path": "facebook/bart-large-cnn", "num_train_epochs": 3, "output_dir": "/opt/ml/model", "per_device_eval_batch_size": 4, "per_device_train_batch_size": 4, "predict_with_generate": true, "seed": 7 } ``` ## Usage ```python from transformers import pipeline summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum") conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker? Philipp: Sure you can use the new Hugging Face Deep Learning Container. Jeff: ok. Jeff: and how can I get started? Jeff: where can I find documentation? Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face ''' summarizer(conversation) ``` ## Results | key | value | | --- | ----- | | eval_rouge1 | 42.621 | | eval_rouge2 | 21.9825 | | eval_rougeL | 33.034 | | eval_rougeLsum | 39.6783 | | test_rouge1 | 41.3174 | | test_rouge2 | 20.8716 | | test_rougeL | 32.1337 | | test_rougeLsum | 38.4149 |
cross-encoder/stsb-roberta-large
9e35bf01ec28b309411c8903d0d4165567303eb4
2021-08-05T08:42:03.000Z
[ "pytorch", "jax", "roberta", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
cross-encoder
null
cross-encoder/stsb-roberta-large
111,228
3
transformers
205
--- license: apache-2.0 --- # Cross-Encoder for Semantic Textual Similarity This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model predicts a score between 0 and 1 indicating the semantic similarity of two sentences. ## Usage and Performance Pre-trained models can be used like this: ``` from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/stsb-roberta-large') scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')]) ``` The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`. You can also use this model without sentence_transformers, by loading it with the Transformers ``AutoModel`` class.
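A minimal sketch of that Transformers-only usage (the sentence pair is an arbitrary illustration; the sigmoid mirrors the default activation SentenceTransformers applies to single-logit cross-encoders and should be treated as an assumption here):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("cross-encoder/stsb-roberta-large")
model = AutoModelForSequenceClassification.from_pretrained("cross-encoder/stsb-roberta-large")

# Encode a sentence pair; the model scores both sentences jointly
features = tokenizer(["A man is eating pizza"], ["A man eats something"],
                     padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = torch.sigmoid(model(**features).logits)  # similarity scores in [0, 1]
print(scores)
```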
Helsinki-NLP/opus-mt-ru-en
a06887d8d700245d4b10a20b6d89c1ad778f33c2
2022-07-14T08:56:05.000Z
[ "pytorch", "rust", "marian", "text2text-generation", "ru", "en", "transformers", "translation", "license:cc-by-4.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ru-en
110,930
6
transformers
206
--- tags: - translation license: cc-by-4.0 --- ### opus-mt-ru-en ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [How to Get Started With the Model](#how-to-get-started-with-the-model) ## Model Details **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Transformer-align - **Language(s):** - Source Language: Russian - Target Language: English - **License:** CC-BY-4.0 - **Resources for more information:** - [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Uses #### Direct Use This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Further details about the dataset for this model can be found in the OPUS readme: [ru-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ru-en/README.md) ## Training #### Training Data ##### Preprocessing * Pre-processing: Normalization + SentencePiece * Dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT) * Download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/ru-en/opus-2020-02-26.zip) * Test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-en/opus-2020-02-26.test.txt) ## Evaluation #### Results * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-en/opus-2020-02-26.eval.txt) #### Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newstest2012.ru.en | 34.8 | 0.603 | | newstest2013.ru.en | 27.9 | 0.545 | | newstest2014-ruen.ru.en | 31.9 | 0.591 | | newstest2015-enru.ru.en | 30.4 | 0.568 | | newstest2016-enru.ru.en | 30.1 | 0.565 | | newstest2017-enru.ru.en | 33.4 | 0.593 | | newstest2018-enru.ru.en | 29.6 | 0.565 | | newstest2019-ruen.ru.en | 31.4 | 0.576 | | Tatoeba.ru.en | 61.1 | 0.736 | ## Citation Information ```bibtex @InProceedings{TiedemannThottingal:EAMT2020, author = {J{\"o}rg Tiedemann and Santhosh Thottingal}, title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld}, booktitle = {Proceedings of the 22nd Annual Conferenec of the European Association for Machine Translation (EAMT)}, year = {2020}, address = {Lisbon, Portugal} } ``` ## How to Get Started With the Model ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-ru-en") model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-ru-en") ```
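Beyond loading the weights, a minimal translation sketch (the Russian example sentence is an arbitrary illustration, not from the original card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-ru-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-ru-en")

# Translate a Russian sentence into English
inputs = tokenizer("Как дела?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```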
sentence-transformers/paraphrase-distilroberta-base-v1
de91d53e03a544451c0e18312391a3f279f7f4ef
2022-06-15T20:03:06.000Z
[ "pytorch", "tf", "jax", "roberta", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/paraphrase-distilroberta-base-v1
110,780
null
sentence-transformers
207
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/paraphrase-distilroberta-base-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/paraphrase-distilroberta-base-v1') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-distilroberta-base-v1') model = AutoModel.from_pretrained('sentence-transformers/paraphrase-distilroberta-base-v1') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-distilroberta-base-v1) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
EleutherAI/gpt-neo-1.3B
797174552ae47f449ab70b684cabcb6603e5e85e
2021-12-31T13:48:33.000Z
[ "pytorch", "jax", "rust", "gpt_neo", "text-generation", "en", "dataset:the_pile", "transformers", "text generation", "causal-lm", "license:apache-2.0" ]
text-generation
false
EleutherAI
null
EleutherAI/gpt-neo-1.3B
110,054
40
transformers
208
--- language: - en tags: - text generation - pytorch - causal-lm license: apache-2.0 datasets: - the_pile --- # GPT-Neo 1.3B ## Model Description GPT-Neo 1.3B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 1.3B represents the number of parameters of this particular pre-trained model. ## Training data GPT-Neo 1.3B was trained on the Pile, a large-scale curated dataset created by EleutherAI for the purpose of training this model. ## Training procedure This model was trained on the Pile for 380 billion tokens over 362,000 steps. It was trained as an autoregressive language model, using cross-entropy loss. ## Intended Use and Limitations Through this training, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best, however, at what it was pretrained for, which is generating text from a prompt. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B') >>> generator("EleutherAI has", do_sample=True, min_length=50) [{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. 
## Eval results ### Linguistic Reasoning | Model and Size | Pile BPB | Pile PPL | Wikitext PPL | Lambada PPL | Lambada Acc | Winogrande | Hellaswag | | ---------------- | ---------- | ---------- | ------------- | ----------- | ----------- | ---------- | ----------- | | **GPT-Neo 1.3B** | **0.7527** | **6.159** | **13.10** | **7.498** | **57.23%** | **55.01%** | **38.66%** | | GPT-2 1.5B | 1.0468 | ----- | 17.48 | 10.634 | 51.21% | 59.40% | 40.03% | | GPT-Neo 2.7B | 0.7165 | 5.646 | 11.39 | 5.626 | 62.22% | 56.50% | 42.73% | | GPT-3 Ada | 0.9631 | ----- | ----- | 9.954 | 51.60% | 52.90% | 35.93% | ### Physical and Scientific Reasoning | Model and Size | MathQA | PubMedQA | Piqa | | ---------------- | ---------- | ---------- | ----------- | | **GPT-Neo 1.3B** | **24.05%** | **54.40%** | **71.11%** | | GPT-2 1.5B | 23.64% | 58.33% | 70.78% | | GPT-Neo 2.7B | 24.72% | 57.54% | 72.14% | | GPT-3 Ada | 24.29% | 52.80% | 68.88% | ### Down-Stream Applications TBD ### BibTeX entry and citation info To cite this model, please use ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } @article{gao2020pile, title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } ```
hfl/chinese-electra-180g-small-discriminator
19998e79c480fa7f3892b3c035e4362fe497efcf
2021-03-03T01:04:26.000Z
[ "pytorch", "tf", "electra", "pretraining", "zh", "arxiv:2004.13922", "transformers", "license:apache-2.0" ]
null
false
hfl
null
hfl/chinese-electra-180g-small-discriminator
109,135
7
transformers
209
--- language: - zh license: "apache-2.0" --- # This model is trained on 180G of data; we recommend using it rather than the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
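The card does not include a usage snippet; a minimal sketch for loading the discriminator with 🤗 Transformers (the sample sentence and the per-token "replaced" interpretation follow the generic ELECTRA pre-training head, not anything stated in the card):

```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-electra-180g-small-discriminator")
model = ElectraForPreTraining.from_pretrained("hfl/chinese-electra-180g-small-discriminator")

inputs = tokenizer("今天天气很好", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one score per token: higher = more likely "replaced"
print(torch.sigmoid(logits))
```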
seyonec/ChemBERTa-zinc-base-v1
761d6a18cf99db371e0b43baf3e2d21b3e865a20
2021-05-20T20:55:33.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "transformers", "chemistry", "autotrain_compatible" ]
fill-mask
false
seyonec
null
seyonec/ChemBERTa-zinc-base-v1
109,134
1
transformers
210
--- tags: - chemistry --- # ChemBERTa: Training a BERT-like transformer model for masked language modelling of chemical SMILES strings. Deep learning for chemistry and materials science remains a novel field with lots of potential. However, the transfer-learning-based methods that are popular in areas such as NLP and computer vision have not yet been effectively developed for computational chemistry + machine learning. Using HuggingFace's suite of models and the ByteLevel tokenizer, we are able to train on a large corpus of 100k SMILES strings from a commonly known benchmark dataset, ZINC. Training RoBERTa over 5 epochs, the model achieves a decent loss of 0.398, which would likely continue to decline if trained for a larger number of epochs. The model can predict tokens within a SMILES sequence/molecule, allowing variants of a molecule within discoverable chemical space to be predicted. By applying the representations of functional groups and atoms learned by the model, we can try to tackle problems of toxicity, solubility, drug-likeness, and synthesis accessibility on smaller datasets, using the learned representations as features for graph convolution and attention models on the graph structure of molecules, as well as for fine-tuning of BERT. Finally, we propose the use of attention visualization as a helpful tool for chemistry practitioners and students to quickly identify important substructures in various chemical properties. Additionally, visualization of the attention mechanism has been shown in previous research to be incredibly valuable for chemical reaction classification. Open-sourcing large-scale transformer models such as RoBERTa with HuggingFace may allow for the acceleration of these individual research directions. A link to a repository which includes the training, uploading and evaluation notebooks (with sample predictions on compounds such as Remdesivir) can be found [here](https://github.com/seyonechithrananda/bert-loves-chemistry). All of the notebooks can be copied into a new Colab runtime for easy execution. Thanks for checking this out! - Seyone
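Since the card describes predicting masked tokens in SMILES strings but shows no code, here is a minimal fill-mask sketch (the masked SMILES string is an arbitrary example, not from the original card):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="seyonec/ChemBERTa-zinc-base-v1")

# Mask one token inside a SMILES string (here: a fragment resembling aspirin)
smiles = f"CC(=O)OC1=CC=CC=C1C(=O){fill_mask.tokenizer.mask_token}"
for prediction in fill_mask(smiles):
    print(prediction["token_str"], prediction["score"])
```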
vinai/bertweet-base
f9365bd897a63399564af0859aa981deb6deb0f3
2022-06-08T04:43:30.000Z
[ "pytorch", "tf", "jax", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
vinai
null
vinai/bertweet-base
108,322
11
transformers
211
# <a name="introduction"></a> BERTweet: A pre-trained language model for English Tweets BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained based on the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure. The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens ~ 80GB), containing 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the **COVID-19** pandemic. The general architecture and experimental results of BERTweet can be found in our [paper](https://aclanthology.org/2020.emnlp-demos.2/): @inproceedings{bertweet, title = {{BERTweet: A pre-trained language model for English Tweets}}, author = {Dat Quoc Nguyen and Thanh Vu and Anh Tuan Nguyen}, booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations}, pages = {9--14}, year = {2020} } **Please CITE** our paper when BERTweet is used to help produce published results or is incorporated into other software. For further information or requests, please go to [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet)! ### Main results <p float="left"> <img width="275" alt="postagging" src="https://user-images.githubusercontent.com/2412555/135724590-01d8d435-262d-44fe-a383-cd39324fe190.png" /> <img width="275" alt="ner" src="https://user-images.githubusercontent.com/2412555/135724598-1e3605e7-d8ce-4c5e-be4a-62ae8501fae7.png" /> </p> <p float="left"> <img width="275" alt="sentiment" src="https://user-images.githubusercontent.com/2412555/135724597-f1981f1e-fe73-4c03-b1ff-0cae0cc5f948.png" /> <img width="275" alt="irony" src="https://user-images.githubusercontent.com/2412555/135724595-15f4f2c8-bbb6-4ee6-82a0-034769dec183.png" /> </p>
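The card itself shows no loading code; a minimal feature-extraction sketch with 🤗 Transformers (the example tweet is an arbitrary illustration, and any tweet normalization is left to the tokenizer configuration on the Hub):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
model = AutoModel.from_pretrained("vinai/bertweet-base")

tweet = "All the best for the new year! 🎉"
inputs = tokenizer(tweet, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # contextual embeddings, one per token
print(features.shape)
```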
Sahajtomar/German_Zeroshot
d5b0a26665b8538bcb3faa1e63a634cca4c8ee1b
2021-05-18T22:22:18.000Z
[ "pytorch", "jax", "bert", "text-classification", "multilingual", "dataset:xnli", "transformers", "nli", "xnli", "de", "zero-shot-classification" ]
zero-shot-classification
false
Sahajtomar
null
Sahajtomar/German_Zeroshot
108,175
9
transformers
212
--- language: multilingual tags: - text-classification - pytorch - nli - xnli - de datasets: - xnli pipeline_tag: zero-shot-classification widget: - text: "Letzte Woche gab es einen Selbstmord in einer nahe gelegenen Kolonie" candidate_labels: "Verbrechen,Tragödie,Stehlen" hypothesis_template: "In diesem geht es um {}." --- # German Zeroshot ## Model Description This model uses [GBERT Large](https://huggingface.co/deepset/gbert-large) as its base model, fine-tuned on the German XNLI dataset. The default hypothesis template is in English: `This text is {}`. When using this model, change it to "In diesem geht es um {}." or something similar. Inference through the Hugging Face API may give poor results, as it uses the English template by default. Since the model is monolingual and not multilingual, the hypothesis template needs to be changed accordingly. ## XNLI DEV (German) Accuracy: 85.5 ## XNLI TEST (German) Accuracy: 83.6 #### Zero-shot classification pipeline ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="Sahajtomar/German_Zeroshot") sequence = "Letzte Woche gab es einen Selbstmord in einer nahe gelegenen Kolonie" candidate_labels = ["Verbrechen", "Tragödie", "Stehlen"] # Since this is a monolingual model, it is sensitive to the hypothesis template; feel free to experiment hypothesis_template = "In diesem geht es um {}." classifier(sequence, candidate_labels, hypothesis_template=hypothesis_template) """{'labels': ['Tragödie', 'Verbrechen', 'Stehlen'], 'scores': [0.8328856854438782, 0.10494536352157593, 0.06316883927583696], 'sequence': 'Letzte Woche gab es einen Selbstmord in einer nahe gelegenen Kolonie'}""" ```
bert-base-german-dbmdz-uncased
78540bb3887dee683eb7d1dc01abe831fe2d1b6d
2022-07-18T20:04:08.000Z
[ "pytorch", "jax", "bert", "fill-mask", "de", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
null
null
bert-base-german-dbmdz-uncased
106,927
2
transformers
213
--- language: de license: mit --- This model is the same as [dbmdz/bert-base-german-uncased](https://huggingface.co/dbmdz/bert-base-german-uncased). See the [dbmdz/bert-base-german-uncased model card](https://huggingface.co/dbmdz/bert-base-german-uncased) for details on the model.
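A minimal fill-mask sketch for this checkpoint (the German example sentence is an arbitrary illustration, not from the original card):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-german-dbmdz-uncased")

sentence = f"Heute ist ein schöner {fill_mask.tokenizer.mask_token}."
for prediction in fill_mask(sentence):
    print(prediction["token_str"], prediction["score"])
```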
smanjil/German-MedBERT
b4e8a3e260ca938390616816402ab23d98775b07
2022-06-13T16:52:46.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "de", "transformers", "exbert", "German", "autotrain_compatible" ]
fill-mask
false
smanjil
null
smanjil/German-MedBERT
106,645
3
transformers
214
--- language: de tags: - exbert - German --- <a href="https://huggingface.co/exbert/?model=smanjil/German-MedBERT"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a> # German Medical BERT This is a model fine-tuned on the medical domain for the German language, based on German BERT. It has only been trained to improve the on-target task (masked language modelling). It can later be fine-tuned on a downstream task of your choice; I used it for the NTS-ICD-10 text classification task. ## Overview **Language model:** bert-base-german-cased **Language:** German **Fine-tuning:** Medical articles (diseases, symptoms, therapies, etc.) **Eval data:** NTS-ICD-10 dataset (classification) **Infrastructure:** Google Colab ## Details - Fine-tuned using PyTorch with the Huggingface library on a Colab GPU. - Standard parameter settings for fine-tuning, as mentioned in the original BERT paper. - Classification, however, required training for up to 25 epochs. ## Performance (Micro precision, recall, and F1 score for multilabel code classification) |Models|P|R|F1| |:------|:------|:------|:------| |German BERT|86.04|75.82|80.60| |German MedBERT-256 (fine-tuned)|87.41|77.97|82.42| |German MedBERT-512 (fine-tuned)|87.75|78.26|82.73| ## Author Manjil Shrestha: `shresthamanjil21 [at] gmail.com` ## Related Paper: [Report](https://opus4.kobv.de/opus4-rhein-waal/frontdoor/index/index/searchtype/collection/id/16225/start/0/rows/10/doctypefq/masterthesis/docId/740) Get in touch: [LinkedIn](https://www.linkedin.com/in/manjil-shrestha-038527b4/)
facebook/contriever
2bd46a25019aeea091fd42d1f0fd4801675cf699
2022-01-19T17:23:28.000Z
[ "pytorch", "bert", "arxiv:2112.09118", "transformers" ]
null
false
facebook
null
facebook/contriever
104,025
1
transformers
215
This model has been trained without supervision following the approach described in [Towards Unsupervised Dense Information Retrieval with Contrastive Learning](https://arxiv.org/abs/2112.09118). The associated GitHub repository is available here: https://github.com/facebookresearch/contriever. ## Usage (HuggingFace Transformers) Using the model directly with HuggingFace Transformers requires adding a mean pooling operation to obtain a sentence embedding. ```python import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('facebook/contriever') model = AutoModel.from_pretrained('facebook/contriever') sentences = [ "Where was Marie Curie born?", "Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.", "Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace." ] # Apply tokenizer inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings outputs = model(**inputs) # Mean pooling def mean_pooling(token_embeddings, mask): token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.) sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None] return sentence_embeddings embeddings = mean_pooling(outputs[0], inputs['attention_mask']) ```
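As a follow-up (not part of the original card), the pooled embeddings can be compared with a simple inner product to rank the passages against the query; treat this scoring choice as an assumption of the sketch. Continuing from the snippet above:

```python
# Continuing from the snippet above: rank passages against the query (first sentence)
query_embedding, passage_embeddings = embeddings[0], embeddings[1:]
scores = passage_embeddings @ query_embedding  # higher score = more relevant passage
print(scores)
```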
T-Systems-onsite/cross-en-de-roberta-sentence-transformer
17bdd111a862ec99279be149fc9efa4f9122bcc1
2022-06-16T18:13:54.000Z
[ "pytorch", "tf", "xlm-roberta", "feature-extraction", "de", "en", "dataset:STSbenchmark", "arxiv:1908.10084", "transformers", "sentence_embedding", "search", "roberta", "xlm-r-distilroberta-base-paraphrase-v1", "paraphrase", "license:mit" ]
feature-extraction
false
T-Systems-onsite
null
T-Systems-onsite/cross-en-de-roberta-sentence-transformer
103,768
4
transformers
216
--- language: - de - en license: mit tags: - sentence_embedding - search - pytorch - xlm-roberta - roberta - xlm-r-distilroberta-base-paraphrase-v1 - paraphrase datasets: - STSbenchmark metrics: - Spearman’s rank correlation - cosine similarity --- # Cross English & German RoBERTa for Sentence Embeddings This model is intended to [compute sentence (text) embeddings](https://www.sbert.net/examples/applications/computing-embeddings/README.html) for English and German text. These embeddings can then be compared with [cosine-similarity](https://en.wikipedia.org/wiki/Cosine_similarity) to find sentences with a similar semantic meaning. For example this can be useful for [semantic textual similarity](https://www.sbert.net/docs/usage/semantic_textual_similarity.html), [semantic search](https://www.sbert.net/docs/usage/semantic_search.html), or [paraphrase mining](https://www.sbert.net/docs/usage/paraphrase_mining.html). To do this you have to use the [Sentence Transformers Python framework](https://github.com/UKPLab/sentence-transformers). The speciality of this model is that it also works cross-lingually. Regardless of the language, the sentences are translated into very similar vectors according to their semantics. This means that you can, for example, enter a search in German and find results according to the semantics in German and also in English. Using a xlm model and _multilingual finetuning with language-crossing_ we reach performance that even exceeds the best current dedicated English large model (see Evaluation section below). > Sentence-BERT (SBERT) is a modification of the pretrained BERT network that use siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT. Source: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084) This model is fine-tuned from [Philip May](https://may.la/) and open-sourced by [T-Systems-onsite](https://www.t-systems-onsite.de/). Special thanks to [Nils Reimers](https://www.nils-reimers.de/) for your awesome open-source work, the Sentence Transformers, the models and your help on GitHub. ## How to use To use this model install the `sentence-transformers` package (see here: <https://github.com/UKPLab/sentence-transformers>). ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer') ``` For details of usage and examples see here: - [Computing Sentence Embeddings](https://www.sbert.net/docs/usage/computing_sentence_embeddings.html) - [Semantic Textual Similarity](https://www.sbert.net/docs/usage/semantic_textual_similarity.html) - [Paraphrase Mining](https://www.sbert.net/docs/usage/paraphrase_mining.html) - [Semantic Search](https://www.sbert.net/docs/usage/semantic_search.html) - [Cross-Encoders](https://www.sbert.net/docs/usage/cross-encoder.html) - [Examples on GitHub](https://github.com/UKPLab/sentence-transformers/tree/master/examples) ## Training The base model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). This model has been further trained by [Nils Reimers](https://www.nils-reimers.de/) on a large scale paraphrase dataset for 50+ languages. 
[Nils Reimers](https://www.nils-reimers.de/) about this [on GitHub](https://github.com/UKPLab/sentence-transformers/issues/509#issuecomment-712243280): >A paper is upcoming for the paraphrase models. > >These models were trained on various datasets with Millions of examples for paraphrases, mainly derived from Wikipedia edit logs, paraphrases mined from Wikipedia and SimpleWiki, paraphrases from news reports, AllNLI-entailment pairs with in-batch-negative loss etc. > >In internal tests, they perform much better than the NLI+STSb models as they have see more and broader type of training data. NLI+STSb has the issue that they are rather narrow in their domain and do not contain any domain specific words / sentences (like from chemistry, computer science, math etc.). The paraphrase models has seen plenty of sentences from various domains. > >More details with the setup, all the datasets, and a wider evaluation will follow soon. The resulting model called `xlm-r-distilroberta-base-paraphrase-v1` has been released here: <https://github.com/UKPLab/sentence-transformers/releases/tag/v0.3.8> Building on this cross language model we fine-tuned it for English and German language on the [STSbenchmark](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) dataset. For German language we used the dataset of our [German STSbenchmark dataset](https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark) which has been translated with [deepl.com](https://www.deepl.com/translator). Additionally to the German and English training samples we generated samples of English and German crossed. We call this _multilingual finetuning with language-crossing_. It doubled the traing-datasize and tests show that it further improves performance. We did an automatic hyperparameter search for 33 trials with [Optuna](https://github.com/optuna/optuna). Using 10-fold crossvalidation on the deepl.com test and dev dataset we found the following best hyperparameters: - batch_size = 8 - num_epochs = 2 - lr = 1.026343323298136e-05, - eps = 4.462251033010287e-06 - weight_decay = 0.04794438776350409 - warmup_steps_proportion = 0.1609010732760181 The final model was trained with these hyperparameters on the combination of the train and dev datasets from English, German and the crossings of them. The testset was left for testing. # Evaluation The evaluation has been done on English, German and both languages crossed with the STSbenchmark test data. The evaluation-code is available on [Colab](https://colab.research.google.com/drive/1gtGnKq_dYU_sDYqMohTYVMVpxMJjyH0M?usp=sharing). As the metric for evaluation we use the Spearman’s rank correlation between the cosine-similarity of the sentence embeddings and STSbenchmark labels. 
| Model Name | Spearman<br/>German | Spearman<br/>English | Spearman<br/>EN-DE & DE-EN<br/>(cross) | |---------------------------------------------------------------|-------------------|--------------------|------------------| | xlm-r-distilroberta-base-paraphrase-v1 | 0.8079 | 0.8350 | 0.7983 | | [xlm-r-100langs-bert-base-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens) | 0.7877 | 0.8465 | 0.7908 | | xlm-r-bert-base-nli-stsb-mean-tokens | 0.7877 | 0.8465 | 0.7908 | | [roberta-large-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/roberta-large-nli-stsb-mean-tokens) | 0.6371 | 0.8639 | 0.4109 | | [T-Systems-onsite/<br/>german-roberta-sentence-transformer-v2](https://huggingface.co/T-Systems-onsite/german-roberta-sentence-transformer-v2) | 0.8529 | 0.8634 | 0.8415 | | **T-Systems-onsite/<br/>cross-en-de-roberta-sentence-transformer** | **0.8550** | **0.8660** | **0.8525** | ## License Copyright (c) 2020 Philip May, T-Systems on site services GmbH Licensed under the MIT License (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License by reviewing the file [LICENSE](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer/blob/main/LICENSE) in the repository.
typeform/distilbert-base-uncased-mnli
996dacf8ea284d96ea21f88a345fd7d597de1f1f
2022-06-24T15:43:48.000Z
[ "pytorch", "tf", "distilbert", "text-classification", "en", "dataset:multi_nli", "arxiv:1910.09700", "arxiv:2105.09680", "transformers", "zero-shot-classification" ]
zero-shot-classification
false
typeform
null
typeform/distilbert-base-uncased-mnli
102,610
11
transformers
217
--- language: en pipeline_tag: zero-shot-classification tags: - distilbert datasets: - multi_nli metrics: - accuracy --- # DistilBERT base model (uncased) ## Table of Contents - [Model Details](#model-details) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) ## Model Details **Model Description:** This is the [uncased DistilBERT model](https://huggingface.co/distilbert-base-uncased) fine-tuned on [Multi-Genre Natural Language Inference](https://huggingface.co/datasets/multi_nli) (MNLI) dataset for the zero-shot classification task. - **Developed by:** The [Typeform](https://www.typeform.com/) team. - **Model Type:** Zero-Shot Classification - **Language(s):** English - **License:** Unknown - **Parent Model:** See the [distilbert base uncased model](https://huggingface.co/distilbert-base-uncased) for more information about the Distilled-BERT base model. ## How to Get Started with the Model ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("typeform/distilbert-base-uncased-mnli") model = AutoModelForSequenceClassification.from_pretrained("typeform/distilbert-base-uncased-mnli") ``` ## Uses This model can be used for text classification tasks. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## Training #### Training Data This model of DistilBERT-uncased is pretrained on the Multi-Genre Natural Language Inference [(MultiNLI)](https://huggingface.co/datasets/multi_nli) corpus. It is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalization evaluation. This model is also **not** case-sensitive, i.e., it does not make a difference between "english" and "English". #### Training Procedure Training is done on a [p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) AWS EC2 with the following hyperparameters: ``` $ run_glue.py \ --model_name_or_path distilbert-base-uncased \ --task_name mnli \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 16 \ --learning_rate 2e-5 \ --num_train_epochs 5 \ --output_dir /tmp/distilbert-base-uncased_mnli/ ``` ## Evaluation #### Evaluation Results When fine-tuned on downstream tasks, this model achieves the following results: - **Epoch = ** 5.0 - **Evaluation Accuracy =** 0.8206875508543532 - **Evaluation Loss =** 0.8706700205802917 - ** Evaluation Runtime = ** 17.8278 - ** Evaluation Samples per second = ** 551.498 MNLI and MNLI-mm results: | Task | MNLI | MNLI-mm | |:----:|:----:|:----:| | | 82.0 | 82.0 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf). 
**Hardware Type:** 1 NVIDIA Tesla V100 GPUs **Hours used:** Unknown **Cloud Provider:** AWS EC2 P3 **Compute Region:** Unknown **Carbon Emitted:** (Power consumption x Time x Carbon produced based on location of power grid): Unknown
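Since the card is tagged for zero-shot classification but only shows how to load the tokenizer and model, here is a minimal pipeline sketch (the example sequence and candidate labels are arbitrary illustrations):

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="typeform/distilbert-base-uncased-mnli")

result = classifier(
    "The new phone has an amazing camera and battery life.",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```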
cardiffnlp/twitter-roberta-base-offensive
afebc0c0c4c58177a8e6ab683c25beffeb351135
2021-05-20T15:05:00.000Z
[ "pytorch", "tf", "jax", "roberta", "text-classification", "arxiv:2010.12421", "transformers" ]
text-classification
false
cardiffnlp
null
cardiffnlp/twitter-roberta-base-offensive
101,518
4
transformers
218
# Twitter-roBERTa-base for Offensive Language Identification This is a roBERTa-base model trained on ~58M tweets and finetuned for offensive language identification with the TweetEval benchmark. - Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf). - Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval). ## Example of classification ```python from transformers import AutoModelForSequenceClassification from transformers import TFAutoModelForSequenceClassification from transformers import AutoTokenizer import numpy as np from scipy.special import softmax import csv import urllib.request # Preprocess text (username and link placeholders) def preprocess(text): new_text = [] for t in text.split(" "): t = '@user' if t.startswith('@') and len(t) > 1 else t t = 'http' if t.startswith('http') else t new_text.append(t) return " ".join(new_text) # Tasks: # emoji, emotion, hate, irony, offensive, sentiment # stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary task='offensive' MODEL = f"cardiffnlp/twitter-roberta-base-{task}" tokenizer = AutoTokenizer.from_pretrained(MODEL) # download label mapping labels=[] mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt" with urllib.request.urlopen(mapping_link) as f: html = f.read().decode('utf-8').split("\n") csvreader = csv.reader(html, delimiter='\t') labels = [row[1] for row in csvreader if len(row) > 1] # PT model = AutoModelForSequenceClassification.from_pretrained(MODEL) model.save_pretrained(MODEL) text = "Good night 😊" text = preprocess(text) encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) scores = output[0][0].detach().numpy() scores = softmax(scores) # # TF # model = TFAutoModelForSequenceClassification.from_pretrained(MODEL) # model.save_pretrained(MODEL) # text = "Good night 😊" # encoded_input = tokenizer(text, return_tensors='tf') # output = model(encoded_input) # scores = output[0][0].numpy() # scores = softmax(scores) ranking = np.argsort(scores) ranking = ranking[::-1] for i in range(scores.shape[0]): l = labels[ranking[i]] s = scores[ranking[i]] print(f"{i+1}) {l} {np.round(float(s), 4)}") ``` Output: ``` 1) not-offensive 0.9073 2) offensive 0.0927 ```
openai-gpt
b3ab1942f7090e287d001cec22331dfc2764acf0
2022-07-22T07:57:33.000Z
[ "pytorch", "tf", "rust", "openai-gpt", "text-generation", "en", "arxiv:1705.11168", "arxiv:1803.02324", "arxiv:1910.09700", "transformers", "license:mit" ]
text-generation
false
null
null
openai-gpt
100,975
7
transformers
219
--- language: en license: mit --- # OpenAI GPT ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) ## Model Details **Model Description:** `openai-gpt` is a transformer-based language model created and released by OpenAI. The model is a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long range dependencies. - **Developed by:** Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever. See [associated research paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) and [GitHub repo](https://github.com/openai/finetune-transformer-lm) for model developers and contributors. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [MIT License](https://github.com/openai/finetune-transformer-lm/blob/master/LICENSE) - **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Medium](https://huggingface.co/gpt2-medium), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl) - **Resources for more information:** - [Research Paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) - [OpenAI Blog Post](https://openai.com/blog/language-unsupervised/) - [GitHub Repo](https://github.com/openai/finetune-transformer-lm) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt ## How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='openai-gpt') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model,'he said, when i was finished.'ah well,'said the man,'that's"}, {'generated_text': 'Hello, I\'m a language model, " she said. \n she reached the bottom of the shaft and leaned a little further out. it was'}, {'generated_text': 'Hello, I\'m a language model, " she laughed. " we call that a\'white girl.\'or as we are called by the'}, {'generated_text': 'Hello, I\'m a language model, " said mr pin. " an\'the ones with the funny hats don\'t. 
" the rest of'}, {'generated_text': 'Hello, I\'m a language model, was\'ere \'bout to do some more dancin \', " he said, then his voice lowered to'}] ``` Here is how to use this model in PyTorch: ```python from transformers import OpenAIGPTTokenizer, OpenAIGPTModel import torch tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt") model = OpenAIGPTModel.from_pretrained("openai-gpt") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` and in TensorFlow: ```python from transformers import OpenAIGPTTokenizer, TFOpenAIGPTModel tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt") model = TFOpenAIGPTModel.from_pretrained("openai-gpt") inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") outputs = model(inputs) last_hidden_states = outputs.last_hidden_state ``` ## Uses #### Direct Use This model can be used for language modeling tasks. #### Downstream Use Potential downstream uses of this model include tasks that leverage language models. In the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf), the model developers discuss evaluations of the model for tasks including natural language inference (NLI), question answering, semantic similarity, and text classification. #### Misuse and Out-of-scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. ## Risks, Limitations and Biases #### Biases **CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by this model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='openai-gpt') >>> set_seed(42) >>> generator("The man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The man worked as a teacher for the college he'}, {'generated_text': 'The man worked as a janitor at the club.'}, {'generated_text': 'The man worked as a bodyguard in america. the'}, {'generated_text': 'The man worked as a clerk for one of the'}, {'generated_text': 'The man worked as a nurse, but there was'}] >>> set_seed(42) >>> generator("The woman worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The woman worked as a medical intern but is a'}, {'generated_text': 'The woman worked as a midwife, i know that'}, {'generated_text': 'The woman worked as a prostitute in a sex club'}, {'generated_text': 'The woman worked as a secretary for one of the'}, {'generated_text': 'The woman worked as a nurse, but she had'}] ``` This bias may also affect fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
#### Risks and Limitations The model developers also wrote in a [blog post](https://openai.com/blog/language-unsupervised/) about risks and limitations of the model, including: > - **Compute Requirements:** Many previous approaches to NLP tasks train relatively small models on a single GPU from scratch. Our approach requires an expensive pre-training step - 1 month on 8 GPUs. Luckily, this only has to be done once and we’re releasing our model so others can avoid it. It is also a large model (in comparison to prior work) and consequently uses more compute and memory — we used a 37-layer (12 block) Transformer architecture, and we train on sequences of up to 512 tokens. Most experiments were conducted on 4 and 8 GPU systems. The model does fine-tune to new tasks very quickly which helps mitigate the additional resource requirements. > - **The limits and bias of learning about the world through text:** Books and text readily available on the internet do not contain complete or even accurate information about the world. Recent work ([Lucy and Gauthier, 2017](https://arxiv.org/abs/1705.11168)) has shown that certain kinds of information are difficult to learn via just text and other work ([Gururangan et al., 2018](https://arxiv.org/abs/1803.02324)) has shown that models learn and exploit biases in data distributions. > - **Still brittle generalization:** Although our approach improves performance across a broad range of tasks, current deep learning NLP models still exhibit surprising and counterintuitive behavior - especially when evaluated in a systematic, adversarial, or out-of-distribution way. Our approach is not immune to these issues, though we have observed some indications of progress. Our approach shows improved lexical robustness over previous purely neural approaches to textual entailment. On the dataset introduced in Glockner et al. (2018) our model achieves 83.75%, performing similarly to KIM, which incorporates external knowledge via WordNet. ## Training #### Training Data The model developers [write](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf): > We use the BooksCorpus dataset ([Zhu et al., 2015](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Zhu_Aligning_Books_and_ICCV_2015_paper.pdf)) for training the language model. It contains over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance. Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information. #### Training Procedure The model developers [write](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf): > Our model largely follows the original transformer work [62]. We trained a 12-layer decoder-only transformer with masked self-attention heads (768 dimensional states and 12 attention heads). For the position-wise feed-forward networks, we used 3072 dimensional inner states. We used the Adam optimization scheme [27] with a max learning rate of 2.5e-4. The learning rate was increased linearly from zero over the first 2000 updates and annealed to 0 using a cosine schedule. We train for 100 epochs on minibatches of 64 randomly sampled, contiguous sequences of 512 tokens. Since layernorm [2] is used extensively throughout the model, a simple weight initialization of N (0, 0.02) was sufficient. 
We used a bytepair encoding (BPE) vocabulary with 40,000 merges [53] and residual, embedding, and attention dropouts with a rate of 0.1 for regularization. We also employed a modified version of L2 regularization proposed in [37], with w = 0.01 on all non bias or gain weights. For the activation function, we used the Gaussian Error Linear Unit (GELU) [18]. We used learned position embeddings instead of the sinusoidal version proposed in the original work. We use the ftfy library2 to clean the raw text in BooksCorpus, standardize some punctuation and whitespace, and use the spaCy tokenizer. See the paper for further details and links to citations. ## Evaluation The following evaluation information is extracted from the [associated blog post](https://openai.com/blog/language-unsupervised/). See the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) for further details. #### Testing Data, Factors and Metrics The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics: - **Task:** Textual Entailment - **Datasets:** [SNLI](https://huggingface.co/datasets/snli), [MNLI Matched](https://huggingface.co/datasets/glue), [MNLI Mismatched](https://huggingface.co/datasets/glue), [SciTail](https://huggingface.co/datasets/scitail), [QNLI](https://huggingface.co/datasets/glue), [RTE](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy - **Task:** Semantic Similarity - **Datasets:** [STS-B](https://huggingface.co/datasets/glue), [QQP](https://huggingface.co/datasets/glue), [MRPC](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy - **Task:** Reading Comprehension - **Datasets:** [RACE](https://huggingface.co/datasets/race) - **Metrics:** Accuracy - **Task:** Commonsense Reasoning - **Datasets:** [ROCStories](https://huggingface.co/datasets/story_cloze), [COPA](https://huggingface.co/datasets/xcopa) - **Metrics:** Accuracy - **Task:** Sentiment Analysis - **Datasets:** [SST-2](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy - **Task:** Linguistic Acceptability - **Datasets:** [CoLA](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy - **Task:** Multi Task Benchmark - **Datasets:** [GLUE](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy #### Results The model achieves the following results without any fine-tuning (zero-shot): | Task | TE | TE | TE |TE | TE | TE | SS | SS | SS | RC | CR | CR | SA | LA | MTB | |:--------:|:--:|:----------:|:-------------:|:-----:|:----:|:---:|:---:|:---:|:--:|:----:|:--------:|:----:|:----:|:----:|:----:| | Dataset |SNLI|MNLI Matched|MNLI Mismatched|SciTail| QNLI | RTE |STS-B| QQP |MPRC|RACE |ROCStories|COPA | SST-2| CoLA | GLUE | | |89.9| 82.1 | 81.4 |88.3 | 88.1 | 56.0|82.0 | 70.3|82.3|59.0 | 86.5 | 78.6 | 91.3 | 45.4 | 72.8 | ## Environmental Impact The model developers [report that](https://openai.com/blog/language-unsupervised/): > The total compute used to train this model was 0.96 petaflop days (pfs-days). > 8 P600 GPU's * 30 days * 12 TFLOPS/GPU * 0.33 utilization = .96 pfs-days Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** 8 P600 GPUs - **Hours used:** 720 hours (30 days) - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @article{radford2018improving, title={Improving language understanding by generative pre-training}, author={Radford, Alec and Narasimhan, Karthik and Salimans, Tim and Sutskever, Ilya and others}, year={2018}, publisher={OpenAI} } ``` APA: *Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.* ## Model Card Authors This model card was written by the Hugging Face team.
DeepPavlov/distilrubert-base-cased-conversational
9b4f8c20bdc51934ef2ef586ef9afee85549cccb
2022-05-06T11:58:43.000Z
[ "pytorch", "distilbert", "ru", "arxiv:2205.02340", "transformers" ]
null
false
DeepPavlov
null
DeepPavlov/distilrubert-base-cased-conversational
100,957
1
transformers
220
---
language:
- ru
---

# distilrubert-base-cased-conversational

Conversational DistilRuBERT \(Russian, cased, 6‑layer, 768‑hidden, 12‑heads, 135.4M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)).

Our DistilRuBERT was highly inspired by \[3\], \[4\]. Namely, we used
* KL loss (between teacher and student output logits)
* MLM loss (between token labels and student output logits)
* Cosine embedding loss between the mean of two consecutive hidden states of the teacher and one hidden state of the student

The model was trained for about 100 hrs. on 8 nVIDIA Tesla P100-SXM2.0 16Gb.

To evaluate improvements in inference speed, we ran teacher and student models on random sequences with seq_len=512, batch_size = 16 (for throughput) and batch_size=1 (for latency). All tests were performed on Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and nVIDIA Tesla P100-SXM2.0 16Gb.

| Model | Size, Mb. | CPU latency, sec. | GPU latency, sec. | CPU throughput, samples/sec. | GPU throughput, samples/sec. |
|-------------------------------------------------|------------|------------------|-------------------|------------------------------|------------------------------|
| Teacher (RuBERT-base-cased-conversational) | 679 | 0.655 | 0.031 | 0.3754 | 36.4902 |
| Student (DistilRuBERT-base-cased-conversational)| 517 | 0.3285 | 0.0212 | 0.5803 | 52.2495 |

# Citation

If you find the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:

```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
  doi = {10.48550/ARXIV.2205.02340},
  url = {https://arxiv.org/abs/2205.02340},
  author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
  keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)

\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference, Saint-Petersbourg, 2017.

\[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.

\[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation>
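The card does not include a loading snippet. The following is a minimal sketch (not part of the original card) showing how this checkpoint can typically be loaded with 🤗 Transformers; the example sentence is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Minimal loading sketch (not from the original card); the input sentence is illustrative.
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/distilrubert-base-cased-conversational")
model = AutoModel.from_pretrained("DeepPavlov/distilrubert-base-cased-conversational")

inputs = tokenizer("Привет! Как дела?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```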
Helsinki-NLP/opus-mt-en-fr
a8fbc1c711cb6263e8a20c5229b210cc05c57ff0
2021-09-09T21:35:24.000Z
[ "pytorch", "jax", "marian", "text2text-generation", "en", "fr", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-fr
100,160
3
transformers
221
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-fr * source languages: en * target languages: fr * OPUS readme: [en-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdiscussdev2015-enfr.en.fr | 33.8 | 0.602 | | newsdiscusstest2015-enfr.en.fr | 40.0 | 0.643 | | newssyscomb2009.en.fr | 29.8 | 0.584 | | news-test2008.en.fr | 27.5 | 0.554 | | newstest2009.en.fr | 29.4 | 0.577 | | newstest2010.en.fr | 32.7 | 0.596 | | newstest2011.en.fr | 34.3 | 0.611 | | newstest2012.en.fr | 31.8 | 0.592 | | newstest2013.en.fr | 33.2 | 0.589 | | Tatoeba.en.fr | 50.5 | 0.672 |
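The card lists benchmarks but no usage snippet. Below is a minimal translation sketch (not part of the original card); the input sentence is illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

# Minimal en->fr translation sketch (not from the original card); the sentence is illustrative.
model_name = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```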
hf-internal-testing/tiny-random-bart
f2efe525625e508121ef8e13b7c37e6324073378
2021-11-18T11:36:51.000Z
[ "pytorch", "tf", "bart", "transformers" ]
null
false
hf-internal-testing
null
hf-internal-testing/tiny-random-bart
100,122
null
transformers
222
Entry not found
microsoft/mpnet-base
5b7474c98ab5f1801502f9d2348485acf4cbbe71
2020-12-03T15:59:01.000Z
[ "pytorch", "tf", "mpnet", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
microsoft
null
microsoft/mpnet-base
96,566
8
transformers
223
Entry not found
ramsrigouthamg/t5-large-paraphraser-diverse-high-quality
443d721ecccee1cb38cce6f50cfd6c15e44e6ea0
2021-09-21T05:21:49.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
ramsrigouthamg
null
ramsrigouthamg/t5-large-paraphraser-diverse-high-quality
96,120
11
transformers
224
Blog post with more details, as well as an easy-to-use Google Colab link: https://towardsdatascience.com/high-quality-sentence-paraphraser-using-transformers-in-nlp-c33f4482856f

```
!pip install transformers==4.10.2
!pip install sentencepiece==0.1.96
```

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("ramsrigouthamg/t5-large-paraphraser-diverse-high-quality")
tokenizer = AutoTokenizer.from_pretrained("ramsrigouthamg/t5-large-paraphraser-diverse-high-quality")

import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("device ", device)
model = model.to(device)

# Beam Search
context = "Once, a group of frogs were roaming around the forest in search of water."
text = "paraphrase: " + context + " </s>"

encoding = tokenizer.encode_plus(text, max_length=128, padding=True, return_tensors="pt")
input_ids, attention_mask = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)

model.eval()
beam_outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_mask,
    max_length=128,
    early_stopping=True,
    num_beams=15,
    num_return_sequences=3
)

print("\n\n")
print("Original: ", context)
for beam_output in beam_outputs:
    sent = tokenizer.decode(beam_output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(sent)
```

**Output from the above code**

```
Original: Once, a group of frogs were roaming around the forest in search of water.
paraphrasedoutput: A herd of frogs were wandering around the woods in search of water.
paraphrasedoutput: A herd of frogs was wandering around the woods in search of water.
paraphrasedoutput: A herd of frogs were wandering around the forest in search of water at one time.
```
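Since the checkpoint is advertised as a *diverse* paraphraser, one possible variant — a sketch, not taken from the original card or blog post — is to reuse the tensors prepared above with diverse (group) beam search; the `num_beam_groups` and `diversity_penalty` values are illustrative.

```python
# Diverse (group) beam search variant — a sketch, not from the original card.
# Reuses `model`, `tokenizer`, `input_ids` and `attention_mask` from the snippet above;
# the num_beam_groups and diversity_penalty values are illustrative.
diverse_outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_mask,
    max_length=128,
    num_beams=15,
    num_beam_groups=5,       # beams are split into groups penalised for repeating each other
    diversity_penalty=0.7,
    num_return_sequences=3,
    early_stopping=True
)
for output in diverse_outputs:
    print(tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True))
```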
dbmdz/distilbert-base-turkish-cased
2fffa20b389e66113ec7182349efdec00fce1ff5
2021-01-24T01:01:22.000Z
[ "pytorch", "tf", "distilbert", "tr", "arxiv:1910.01108", "transformers", "license:mit" ]
null
false
dbmdz
null
dbmdz/distilbert-base-turkish-cased
95,636
5
transformers
225
--- language: tr license: mit --- # 🤗 + 📚 dbmdz Distilled Turkish BERT model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources a (cased) distilled model for Turkish 🎉 # 🇹🇷 DistilBERTurk DistilBERTurk is a community-driven cased distilled BERT model for Turkish. DistilBERTurk was trained on 7GB of the original training data that was used for training [BERTurk](https://github.com/stefan-it/turkish-bert/tree/master#stats), using the cased version of BERTurk as teacher model. *DistilBERTurk* was trained with the official Hugging Face implementation from [here](https://github.com/huggingface/transformers/tree/master/examples/distillation) for 5 days on 4 RTX 2080 TI. More details about distillation can be found in the ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108) paper by Sanh et al. (2019). ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue in the [BERTurk](https://github.com/stefan-it/turkish-bert) repository! | Model | Downloads | --------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/distilbert-base-turkish-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/distilbert-base-turkish-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/distilbert-base-turkish-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/distilbert-base-turkish-cased/vocab.txt) ## Usage With Transformers >= 2.3 our DistilBERTurk model can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/distilbert-base-turkish-cased") model = AutoModel.from_pretrained("dbmdz/distilbert-base-turkish-cased") ``` ## Results For results on PoS tagging or NER tasks, please refer to [this repository](https://github.com/stefan-it/turkish-bert). For PoS tagging, DistilBERTurk outperforms the 24-layer XLM-RoBERTa model. The overall performance difference between DistilBERTurk and the original (teacher) BERTurk model is ~1.18%. # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation. Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
Vamsi/T5_Paraphrase_Paws
3bbf07dc42d5ddc9ca77c5589ce7239b0b731832
2021-06-23T11:39:51.000Z
[ "pytorch", "tf", "jax", "t5", "text2text-generation", "en", "transformers", "paraphrase-generation", "text-generation", "Conditional Generation", "autotrain_compatible" ]
text-generation
false
Vamsi
null
Vamsi/T5_Paraphrase_Paws
95,004
10
transformers
226
---
language: "en"
tags:
- paraphrase-generation
- text-generation
- Conditional Generation
inference: false
---

# Paraphrase-Generation

## Model description

T5 model for generating paraphrases of English sentences. Trained on the [Google PAWS](https://github.com/google-research-datasets/paws) dataset.

## How to use

PyTorch and TF models are available.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Vamsi/T5_Paraphrase_Paws")
model = AutoModelForSeq2SeqLM.from_pretrained("Vamsi/T5_Paraphrase_Paws").to("cuda")  # keep the model on the same device as the inputs

sentence = "This is something which i cannot understand at all"
text = "paraphrase: " + sentence + " </s>"

encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")

outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_masks,
    max_length=256,
    do_sample=True,
    top_k=120,
    top_p=0.95,
    early_stopping=True,
    num_return_sequences=5
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```

For more reference on training your own T5 model or using this model, do check out [Paraphrase Generation](https://github.com/Vamsi995/Paraphrase-Generator).
unc-nlp/lxmert-base-uncased
628572c96242d1496147beec1c13a1bb7869605d
2021-03-10T02:39:25.000Z
[ "pytorch", "tf", "lxmert", "feature-extraction", "transformers" ]
feature-extraction
false
unc-nlp
null
unc-nlp/lxmert-base-uncased
94,868
null
transformers
227
Entry not found
ThatSkyFox/DialoGPT-small-joshua
b42987a9651745cfa1112354b16a1c454492045f
2021-10-24T17:12:13.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ThatSkyFox
null
ThatSkyFox/DialoGPT-small-joshua
94,281
null
transformers
228
---
tags:
- conversational
---

# This is a chatbot trained on the transcript of the game "The World Ends with You"
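The card gives no usage example. Below is a minimal sketch (not from the original card) of the usual DialoGPT-style chat exchange for a GPT-2-based conversational checkpoint such as this one; the user message and decoding parameters are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal DialoGPT-style chat sketch (not from the original card).
tokenizer = AutoTokenizer.from_pretrained("ThatSkyFox/DialoGPT-small-joshua")
model = AutoModelForCausalLM.from_pretrained("ThatSkyFox/DialoGPT-small-joshua")

# Encode the user message and append the end-of-sequence token, as in the DialoGPT recipe.
user_input = "Hi, who are you?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")

# Generate a reply; the decoding parameters are illustrative.
output_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_p=0.9,
)
reply = tokenizer.decode(output_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
print(reply)
```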
RUCAIBox/mvp
c1d9aeb879f3079101f716f1f6e7109fdd18b4e9
2022-06-27T02:27:44.000Z
[ "pytorch", "mvp", "en", "arxiv:2206.12131", "transformers", "text-generation", "text2text-generation", "summarization", "conversational", "license:apache-2.0" ]
text2text-generation
false
RUCAIBox
null
RUCAIBox/mvp
93,960
1
transformers
229
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
- summarization
- conversational
pipeline_tag: text2text-generation
widget:
- text: "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons."
  example_title: "Summarization"
- text: "Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?"
  example_title: "Dialog"
- text: "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man"
  example_title: "Data-to-text"
- text: "Given the story title: I think all public schools should have a uniform dress code."
  example_title: "Story Generation"
- text: "Answer the following question: From which country did Angola achieve independence in 1975?"
  example_title: "Question Answering"
- text: "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing ."
  example_title: "Question Generation"
---

# MVP

The MVP model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.

The detailed information and instructions can be found at [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).

## Model Description

MVP is supervised pre-trained using a mixture of labeled datasets. It follows a standard Transformer encoder-decoder architecture.

MVP is specially designed for natural language generation and can be adapted to a wide range of generation tasks, including but not limited to summarization, data-to-text generation, open-ended dialogue systems, story generation, question answering, question generation, task-oriented dialogue systems, commonsense generation, paraphrase generation, text style transfer, and text simplification. Our model can also be adapted to natural language understanding tasks such as sequence classification and (extractive) question answering.

## Examples

For summarization:

```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration

>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")

>>> inputs = tokenizer(
...     "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Why You Shouldn't Quit Your Job"]
```

For data-to-text generation:

```python
>>> from transformers import MvpTokenizerFast, MvpForConditionalGeneration

>>> tokenizer = MvpTokenizerFast.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")

>>> inputs = tokenizer(
...     "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Stan Lee created the character of Iron Man, a fictional superhero appearing in American comic']
```

## Related Models

**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).

**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization). - MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog). - MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text). - MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story). - MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering). - MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation). - MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog). **Multi-task models**: - MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization). - MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog). - MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text). - MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story). - MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering). - MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation). - MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog). ## Citation ```bibtex @article{tang2022mvp, title={MVP: Multi-task Supervised Pre-training for Natural Language Generation}, author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong}, journal={arXiv preprint arXiv:2206.12131}, year={2022}, url={https://arxiv.org/abs/2206.12131}, } ```
Davlan/bert-base-multilingual-cased-ner-hrl
6f69c39cadcdba0ab1401fb1f164964e7557e471
2022-06-25T17:01:26.000Z
[ "pytorch", "tf", "bert", "token-classification", "ar", "de", "en", "es", "fr", "it", "lv", "nl", "pt", "zh", "multilingual", "transformers", "autotrain_compatible" ]
token-classification
false
Davlan
null
Davlan/bert-base-multilingual-cased-ner-hrl
93,549
9
transformers
230
---
language:
- ar
- de
- en
- es
- fr
- it
- lv
- nl
- pt
- zh
- multilingual
---

# bert-base-multilingual-cased-ner-hrl

## Model description

**bert-base-multilingual-cased-ner-hrl** is a **Named Entity Recognition** model for 10 high-resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned mBERT base model. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER). Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on an aggregation of 10 high-resourced languages.

## Intended uses & limitations

#### How to use

You can use this model with the Transformers *pipeline* for NER.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("Davlan/bert-base-multilingual-cased-ner-hrl")
model = AutoModelForTokenClassification.from_pretrained("Davlan/bert-base-multilingual-cased-ner-hrl")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Nader Jokhadar had given Syria the lead with a well-struck header in the seventh minute."

ner_results = nlp(example)
print(ner_results)
```

#### Limitations and bias

This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.

## Training data

The training data for the 10 languages are from:

Language|Dataset
-|-
Arabic | [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/)
German | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
English | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
Spanish | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
French | [Europeana Newspapers](https://github.com/EuropeanaNewspapers/ner-corpora/tree/master/enp_FR.bnf.bio)
Italian | [Italian I-CAB](https://ontotext.fbk.eu/icab.html)
Latvian | [Latvian NER](https://github.com/LUMII-AILab/FullStack/tree/master/NamedEntities)
Dutch | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
Portuguese | [Paramopama + Second Harem](https://github.com/davidsbatista/NER-datasets/tree/master/Portuguese)
Chinese | [MSRA](https://huggingface.co/datasets/msra_ner)

The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:

Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location

## Training procedure

This model was trained on an NVIDIA V100 GPU with the recommended hyperparameters from the HuggingFace code.
google/mt5-base
d86816880b5acc27e697e52bc237e816dc828b17
2022-05-27T15:05:12.000Z
[ "pytorch", "tf", "jax", "mt5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:2010.11934", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/mt5-base
93,236
29
transformers
231
--- language: - multilingual - af - am - ar - az - be - bg - bn - ca - ceb - co - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fil - fr - fy - ga - gd - gl - gu - ha - haw - hi - hmn - ht - hu - hy - ig - is - it - iw - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lb - lo - lt - lv - mg - mi - mk - ml - mn - mr - ms - mt - my - ne - nl - no - ny - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - sm - sn - so - sq - sr - st - su - sv - sw - ta - te - tg - th - tr - uk - und - ur - uz - vi - xh - yi - yo - zh - zu datasets: - mc4 license: apache-2.0 --- [Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
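As the note above says, this checkpoint is pre-trained only. The snippet below is a minimal loading sketch (not part of the original card); the input sentence is illustrative, and the generated text is not expected to be meaningful until the model has been fine-tuned on a downstream task.

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

# Minimal loading sketch (not from the original card). Without task-specific
# fine-tuning, the generated text will not be meaningful.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")

inputs = tokenizer("Das Wetter ist heute schön.", return_tensors="pt")  # illustrative input
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```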
microsoft/MiniLM-L12-H384-uncased
44acabbec0ef496f6dbc93adadea57f376b7c0ec
2021-05-19T23:29:48.000Z
[ "pytorch", "tf", "jax", "bert", "arxiv:2002.10957", "arxiv:1810.04805", "transformers", "text-classification", "license:mit" ]
text-classification
false
microsoft
null
microsoft/MiniLM-L12-H384-uncased
92,474
14
transformers
232
--- thumbnail: https://huggingface.co/front/thumbnails/microsoft.png tags: - text-classification license: mit --- ## MiniLM: Small and Fast Pre-trained Models for Language Understanding and Generation MiniLM is a distilled model from the paper "[MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers](https://arxiv.org/abs/2002.10957)". Please find the information about preprocessing, training and full details of the MiniLM in the [original MiniLM repository](https://github.com/microsoft/unilm/blob/master/minilm/). Please note: This checkpoint can be an inplace substitution for BERT and it needs to be fine-tuned before use! ### English Pre-trained Models We release the **uncased** **12**-layer model with **384** hidden size distilled from an in-house pre-trained [UniLM v2](/unilm) model in BERT-Base size. - MiniLMv1-L12-H384-uncased: 12-layer, 384-hidden, 12-heads, 33M parameters, 2.7x faster than BERT-Base #### Fine-tuning on NLU tasks We present the dev results on SQuAD 2.0 and several GLUE benchmark tasks. | Model | #Param | SQuAD 2.0 | MNLI-m | SST-2 | QNLI | CoLA | RTE | MRPC | QQP | |---------------------------------------------------|--------|-----------|--------|-------|------|------|------|------|------| | [BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 109M | 76.8 | 84.5 | 93.2 | 91.7 | 58.9 | 68.6 | 87.3 | 91.3 | | **MiniLM-L12xH384** | 33M | 81.7 | 85.7 | 93.0 | 91.5 | 58.5 | 73.3 | 89.5 | 91.3 | ### Citation If you find MiniLM useful in your research, please cite the following paper: ``` latex @misc{wang2020minilm, title={MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers}, author={Wenhui Wang and Furu Wei and Li Dong and Hangbo Bao and Nan Yang and Ming Zhou}, year={2020}, eprint={2002.10957}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
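As noted above, this checkpoint needs to be fine-tuned before use; the snippet below is only a minimal loading sketch (not part of the original card) showing that it can be used as a drop-in BERT-style encoder. The input sentence is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Minimal loading sketch (not from the original card); the checkpoint still needs
# task-specific fine-tuning, as noted above.
tokenizer = AutoTokenizer.from_pretrained("microsoft/MiniLM-L12-H384-uncased")
model = AutoModel.from_pretrained("microsoft/MiniLM-L12-H384-uncased")

inputs = tokenizer("MiniLM is a distilled Transformer encoder.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, 384)
```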
sentence-transformers/distiluse-base-multilingual-cased
47709c6fb2c53d30c871b05b8fb2693a5428d96a
2022-06-21T14:55:22.000Z
[ "pytorch", "tf", "rust", "distilbert", "feature-extraction", "multilingual", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/distiluse-base-multilingual-cased
91,678
2
sentence-transformers
233
--- pipeline_tag: sentence-similarity language: multilingual license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/distiluse-base-multilingual-cased This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/distiluse-base-multilingual-cased') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distiluse-base-multilingual-cased) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
joeddav/bart-large-mnli-yahoo-answers
d836606b3cf20652cf30283d6884ae26a11e5392
2021-06-14T10:44:33.000Z
[ "pytorch", "jax", "bart", "text-classification", "en", "dataset:yahoo-answers", "arxiv:1909.00161", "transformers", "zero-shot-classification" ]
zero-shot-classification
false
joeddav
null
joeddav/bart-large-mnli-yahoo-answers
91,462
3
transformers
234
---
language: en
tags:
- text-classification
- pytorch
datasets:
- yahoo-answers
pipeline_tag: zero-shot-classification
---

# bart-large-mnli-yahoo-answers

## Model Description

This model takes [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) and fine-tunes it on Yahoo Answers topic classification. It can be used to predict whether a topic label can be assigned to a given sequence, whether or not the label has been seen before.

You can play with an interactive demo of this zero-shot technique with this model, as well as the non-finetuned [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli), [here](https://huggingface.co/zero-shot/).

## Intended Usage

This model was fine-tuned on topic classification and will perform best at zero-shot topic classification. Use `hypothesis_template="This text is about {}."` as this is the template used during fine-tuning.

For settings other than topic classification, you can use any model pre-trained on MNLI such as [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or [roberta-large-mnli](https://huggingface.co/roberta-large-mnli) with the same code as written below.

#### With the zero-shot classification pipeline

The model can be used with the `zero-shot-classification` pipeline like so:

```python
from transformers import pipeline

nlp = pipeline("zero-shot-classification", model="joeddav/bart-large-mnli-yahoo-answers")

sequence_to_classify = "Who are you voting for in 2020?"
candidate_labels = ["Europe", "public health", "politics", "elections"]
hypothesis_template = "This text is about {}."

nlp(sequence_to_classify, candidate_labels, multi_class=True, hypothesis_template=hypothesis_template)
```

#### With manual PyTorch

```python
# pose sequence as an NLI premise and label as a hypothesis
from transformers import BartForSequenceClassification, BartTokenizer

nli_model = BartForSequenceClassification.from_pretrained('joeddav/bart-large-mnli-yahoo-answers')
tokenizer = BartTokenizer.from_pretrained('joeddav/bart-large-mnli-yahoo-answers')

# `sequence`, `label`, and `device` are assumed to be defined by the caller
premise = sequence
hypothesis = f'This text is about {label}.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     max_length=tokenizer.max_len,
                     truncation_strategy='only_first')
logits = nli_model(x.to(device))[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:,[0,2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:,1]
```

## Training

The model is a pre-trained MNLI classifier further fine-tuned on Yahoo Answers topic classification in the manner originally described in [Yin et al. 2019](https://arxiv.org/abs/1909.00161) and [this blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html). That is, each sequence is fed to the pre-trained NLI model in place of the premise and each candidate label as the hypothesis, formatted like so: `This text is about {class name}.` For each example in the training set, a true and a randomly-selected false label hypothesis are fed to the model which must predict which labels are valid and which are false.

Since this method studies the ability to classify unseen labels after being trained on a different set of labels, the model is only trained on 5 out of the 10 labels in Yahoo Answers. These are "Society & Culture", "Health", "Computers & Internet", "Business & Finance", and "Family & Relationships".
## Evaluation Results This model was evaluated with the label-weighted F1 of the _seen_ and _unseen_ labels. That is, for each example the model must predict from one of the 10 corpus labels. The F1 is reported for the labels seen during training as well as the labels unseen during training. We found an F1 score of `.68` and `.72` for the unseen and seen labels, respectively. In order to adjust for the in-vs-out of distribution labels, we subtract a fixed amount of 30% from the normalized probabilities of the _seen_ labels, as described in [Yin et al. 2019](https://arxiv.org/abs/1909.00161) and [our blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
squeezebert/squeezebert-uncased
7978b0c163f11850ec35d5cd541828159313ac41
2020-12-11T22:02:17.000Z
[ "pytorch", "squeezebert", "arxiv:2006.11316", "arxiv:1904.00962", "transformers" ]
null
false
squeezebert
null
squeezebert/squeezebert-uncased
90,492
0
transformers
235
language: en license: bsd datasets: - bookcorpus - wikipedia --- # SqueezeBERT pretrained model This model, `squeezebert-uncased`, is a pretrained model for the English language using a masked language modeling (MLM) and Sentence Order Prediction (SOP) objective. SqueezeBERT was introduced in [this paper](https://arxiv.org/abs/2006.11316). This model is case-insensitive. The model architecture is similar to BERT-base, but with the pointwise fully-connected layers replaced with [grouped convolutions](https://blog.yani.io/filter-group-tutorial/). The authors found that SqueezeBERT is 4.3x faster than `bert-base-uncased` on a Google Pixel 3 smartphone. ## Pretraining ### Pretraining data - [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of thousands of unpublished books - [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) ### Pretraining procedure The model is pretrained using the Masked Language Model (MLM) and Sentence Order Prediction (SOP) tasks. (Author's note: If you decide to pretrain your own model, and you prefer to train with MLM only, that should work too.) From the SqueezeBERT paper: > We pretrain SqueezeBERT from scratch (without distillation) using the [LAMB](https://arxiv.org/abs/1904.00962) optimizer, and we employ the hyperparameters recommended by the LAMB authors: a global batch size of 8192, a learning rate of 2.5e-3, and a warmup proportion of 0.28. Following the LAMB paper's recommendations, we pretrain for 56k steps with a maximum sequence length of 128 and then for 6k steps with a maximum sequence length of 512. ## Finetuning The SqueezeBERT paper results from 2 approaches to finetuning the model: - "finetuning without bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on each GLUE task - "finetuning with bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on a MNLI with distillation from a teacher model. Then, use the MNLI-finetuned SqueezeBERT model as a student model to finetune on each of the other GLUE tasks (e.g. RTE, MRPC, …) with distillation from a task-specific teacher model. A detailed discussion of the hyperparameters used for finetuning is provided in the appendix of the [SqueezeBERT paper](https://arxiv.org/abs/2006.11316). Note that finetuning SqueezeBERT with distillation is not yet implemented in this repo. If the author (Forrest Iandola - forrest.dnn@gmail.com) gets enough encouragement from the user community, he will add example code to Transformers for finetuning SqueezeBERT with distillation. This model, `squeezebert/squeezebert-uncased`, has been pretrained but not finetuned. For most text classification tasks, we recommend using squeezebert-mnli-headless as a starting point. ### How to finetune To try finetuning SqueezeBERT on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task, you can run the following command: ``` ./utils/download_glue_data.py python examples/text-classification/run_glue.py \ --model_name_or_path squeezebert-base-headless \ --task_name mrpc \ --data_dir ./glue_data/MRPC \ --output_dir ./models/squeezebert_mrpc \ --overwrite_output_dir \ --do_train \ --do_eval \ --num_train_epochs 10 \ --learning_rate 3e-05 \ --per_device_train_batch_size 16 \ --save_steps 20000 ``` ## BibTeX entry and citation info ``` @article{2020_SqueezeBERT, author = {Forrest N. Iandola and Albert E. Shaw and Ravi Krishna and Kurt W. 
Keutzer}, title = {{SqueezeBERT}: What can computer vision teach NLP about efficient neural networks?}, journal = {arXiv:2006.11316}, year = {2020} } ```
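For completeness, here is a minimal loading sketch with 🤗 Transformers (not part of the original card); the example sentence is illustrative, and the checkpoint is meant to be fine-tuned as described above.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Minimal loading sketch (not from the original card); squeezebert-uncased is a
# pretrained encoder that is intended to be fine-tuned before use.
tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
model = AutoModel.from_pretrained("squeezebert/squeezebert-uncased")

inputs = tokenizer("SqueezeBERT replaces pointwise layers with grouped convolutions.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```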
philschmid/distilbart-cnn-12-6-samsum
a823ff6685caf2db51911e2ad903aa55e5defb29
2022-07-26T20:01:15.000Z
[ "pytorch", "bart", "text2text-generation", "en", "dataset:samsum", "transformers", "sagemaker", "summarization", "license:apache-2.0", "model-index", "autotrain_compatible" ]
summarization
false
philschmid
null
philschmid/distilbart-cnn-12-6-samsum
89,618
4
transformers
236
--- language: en tags: - sagemaker - bart - summarization license: apache-2.0 datasets: - samsum widget: - text: "Jeff: Can I train a \U0001F917 Transformers model on Amazon SageMaker? \n\ Philipp: Sure you can use the new Hugging Face Deep Learning Container. \nJeff:\ \ ok.\nJeff: and how can I get started? \nJeff: where can I find documentation?\ \ \nPhilipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face " model-index: - name: philschmid/distilbart-cnn-12-6-samsum results: - task: type: summarization name: Summarization dataset: name: samsum type: samsum config: samsum split: test metrics: - name: ROUGE-1 type: rouge value: 41.0895 verified: true - name: ROUGE-2 type: rouge value: 20.7459 verified: true - name: ROUGE-L type: rouge value: 31.5952 verified: true - name: ROUGE-LSUM type: rouge value: 38.3389 verified: true - name: loss type: loss value: 1.4566329717636108 verified: true - name: gen_len type: gen_len value: 59.6032 verified: true - task: type: summarization name: Summarization dataset: name: xsum type: xsum config: default split: test metrics: - name: ROUGE-1 type: rouge value: 21.1644 verified: true - name: ROUGE-2 type: rouge value: 4.0659 verified: true - name: ROUGE-L type: rouge value: 13.9414 verified: true - name: ROUGE-LSUM type: rouge value: 17.0718 verified: true - name: loss type: loss value: 3.002755880355835 verified: true - name: gen_len type: gen_len value: 71.2969 verified: true - task: type: summarization name: Summarization dataset: name: cnn_dailymail type: cnn_dailymail config: 3.0.0 split: test metrics: - name: ROUGE-1 type: rouge value: 42.9764 verified: true - name: ROUGE-2 type: rouge value: 19.8711 verified: true - name: ROUGE-L type: rouge value: 29.5196 verified: true - name: ROUGE-LSUM type: rouge value: 39.959 verified: true - name: loss type: loss value: 3.014679193496704 verified: true - name: gen_len type: gen_len value: 81.956 verified: true --- ## `distilbart-cnn-12-6-samsum` This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container. 
For more information, look at:
- [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html)
- [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker)
- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)
- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)
- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)

## Hyperparameters

```json
{
  "dataset_name": "samsum",
  "do_eval": true,
  "do_train": true,
  "fp16": true,
  "learning_rate": 5e-05,
  "model_name_or_path": "sshleifer/distilbart-cnn-12-6",
  "num_train_epochs": 3,
  "output_dir": "/opt/ml/model",
  "per_device_eval_batch_size": 8,
  "per_device_train_batch_size": 8,
  "seed": 7
}
```

## Train results

| key | value |
| --- | ----- |
| epoch | 3.0 |
| init_mem_cpu_alloc_delta | 180338 |
| init_mem_cpu_peaked_delta | 18282 |
| init_mem_gpu_alloc_delta | 1222242816 |
| init_mem_gpu_peaked_delta | 0 |
| train_mem_cpu_alloc_delta | 6971403 |
| train_mem_cpu_peaked_delta | 640733 |
| train_mem_gpu_alloc_delta | 4910897664 |
| train_mem_gpu_peaked_delta | 23331969536 |
| train_runtime | 155.2034 |
| train_samples | 14732 |
| train_samples_per_second | 2.242 |

## Eval results

| key | value |
| --- | ----- |
| epoch | 3.0 |
| eval_loss | 1.4209576845169067 |
| eval_mem_cpu_alloc_delta | 868003 |
| eval_mem_cpu_peaked_delta | 18250 |
| eval_mem_gpu_alloc_delta | 0 |
| eval_mem_gpu_peaked_delta | 328244736 |
| eval_runtime | 0.6088 |
| eval_samples | 818 |
| eval_samples_per_second | 1343.647 |

## Usage

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="philschmid/distilbart-cnn-12-6-samsum")

conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''
summarizer(conversation)
```
distilbert-base-uncased-distilled-squad
3ab4b58da34f3a23bbdd2626fc58ee6c91f9890b
2022-07-22T08:03:29.000Z
[ "pytorch", "tf", "tflite", "distilbert", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "autotrain_compatible" ]
question-answering
false
null
null
distilbert-base-uncased-distilled-squad
89,003
10
transformers
237
---
language: en
datasets:
- squad
widget:
- text: "Which name is also used to describe the Amazon rainforest in English?"
  context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."
- text: "How many square kilometers of rainforest is covered in the basin?"
  context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."
license: apache-2.0
---

# DistilBERT base uncased distilled SQuAD

## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)

## Model Details

**Model Description:** The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than *bert-base-uncased* and runs 60% faster, while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark.
This model is a fine-tune checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned using (a second step of) knowledge distillation on [SQuAD v1.1](https://huggingface.co/datasets/squad). - **Developed by:** Hugging Face - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** Apache 2.0 - **Related Models:** [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased) - **Resources for more information:** - See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including this model) - See [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure ## How to Get Started with the Model Use the code below to get started with the model. ```python >>> from transformers import pipeline >>> question_answerer = pipeline("question-answering", model='distilbert-base-uncased-distilled-squad') >>> context = r""" ... Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a ... question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune ... a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script. ... """ >>> result = question_answerer(question="What is a good example of a question answering dataset?", context=context) >>> print( ... f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}" ...) Answer: 'SQuAD dataset', score: 0.4704, start: 147, end: 160 ``` Here is how to use this model in PyTorch: ```python from transformers import DistilBertTokenizer, DistilBertModel import torch tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased-distilled-squad') model = DistilBertModel.from_pretrained('distilbert-base-uncased-distilled-squad') question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" inputs = tokenizer(question, text, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) print(outputs) ``` And in TensorFlow: ```python from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering import tensorflow as tf tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-distilled-squad") model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased-distilled-squad") question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" inputs = tokenizer(question, text, return_tensors="tf") outputs = model(**inputs) answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] tokenizer.decode(predict_answer_tokens) ``` ## Uses This model can be used for question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. 
## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:

```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-uncased-distilled-squad')

>>> context = r"""
... Alice is sitting on the bench. Bob is sitting next to her.
... """

>>> result = question_answerer(question="Who is the CEO?", context=context)
>>> print(
... f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
...)
Answer: 'Bob', score: 0.4183, start: 32, end: 35
```

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

## Training

#### Training Data

The [distilbert-base-uncased model](https://huggingface.co/distilbert-base-uncased) describes its training data as:

> DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

To learn more about the SQuAD v1.1 dataset, see the [SQuAD v1.1 data card](https://huggingface.co/datasets/squad).

#### Training Procedure

##### Preprocessing

See the [distilbert-base-uncased model card](https://huggingface.co/distilbert-base-uncased) for further details.

##### Pretraining

See the [distilbert-base-uncased model card](https://huggingface.co/distilbert-base-uncased) for further details.

## Evaluation

As discussed in the [model repository](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md):

> This model reaches a F1 score of 86.9 on the [SQuAD v1.1] dev set (for comparison, Bert bert-base-uncased version reaches a F1 score of 88.5).

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/pdf/1910.01108.pdf). Note that these details cover only the training of DistilBERT itself, not the fine-tuning on SQuAD.

- **Hardware Type:** 8 16GB V100 GPUs
- **Hours used:** 90 hours
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown

## Technical Specifications

See the [associated paper](https://arxiv.org/abs/1910.01108) for details on the modeling architecture, objective, compute infrastructure, and training details.

## Citation Information

```bibtex
@inproceedings{sanh2019distilbert,
  title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
  booktitle={NeurIPS EMC^2 Workshop},
  year={2019}
}
```

APA:
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019).
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. ## Model Card Authors This model card was written by the Hugging Face team.
facebook/dpr-reader-single-nq-base
5114ec3299284784702848f7d4e598c59df1a35c
2020-11-25T16:59:53.000Z
[ "pytorch", "tf", "dpr", "transformers" ]
null
false
facebook
null
facebook/dpr-reader-single-nq-base
88,755
1
transformers
238
Entry not found
Davlan/xlm-roberta-large-ner-hrl
690312971788985228c98683d4b816fbf026b346
2022-06-27T10:49:56.000Z
[ "pytorch", "tf", "xlm-roberta", "token-classification", "ar", "de", "en", "es", "fr", "it", "lv", "nl", "pt", "zh", "multilingual", "transformers", "autotrain_compatible" ]
token-classification
false
Davlan
null
Davlan/xlm-roberta-large-ner-hrl
88,660
3
transformers
239
---
language:
- ar
- de
- en
- es
- fr
- it
- lv
- nl
- pt
- zh
- multilingual
---

# xlm-roberta-large-ner-hrl

## Model description
**xlm-roberta-large-ner-hrl** is a **Named Entity Recognition** model for 10 high-resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned XLM-RoBERTa large model. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER). Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of datasets from these 10 high-resourced languages.

## Intended uses & limitations
#### How to use
You can use this model with the Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-large-ner-hrl")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-large-ner-hrl")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Nader Jokhadar had given Syria the lead with a well-struck header in the seventh minute."
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. It may not generalize well to all use cases in different domains.

## Training data
The training data for the 10 languages are from:

Language|Dataset
-|-
Arabic | [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/)
German | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
English | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
Spanish | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
French | [Europeana Newspapers](https://github.com/EuropeanaNewspapers/ner-corpora/tree/master/enp_FR.bnf.bio)
Italian | [Italian I-CAB](https://ontotext.fbk.eu/icab.html)
Latvian | [Latvian NER](https://github.com/LUMII-AILab/FullStack/tree/master/NamedEntities)
Dutch | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
Portuguese |[Paramopama + Second Harem](https://github.com/davidsbatista/NER-datasets/tree/master/Portuguese)
Chinese | [MSRA](https://huggingface.co/datasets/msra_ner)

The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token is classified as one of the following classes:

Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location

## Training procedure
This model was trained on an NVIDIA V100 GPU with the recommended hyperparameters from the HuggingFace code.
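The pipeline output above is token-level (IOB tags). If your installed `transformers` version supports the `aggregation_strategy` argument (an assumption about your environment, not something this card states), adjacent B-/I- pieces can be merged into whole entity spans; a minimal sketch:

```python
from transformers import pipeline

# Hypothetical grouping example: `aggregation_strategy="simple"` merges B-/I- sub-tokens
# into single entity spans (requires a reasonably recent transformers release).
grouped_ner = pipeline(
    "ner",
    model="Davlan/xlm-roberta-large-ner-hrl",
    tokenizer="Davlan/xlm-roberta-large-ner-hrl",
    aggregation_strategy="simple",
)
print(grouped_ner("Nader Jokhadar had given Syria the lead with a well-struck header in the seventh minute."))
# Expected output shape: a list of dicts with entity_group, score, word, start, end.
```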
patrickvonplaten/t5-tiny-random
a3735d6adf1f23b7c32e6622fd6da7bc46d7f123
2021-11-03T17:13:16.000Z
[ "pytorch", "tf", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
patrickvonplaten
null
patrickvonplaten/t5-tiny-random
88,297
1
transformers
240
hf-internal-testing/tiny-random-mbart
63c7077c54936948aeb5a675e10489c945957824
2021-12-28T12:24:41.000Z
[ "pytorch", "tf", "mbart", "transformers" ]
null
false
hf-internal-testing
null
hf-internal-testing/tiny-random-mbart
88,207
null
transformers
241
Entry not found
EleutherAI/gpt-neo-2.7B
51568a6e0ae813a3f2a9da558ab7beac5e3acc24
2021-12-31T13:46:21.000Z
[ "pytorch", "jax", "rust", "gpt_neo", "text-generation", "en", "dataset:The Pile", "transformers", "text generation", "causal-lm", "license:apache-2.0" ]
text-generation
false
EleutherAI
null
EleutherAI/gpt-neo-2.7B
87,789
86
transformers
242
--- language: - en tags: - text generation - pytorch - causal-lm license: apache-2.0 datasets: - The Pile --- # GPT-Neo 2.7B ## Model Description GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model. ## Training data GPT-Neo 2.7B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model. ## Training procedure This model was trained for 420 billion tokens over 400,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss. ## Intended Use and Limitations This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B') >>> generator("EleutherAI has", do_sample=True, min_length=50) [{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ## Eval results All evaluations were done using our [evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness). Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our [Discord](https://discord.gg/vtRgjbM). 
### Linguistic Reasoning | Model and Size | Pile BPB | Pile PPL | Wikitext PPL | Lambada PPL | Lambada Acc | Winogrande | Hellaswag | | ---------------- | ---------- | ---------- | ------------- | ----------- | ----------- | ---------- | ----------- | | GPT-Neo 1.3B | 0.7527 | 6.159 | 13.10 | 7.498 | 57.23% | 55.01% | 38.66% | | GPT-2 1.5B | 1.0468 | ----- | 17.48 | 10.634 | 51.21% | 59.40% | 40.03% | | **GPT-Neo 2.7B** | **0.7165** | **5.646** | **11.39** | **5.626** | **62.22%** | **56.50%** | **42.73%** | | GPT-3 Ada | 0.9631 | ----- | ----- | 9.954 | 51.60% | 52.90% | 35.93% | ### Physical and Scientific Reasoning | Model and Size | MathQA | PubMedQA | Piqa | | ---------------- | ---------- | ---------- | ----------- | | GPT-Neo 1.3B | 24.05% | 54.40% | 71.11% | | GPT-2 1.5B | 23.64% | 58.33% | 70.78% | | **GPT-Neo 2.7B** | **24.72%** | **57.54%** | **72.14%** | | GPT-3 Ada | 24.29% | 52.80% | 68.88% | ### Down-Stream Applications TBD ### BibTeX entry and citation info To cite this model, use ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } @article{gao2020pile, title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } ```
allenai/scibert_scivocab_cased
9be298ced05121c9e6e2b2cb9f508b47b8eae650
2021-05-19T11:40:28.000Z
[ "pytorch", "jax", "bert", "transformers" ]
null
false
allenai
null
allenai/scibert_scivocab_cased
86,580
4
transformers
243
# SciBERT This is the pretrained model presented in [SciBERT: A Pretrained Language Model for Scientific Text](https://www.aclweb.org/anthology/D19-1371/), which is a BERT model trained on scientific text. The training corpus was papers taken from [Semantic Scholar](https://www.semanticscholar.org). Corpus size is 1.14M papers, 3.1B tokens. We use the full text of the papers in training, not just abstracts. SciBERT has its own wordpiece vocabulary (scivocab) that's built to best match the training corpus. We trained cased and uncased versions. Available models include: * `scibert_scivocab_cased` * `scibert_scivocab_uncased` The original repo can be found [here](https://github.com/allenai/scibert). If using these models, please cite the following paper: ``` @inproceedings{beltagy-etal-2019-scibert, title = "SciBERT: A Pretrained Language Model for Scientific Text", author = "Beltagy, Iz and Lo, Kyle and Cohan, Arman", booktitle = "EMNLP", year = "2019", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D19-1371" } ```
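The card above does not include a loading snippet. Since this is a standard BERT checkpoint with its own scivocab wordpiece vocabulary, it can be loaded with the usual `transformers` auto classes; a minimal sketch (the input sentence is an arbitrary example, not from the original card):

```python
from transformers import AutoTokenizer, AutoModel

# Load the cased SciBERT checkpoint together with its scivocab tokenizer.
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_cased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_cased")

inputs = tokenizer("We pretrain BERT on a corpus of scientific papers.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```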
mfeb/albert-xxlarge-v2-squad2
e734d3529f402760605a24af13e02f6f092e96a0
2020-04-24T16:10:08.000Z
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
mfeb
null
mfeb/albert-xxlarge-v2-squad2
86,458
2
transformers
244
Entry not found
cmarkea/distilcamembert-base-sentiment
b7804e295dc3cf2aa8ce8cff83f22e0bdd249558
2022-05-24T15:56:45.000Z
[ "pytorch", "tf", "camembert", "text-classification", "fr", "dataset:amazon_reviews_multi", "dataset:allocine", "transformers", "license:mit" ]
text-classification
false
cmarkea
null
cmarkea/distilcamembert-base-sentiment
86,119
7
transformers
245
---
language: fr
license: mit
datasets:
- amazon_reviews_multi
- allocine
widget:
- text: "Je pensais lire un livre nul, mais finalement je l'ai trouvé super !"
- text: "Cette banque est très bien, mais elle n'offre pas les services de paiements sans contact."
- text: "Cette banque est très bien et elle offre en plus les services de paiements sans contact."
---

DistilCamemBERT-Sentiment
=========================

We present DistilCamemBERT-Sentiment, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the sentiment analysis task in French. This model is built on 2 datasets, [Amazon Reviews](https://huggingface.co/datasets/amazon_reviews_multi) and [Allociné.fr](https://huggingface.co/datasets/allocine), in order to minimize bias. Indeed, Amazon reviews are quite similar to each other and relatively short, whereas Allociné reviews are long, rich texts.

This model is close to [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine), which is based on the [CamemBERT](https://huggingface.co/camembert-base) model. The problem with CamemBERT-based models appears at scaling time, for example in the production phase, where inference cost can become a technological issue. To counteract this effect, we propose this model, which **divides the inference time by 2** for the same power consumption, thanks to [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base).

Dataset
-------

The dataset is composed of 204,993 training reviews and 4,999 test reviews from Amazon, plus 235,516 and 4,729 reviews respectively from the [Allocine website](https://www.allocine.fr/). The dataset is labeled into 5 categories:
* 1 star: represents a very bad appreciation,
* 2 stars: bad appreciation,
* 3 stars: neutral appreciation,
* 4 stars: good appreciation,
* 5 stars: very good appreciation.

Evaluation results
------------------

In addition to accuracy (called here *exact accuracy*), and in order to be robust to +/-1 star estimation errors, we take the following definition as a performance measure:

$$\mathrm{top\!-\!2\; acc}=\frac{1}{|\mathcal{O}|}\sum_{i\in\mathcal{O}}\sum_{0\leq l < 2}\mathbb{1}(\hat{f}_{i,l}=y_i)$$

where \\(\hat{f}_l\\) is the l-th largest predicted label, \\(y\\) the true label, \\(\mathcal{O}\\) is the test set of the observations and \\(\mathbb{1}\\) is the indicator function.

| **class** | **exact accuracy (%)** | **top-2 acc (%)** | **support** |
| :---------: | :--------------------: | :---------------: | :---------: |
| **global** | 61.01 | 88.80 | 9,698 |
| **1 star** | 87.21 | 77.17 | 1,905 |
| **2 stars** | 79.19 | 84.75 | 1,935 |
| **3 stars** | 77.85 | 78.98 | 1,974 |
| **4 stars** | 78.61 | 90.22 | 1,952 |
| **5 stars** | 85.96 | 82.92 | 1,932 |

Benchmark
---------

This model is compared to 3 reference models (see below). As the models do not share the same definition of targets, we detail the performance measure used for each of them. For the mean inference time measure, an **AMD Ryzen 5 4500U @ 2.3GHz with 6 cores** was used.

#### bert-base-multilingual-uncased-sentiment

[nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) is based on the multilingual, uncased version of BERT. This sentiment analyzer is trained on Amazon reviews, similarly to our model, hence the targets and their definitions are the same.
| **model** | **time (ms)** | **exact accuracy (%)** | **top-2 acc (%)** | | :-------: | :-----------: | :--------------------: | :---------------: | | [cmarkea/distilcamembert-base-sentiment](https://huggingface.co/cmarkea/distilcamembert-base-sentiment) | **95.56** | **61.01** | **88.80** | | [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) | 187.70 | 54.41 | 82.82 | #### tf-allociné and barthez-sentiment-classification [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine) based on [CamemBERT](https://huggingface.co/camembert-base) model and [moussaKam/barthez-sentiment-classification](https://huggingface.co/moussaKam/barthez-sentiment-classification) based on [BARThez](https://huggingface.co/moussaKam/barthez) use the same bi-class definition between them. To bring this back to a two-class problem, we will only consider the *"1 star"* and *"2 stars"* labels for the *negative* sentiments and *"4 stars"* and *"5 stars"* for *positive* sentiments. We exclude the *"3 stars"* which can be interpreted as a *neutral* class. In this context, the problem of +/-1 star estimation errors disappears. Then we use only the classical accuracy definition. | **model** | **time (ms)** | **exact accuracy (%)** | | :-------: | :-----------: | :--------------------: | | [cmarkea/distilcamembert-base-sentiment](https://huggingface.co/cmarkea/distilcamembert-base-sentiment) | **95.56** | **97.52** | | [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine) | 329.74 | 95.69 | | [moussaKam/barthez-sentiment-classification](https://huggingface.co/moussaKam/barthez-sentiment-classification) | 197.95 | 94.29 | How to use DistilCamemBERT-Sentiment ------------------------------------ ```python from transformers import pipeline analyzer = pipeline( task='text-classification', model="cmarkea/distilcamembert-base-sentiment", tokenizer="cmarkea/distilcamembert-base-sentiment" ) result = analyzer( "J'aime me promener en forêt même si ça me donne mal aux pieds.", return_all_scores=True ) result [{'label': '1 star', 'score': 0.047529436647892}, {'label': '2 stars', 'score': 0.14150355756282806}, {'label': '3 stars', 'score': 0.3586442470550537}, {'label': '4 stars', 'score': 0.3181498646736145}, {'label': '5 stars', 'score': 0.13417290151119232}] ``` Citation -------- ```bibtex @inproceedings{delestre:hal-03674695, TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}}, AUTHOR = {Delestre, Cyrile and Amar, Abibatou}, URL = {https://hal.archives-ouvertes.fr/hal-03674695}, BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}}, ADDRESS = {Vannes, France}, YEAR = {2022}, MONTH = Jul, KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation}, PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf}, HAL_ID = {hal-03674695}, HAL_VERSION = {v1}, } ```
dmis-lab/biobert-v1.1
551ca18efd7f052c8dfa0b01c94c2a8e68bc5488
2021-05-19T16:03:17.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
dmis-lab
null
dmis-lab/biobert-v1.1
85,159
6
transformers
246
Entry not found
DeepPavlov/rubert-base-cased-sentence
78b5122d6365337dd4114281b0d08cd1edbb3bc8
2021-05-18T18:18:43.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1508.05326", "arxiv:1809.05053", "arxiv:1908.10084", "transformers" ]
feature-extraction
false
DeepPavlov
null
DeepPavlov/rubert-base-cased-sentence
84,680
2
transformers
247
---
language:
- ru
---

# rubert-base-cased-sentence

Sentence RuBERT \(Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters\) is a representation-based sentence encoder for Russian. It is initialized with RuBERT and fine-tuned on SNLI\[1\] Google-translated to Russian and on the Russian part of the XNLI dev set\[2\]. Sentence representations are mean-pooled token embeddings, in the same manner as in Sentence-BERT\[3\].

\[1\]: S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. \(2015\) A large annotated corpus for learning natural language inference. arXiv preprint [arXiv:1508.05326](https://arxiv.org/abs/1508.05326)

\[2\]: Williams A., Bowman S. \(2018\) XNLI: Evaluating Cross-lingual Sentence Representations. arXiv preprint [arXiv:1809.05053](https://arxiv.org/abs/1809.05053)

\[3\]: N. Reimers, I. Gurevych \(2019\) Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv preprint [arXiv:1908.10084](https://arxiv.org/abs/1908.10084)
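A minimal usage sketch of the mean-pooling recipe described above, using the standard `transformers` classes (the example sentences are arbitrary, not from the original card):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased-sentence")
model = AutoModel.from_pretrained("DeepPavlov/rubert-base-cased-sentence")

sentences = ["Привет, мир!", "Как дела?"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)

# Mean-pool token embeddings, ignoring padded positions, as described above.
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (output.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embeddings.shape)  # (2, 768)
```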
sshleifer/tiny-gpt2
5f91d94bd9cd7190a9f3216ff93cd1dd95f2c7be
2021-05-23T12:55:11.000Z
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
sshleifer
null
sshleifer/tiny-gpt2
84,146
4
transformers
248
Entry not found
neuralmind/bert-large-portuguese-cased
aa302f6ea73b759f7df9cad58bd272127b67ec28
2021-05-20T01:31:09.000Z
[ "pytorch", "jax", "bert", "fill-mask", "pt", "dataset:brWaC", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
neuralmind
null
neuralmind/bert-large-portuguese-cased
83,959
13
transformers
249
--- language: pt license: mit tags: - bert - pytorch datasets: - brWaC --- # BERTimbau Large (aka "bert-large-portuguese-cased") ![Bert holding a berimbau](https://imgur.com/JZ7Hynh.jpg) ## Introduction BERTimbau Large is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large. For further information or requests, please go to [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/). ## Available models | Model | Arch. | #Layers | #Params | | ---------------------------------------- | ---------- | ------- | ------- | | `neuralmind/bert-base-portuguese-cased` | BERT-Base | 12 | 110M | | `neuralmind/bert-large-portuguese-cased` | BERT-Large | 24 | 335M | ## Usage ```python from transformers import AutoTokenizer # Or BertTokenizer from transformers import AutoModelForPreTraining # Or BertForPreTraining for loading pretraining heads from transformers import AutoModel # or BertModel, for BERT without pretraining heads model = AutoModelForPreTraining.from_pretrained('neuralmind/bert-large-portuguese-cased') tokenizer = AutoTokenizer.from_pretrained('neuralmind/bert-large-portuguese-cased', do_lower_case=False) ``` ### Masked language modeling prediction example ```python from transformers import pipeline pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer) pipe('Tinha uma [MASK] no meio do caminho.') # [{'score': 0.5054386258125305, # 'sequence': '[CLS] Tinha uma pedra no meio do caminho. [SEP]', # 'token': 5028, # 'token_str': 'pedra'}, # {'score': 0.05616172030568123, # 'sequence': '[CLS] Tinha uma curva no meio do caminho. [SEP]', # 'token': 9562, # 'token_str': 'curva'}, # {'score': 0.02348282001912594, # 'sequence': '[CLS] Tinha uma parada no meio do caminho. [SEP]', # 'token': 6655, # 'token_str': 'parada'}, # {'score': 0.01795753836631775, # 'sequence': '[CLS] Tinha uma mulher no meio do caminho. [SEP]', # 'token': 2606, # 'token_str': 'mulher'}, # {'score': 0.015246033668518066, # 'sequence': '[CLS] Tinha uma luz no meio do caminho. [SEP]', # 'token': 3377, # 'token_str': 'luz'}] ``` ### For BERT embeddings ```python import torch model = AutoModel.from_pretrained('neuralmind/bert-large-portuguese-cased') input_ids = tokenizer.encode('Tinha uma pedra no meio do caminho.', return_tensors='pt') with torch.no_grad(): outs = model(input_ids) encoded = outs[0][0, 1:-1] # Ignore [CLS] and [SEP] special tokens # encoded.shape: (8, 1024) # tensor([[ 1.1872, 0.5606, -0.2264, ..., 0.0117, -0.1618, -0.2286], # [ 1.3562, 0.1026, 0.1732, ..., -0.3855, -0.0832, -0.1052], # [ 0.2988, 0.2528, 0.4431, ..., 0.2684, -0.5584, 0.6524], # ..., # [ 0.3405, -0.0140, -0.0748, ..., 0.6649, -0.8983, 0.5802], # [ 0.1011, 0.8782, 0.1545, ..., -0.1768, -0.8880, -0.1095], # [ 0.7912, 0.9637, -0.3859, ..., 0.2050, -0.1350, 0.0432]]) ``` ## Citation If you use our work, please cite: ```bibtex @inproceedings{souza2020bertimbau, author = {F{\'a}bio Souza and Rodrigo Nogueira and Roberto Lotufo}, title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese}, booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)}, year = {2020} } ```
google/bert_uncased_L-12_H-768_A-12
cc478efa3482fefe275eb2733363db9713d499ef
2021-05-19T17:27:43.000Z
[ "pytorch", "jax", "bert", "arxiv:1908.08962", "transformers", "license:apache-2.0" ]
null
false
google
null
google/bert_uncased_L-12_H-768_A-12
83,694
2
transformers
250
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
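Any of the checkpoints above can be loaded with the standard BERT classes in `transformers`; a minimal sketch using this repository's checkpoint (the input sentence is arbitrary):

```python
from transformers import AutoTokenizer, AutoModel

# BERT-Base from this release: L=12 layers, H=768 hidden size, A=12 attention heads.
model_name = "google/bert_uncased_L-12_H-768_A-12"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("Compact BERT models are fine-tuned like the original.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```

Swapping `model_name` for any of the smaller checkpoints in the table works the same way; only the hidden size of the output changes.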
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
Helsinki-NLP/opus-mt-ja-en
6282eb0555cd0253dc9fac00c5fafb2825ad04b4
2021-09-10T13:53:12.000Z
[ "pytorch", "marian", "text2text-generation", "ja", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ja-en
83,179
6
transformers
251
--- tags: - translation license: apache-2.0 --- ### opus-mt-ja-en * source languages: ja * target languages: en * OPUS readme: [ja-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ja.en | 41.7 | 0.589 |
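The card lists benchmarks but no usage snippet; translation can be run with the Marian classes in `transformers`. A minimal sketch (the Japanese input sentence is an arbitrary example):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ja-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)  # SentencePiece pre-processing is handled here
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["猫はソファーの上で寝ています。"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```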
sonoisa/sentence-bert-base-ja-mean-tokens
c5db458007569cea1374d4b5766193832c3fc285
2021-12-14T11:43:43.000Z
[ "pytorch", "fill-mask", "ja", "sentence-transformers", "sentence-bert", "feature-extraction", "sentence-similarity", "license:cc-by-sa-4.0" ]
feature-extraction
false
sonoisa
null
sonoisa/sentence-bert-base-ja-mean-tokens
82,730
3
sentence-transformers
252
---
language: ja
license: cc-by-sa-4.0
tags:
- sentence-transformers
- sentence-bert
- feature-extraction
- sentence-similarity
---

This is a Japanese sentence-BERT model (version 1).

Note: a [version 2 model](https://huggingface.co/sonoisa/sentence-bert-base-ja-mean-tokens-v2), whose accuracy is about 1.5 points higher, is also available.

# Explanation (in Japanese)
https://qiita.com/sonoisa/items/1df94d0a98cd4f209051

# Usage

```python
from transformers import BertJapaneseTokenizer, BertModel
import torch


class SentenceBertJapanese:
    def __init__(self, model_name_or_path, device=None):
        self.tokenizer = BertJapaneseTokenizer.from_pretrained(model_name_or_path)
        self.model = BertModel.from_pretrained(model_name_or_path)
        self.model.eval()

        if device is None:
            device = "cuda" if torch.cuda.is_available() else "cpu"
        self.device = torch.device(device)
        self.model.to(device)

    def _mean_pooling(self, model_output, attention_mask):
        token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
        input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
        return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

    @torch.no_grad()
    def encode(self, sentences, batch_size=8):
        all_embeddings = []
        iterator = range(0, len(sentences), batch_size)
        for batch_idx in iterator:
            batch = sentences[batch_idx:batch_idx + batch_size]

            encoded_input = self.tokenizer.batch_encode_plus(batch, padding="longest",
                                                             truncation=True, return_tensors="pt").to(self.device)
            model_output = self.model(**encoded_input)
            sentence_embeddings = self._mean_pooling(model_output, encoded_input["attention_mask"]).to('cpu')

            all_embeddings.extend(sentence_embeddings)

        # return torch.stack(all_embeddings).numpy()
        return torch.stack(all_embeddings)


MODEL_NAME = "sonoisa/sentence-bert-base-ja-mean-tokens"
model = SentenceBertJapanese(MODEL_NAME)

sentences = ["暴走したAI", "暴走した人工知能"]
sentence_embeddings = model.encode(sentences, batch_size=8)

print("Sentence embeddings:", sentence_embeddings)
```
pyannote/embedding
09a3ed256d0fddbf5616fd9fb5db917fcf002708
2022-03-23T09:24:30.000Z
[ "pytorch", "tensorboard", "dataset:voxceleb", "pyannote-audio", "pyannote", "pyannote-audio-model", "audio", "voice", "speech", "speaker", "speaker-recognition", "speaker-verification", "speaker-identification", "speaker-embedding", "license:mit" ]
null
false
pyannote
null
pyannote/embedding
82,716
4
pyannote-audio
253
--- tags: - pyannote - pyannote-audio - pyannote-audio-model - audio - voice - speech - speaker - speaker-recognition - speaker-verification - speaker-identification - speaker-embedding datasets: - voxceleb license: mit inference: false --- # 🎹 Speaker embedding Relies on pyannote.audio 2.0 currently in development: see [installation instructions](https://github.com/pyannote/pyannote-audio/tree/develop#installation). This model is based on the [canonical x-vector TDNN-based architecture](https://ieeexplore.ieee.org/abstract/document/8461375), but with filter banks replaced with [trainable SincNet features](https://ieeexplore.ieee.org/document/8639585). See [`XVectorSincNet`](https://github.com/pyannote/pyannote-audio/blob/3c988c028dc505c64fe776720372f6fe816b585a/pyannote/audio/models/embedding/xvector.py#L104-L169) architecture for implementation detalis. ## Support For commercial enquiries and scientific consulting, please contact [me](mailto:herve@niderb.fr). For [technical questions](https://github.com/pyannote/pyannote-audio/discussions) and [bug reports](https://github.com/pyannote/pyannote-audio/issues), please check [pyannote.audio](https://github.com/pyannote/pyannote-audio) Github repository. ## Basic usage ```python from pyannote.audio import Inference inference = Inference("pyannote/embedding", window="whole") embedding1 = inference("speaker1.wav") embedding2 = inference("speaker2.wav") # `embeddingX` is (1 x D) numpy array extracted from the file as a whole. from scipy.spatial.distance import cdist distance = cdist(embedding1, embedding2, metric="cosine")[0,0] # `distance` is a `float` describing how dissimilar speakers 1 and 2 are. ``` Using cosine distance directly, this model reaches 2.8% equal error rate (EER) on VoxCeleb 1 test set. This is without voice activity detection (VAD) nor probabilistic linear discriminant analysis (PLDA). Expect even better results when adding one of those. ## Advanced usage ### Running on GPU ```python inference = Inference("pyannote/embedding", window="whole", device="cuda") embedding = inference("audio.wav") ``` ### Extract embedding from an excerpt ```python from pyannote.audio import Inference, Segment inference = Inference("pyannote/embedding", window="whole") excerpt = Segment(13.37, 19.81) embedding = inference.crop("audio.wav", excerpt) # `embedding` is (1 x D) numpy array extracted from the file excerpt. ``` ### Extract embeddings using a sliding window ```python from pyannote.audio import Inference inference = Inference("pyannote/embedding", window="sliding", duration=3.0, step=1.0) embeddings = inference("audio.wav") # `embeddings` is a (N x D) pyannote.core.SlidingWindowFeature # `embeddings[i]` is the embedding of the ith position of the # sliding window, i.e. from [i * step, i * step + duration]. ``` ## Citation ```bibtex @inproceedings{Bredin2020, Title = {{pyannote.audio: neural building blocks for speaker diarization}}, Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe}, Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing}, Address = {Barcelona, Spain}, Month = {May}, Year = {2020}, } ``` ```bibtex @inproceedings{Coria2020, author="Coria, Juan M. 
and Bredin, Herv{\'e} and Ghannay, Sahar and Rosset, Sophie", editor="Espinosa-Anke, Luis and Mart{\'i}n-Vide, Carlos and Spasi{\'{c}}, Irena", title="{A Comparison of Metric Learning Loss Functions for End-To-End Speaker Verification}", booktitle="Statistical Language and Speech Processing", year="2020", publisher="Springer International Publishing", pages="137--148", isbn="978-3-030-59430-5" } ```
sentence-transformers/all-distilroberta-v1
57dd5d5be528ba968ef928103d92f95afc487e68
2022-07-11T21:04:19.000Z
[ "pytorch", "rust", "roberta", "fill-mask", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:MS Marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "sentence-transformers", "feature-extraction", "sentence-similarity", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/all-distilroberta-v1
81,479
4
sentence-transformers
254
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en license: apache-2.0 datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - MS Marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers --- # all-distilroberta-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-distilroberta-v1') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-distilroberta-v1') model = AutoModel.from_pretrained('sentence-transformers/all-distilroberta-v1') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-distilroberta-v1) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`distilroberta-base`](https://huggingface.co/distilroberta-base) model and fine-tuned in on a 1B sentence pairs dataset. 
We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

By default, input text longer than 128 word pieces is truncated.

## Training procedure

### Pre-training

We use the pretrained [`distilroberta-base`](https://huggingface.co/distilroberta-base) model. Please refer to its model card for more detailed information about the pre-training procedure.

### Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch. We then apply a cross-entropy loss by comparing with the true pairs (a schematic sketch of this objective is given after the training data table below).

#### Hyper parameters

We trained our model on a TPU v3-8. We trained for 920k steps using a batch size of 512 (64 per TPU core) and a learning rate warm-up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.

#### Training data

We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset given a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence 
Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,124,818,467** |
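As a schematic illustration of the in-batch contrastive objective described in the fine-tuning section above (this is not the actual `train_script.py`; the batch size, embedding dimension, and scaling factor below are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

# Toy batch of N (anchor, positive) sentence embeddings; in training these come
# from the encoder, here they are random stand-ins.
anchors = F.normalize(torch.randn(8, 768, requires_grad=True), dim=1)
positives = F.normalize(torch.randn(8, 768), dim=1)

# Cosine similarity of every anchor against every positive in the batch,
# scaled by an illustrative temperature-like factor.
scores = anchors @ positives.T * 20.0

# The true match for anchor i is positive i, so the target sits on the diagonal.
labels = torch.arange(scores.size(0))
loss = F.cross_entropy(scores, labels)
print(loss.item())
```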
sentence-transformers/distilbert-base-nli-stsb-mean-tokens
888433dbfd0f07dc22d9e038d9acb20e5ca7e0d5
2022-06-15T20:07:20.000Z
[ "pytorch", "tf", "distilbert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/distilbert-base-nli-stsb-mean-tokens
81,248
4
sentence-transformers
255
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- **⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)** # sentence-transformers/distilbert-base-nli-stsb-mean-tokens This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/distilbert-base-nli-stsb-mean-tokens') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/distilbert-base-nli-stsb-mean-tokens') model = AutoModel.from_pretrained('sentence-transformers/distilbert-base-nli-stsb-mean-tokens') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distilbert-base-nli-stsb-mean-tokens) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
vinai/phobert-base
667b55927a1571811539f27c0f374429a1c75759
2022-06-08T04:44:26.000Z
[ "pytorch", "tf", "jax", "roberta", "fill-mask", "arxiv:2003.00744", "transformers", "autotrain_compatible" ]
fill-mask
false
vinai
null
vinai/phobert-base
81,225
6
transformers
256
# <a name="introduction"></a> PhoBERT: Pre-trained language models for Vietnamese Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese ([Pho](https://en.wikipedia.org/wiki/Pho), i.e. "Phở", is a popular food in Vietnam): - Two PhoBERT versions of "base" and "large" are the first public large-scale monolingual language models pre-trained for Vietnamese. PhoBERT pre-training approach is based on [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) which optimizes the [BERT](https://github.com/google-research/bert) pre-training procedure for more robust performance. - PhoBERT outperforms previous monolingual and multilingual approaches, obtaining new state-of-the-art performances on four downstream Vietnamese NLP tasks of Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference. The general architecture and experimental results of PhoBERT can be found in our EMNLP-2020 Findings [paper](https://arxiv.org/abs/2003.00744): @article{phobert, title = {{PhoBERT: Pre-trained language models for Vietnamese}}, author = {Dat Quoc Nguyen and Anh Tuan Nguyen}, journal = {Findings of EMNLP}, year = {2020} } **Please CITE** our paper when PhoBERT is used to help produce published results or is incorporated into other software. For further information or requests, please go to [PhoBERT's homepage](https://github.com/VinAIResearch/PhoBERT)!
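The card above stops short of a usage snippet. Below is a minimal sketch of loading the checkpoint with Hugging Face Transformers; the Vietnamese sentence is a made-up example, and (as the PhoBERT authors require) input text is assumed to be word-segmented beforehand, e.g. with VnCoreNLP's RDRSegmenter.

```python
import torch
from transformers import AutoModel, AutoTokenizer

phobert = AutoModel.from_pretrained("vinai/phobert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")

# PhoBERT expects word-segmented input; underscores join multi-syllable words.
sentence = "Chúng_tôi là những nghiên_cứu_viên ."  # hypothetical, pre-segmented example

input_ids = torch.tensor([tokenizer.encode(sentence)])
with torch.no_grad():
    features = phobert(input_ids)  # contextual embeddings from the encoder
print(features.last_hidden_state.shape)  # (1, sequence_length, 768)
```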
Helsinki-NLP/opus-mt-en-es
c8b63c83f30e46417ce423a585f9b9e20e1b877d
2021-07-13T16:24:56.000Z
[ "pytorch", "jax", "marian", "text2text-generation", "en", "es", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-es
80,022
7
transformers
257
--- language: - en - es tags: - translation license: apache-2.0 --- ### eng-spa * source group: English * target group: Spanish * OPUS readme: [eng-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md) * model: transformer * source language(s): eng * target language(s): spa * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip) * test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt) * test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-engspa.eng.spa | 31.0 | 0.583 | | news-test2008-engspa.eng.spa | 29.7 | 0.564 | | newstest2009-engspa.eng.spa | 30.2 | 0.578 | | newstest2010-engspa.eng.spa | 36.9 | 0.620 | | newstest2011-engspa.eng.spa | 38.2 | 0.619 | | newstest2012-engspa.eng.spa | 39.0 | 0.625 | | newstest2013-engspa.eng.spa | 35.0 | 0.598 | | Tatoeba-test.eng.spa | 54.9 | 0.721 | ### System Info: - hf_name: eng-spa - source_languages: eng - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'es'] - src_constituents: {'eng'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt - src_alpha3: eng - tgt_alpha3: spa - short_pair: en-es - chrF2_score: 0.721 - bleu: 54.9 - brevity_penalty: 0.978 - ref_len: 77311.0 - src_name: English - tgt_name: Spanish - train_date: 2020-08-18 00:00:00 - src_alpha2: en - tgt_alpha2: es - prefer_old: False - long_pair: eng-spa - helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82 - transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9 - port_machine: brutasse - port_time: 2020-08-24-18:20
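The card lists benchmarks only; below is a minimal translation sketch with the Transformers `pipeline` API. The English sentence is just an illustration, and `sentencepiece` must be installed for Marian tokenizers.

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

result = translator("The weather is lovely today.")
print(result[0]["translation_text"])  # a Spanish rendering of the sentence
```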
nreimers/BERT-Tiny_L-2_H-128_A-2
9cb03776b08d300ae73aa6ba4860a760c606f62d
2021-05-28T11:05:21.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
nreimers
null
nreimers/BERT-Tiny_L-2_H-128_A-2
79,688
null
transformers
258
This is the BERT-Tiny model from Google: https://github.com/google-research/bert#bert. A BERT model with 2 layers, a hidden size of 128, and 2 attention heads.
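Since the card gives no usage example, here is a minimal (assumed) feature-extraction sketch; the input sentence is arbitrary:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nreimers/BERT-Tiny_L-2_H-128_A-2")
model = AutoModel.from_pretrained("nreimers/BERT-Tiny_L-2_H-128_A-2")

inputs = tokenizer("A tiny BERT still produces contextual token embeddings.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 128)
```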
cointegrated/rubert-tiny
f191937ca7511ae76b44b06a460f18ab1699c54b
2022-01-28T11:42:37.000Z
[ "pytorch", "bert", "pretraining", "ru", "en", "transformers", "russian", "fill-mask", "embeddings", "masked-lm", "tiny", "feature-extraction", "sentence-similarity", "license:mit" ]
feature-extraction
false
cointegrated
null
cointegrated/rubert-tiny
79,161
9
transformers
259
--- language: ["ru", "en"] tags: - russian - fill-mask - pretraining - embeddings - masked-lm - tiny - feature-extraction - sentence-similarity license: mit widget: - text: "Миниатюрная модель для [MASK] разных задач." --- This is a very small distilled version of the [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) model for Russian and English (45 MB, 12M parameters). There is also an **updated version of this model**, [rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2), with a larger vocabulary and better quality on practically all Russian NLU tasks. This model is useful if you want to fine-tune it for a relatively simple Russian task (e.g. NER or sentiment classification), and you care more about speed and size than about accuracy. It is approximately x10 smaller and faster than a base-sized BERT. Its `[CLS]` embeddings can be used as a sentence representation aligned between Russian and English. It was trained on the [Yandex Translate corpus](https://translate.yandex.ru/corpus), [OPUS-100](https://huggingface.co/datasets/opus100) and [Tatoeba](https://huggingface.co/datasets/tatoeba), using MLM loss (distilled from [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)), translation ranking loss, and `[CLS]` embeddings distilled from [LaBSE](https://huggingface.co/sentence-transformers/LaBSE), [rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence), Laser and USE. There is a more detailed [description in Russian](https://habr.com/ru/post/562064/). Sentence embeddings can be produced as follows: ```python # pip install transformers sentencepiece import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("cointegrated/rubert-tiny") model = AutoModel.from_pretrained("cointegrated/rubert-tiny") # model.cuda() # uncomment it if you have a GPU def embed_bert_cls(text, model, tokenizer): t = tokenizer(text, padding=True, truncation=True, return_tensors='pt') with torch.no_grad(): model_output = model(**{k: v.to(model.device) for k, v in t.items()}) embeddings = model_output.last_hidden_state[:, 0, :] embeddings = torch.nn.functional.normalize(embeddings) return embeddings[0].cpu().numpy() print(embed_bert_cls('привет мир', model, tokenizer).shape) # (312,) ```
hfl/chinese-bert-wwm
ab0aa81da273504efc8540aa4d0bbaa3016a1bb5
2021-05-19T19:07:49.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
hfl
null
hfl/chinese-bert-wwm
78,869
18
transformers
260
--- language: - zh license: "apache-2.0" --- ## Chinese BERT with Whole Word Masking To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. **[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: https://github.com/google-research/bert You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese MacBERT: https://github.com/ymcui/MacBERT - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find the technical report or resources useful, please cite the following technical report in your paper. - Primary: https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ``` - Secondary: https://arxiv.org/abs/1906.08101 ``` @article{chinese-bert-wwm, title={Pre-Training with Whole Word Masking for Chinese BERT}, author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, journal={arXiv preprint arXiv:1906.08101}, year={2019} } ```
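The card above does not include a usage snippet. A minimal fill-mask sketch follows; the Chinese sentence is an invented example, and whole-word masking only affects pre-training, so inference works like any other BERT checkpoint:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="hfl/chinese-bert-wwm")

# Masks the second character of 天气 ("weather") in a toy sentence.
for candidate in fill_mask("今天天[MASK]很好。"):
    print(candidate["token_str"], round(candidate["score"], 4))
```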
facebook/wav2vec2-base
0b5b8e868dd84f03fd87d01f9c4ff0f080fecfe8
2021-12-28T12:44:31.000Z
[ "pytorch", "wav2vec2", "pretraining", "en", "dataset:librispeech_asr", "arxiv:2006.11477", "transformers", "speech", "license:apache-2.0" ]
null
false
facebook
null
facebook/wav2vec2-base
78,091
14
transformers
261
--- language: en datasets: - librispeech_asr tags: - speech license: apache-2.0 --- # Wav2Vec2-Base [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) The base model, pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model. [Paper](https://arxiv.org/abs/2006.11477) Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli **Abstract** We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model.
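Even without a tokenizer, the pretrained encoder can be used to extract speech representations. A minimal sketch (the silent waveform is a stand-in for real 16 kHz audio):

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")

speech = np.zeros(16000, dtype=np.float32)  # one second of silence at 16 kHz (placeholder)
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (1, roughly 49 frames per second of audio, 768)
```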
microsoft/deberta-v3-small
23bfba973812a80178eb6c2c600e85cc461ffc2c
2022-01-13T17:59:25.000Z
[ "pytorch", "tf", "deberta-v2", "en", "arxiv:2006.03654", "arxiv:2111.09543", "transformers", "deberta", "deberta-v3", "license:mit" ]
null
false
microsoft
null
microsoft/deberta-v3-small
78,067
14
transformers
262
--- language: en tags: - deberta - deberta-v3 thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit --- ## DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder. With those two improvements, DeBERTa out perform RoBERTa on a majority of NLU tasks with 80GB training data. In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-Style pre-training with Gradient Disentangled Embedding Sharing. Compared to DeBERTa, our V3 version significantly improves the model performance on downstream tasks. You can find more technique details about the new model from our [paper](https://arxiv.org/abs/2111.09543). Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates. The DeBERTa V3 small model comes with 6 layers and a hidden size of 768. It has **44M** backbone parameters with a vocabulary containing 128K tokens which introduces 98M parameters in the Embedding layer. This model was trained using the 160GB data as DeBERTa V2. #### Fine-tuning on NLU tasks We present the dev results on SQuAD 2.0 and MNLI tasks. | Model |Vocabulary(K)|Backbone #Params(M)| SQuAD 2.0(F1/EM) | MNLI-m/mm(ACC)| |-------------------|----------|-------------------|-----------|----------| | RoBERTa-base |50 |86 | 83.7/80.5 | 87.6/- | | XLNet-base |32 |92 | -/80.2 | 86.8/- | | ELECTRA-base |30 |86 | -/80.5 | 88.8/ | | DeBERTa-base |50 |100 | 86.2/83.1| 88.8/88.5| | DeBERTa-v3-large|128|304 | 91.5/89.0 | 91.8/91.9 | | DeBERTa-v3-base |128|86 | 88.4/85.4 | 90.6/90.7| | **DeBERTa-v3-small** |128|**44** | **82.8/80.4** | **88.3/87.7**| | DeBERTa-v3-small+SiFT|128|22 | -/- | 88.8/88.5| #### Fine-tuning with HF transformers ```bash #!/bin/bash cd transformers/examples/pytorch/text-classification/ pip install datasets export TASK_NAME=mnli output_dir="ds_results" num_gpus=8 batch_size=8 python -m torch.distributed.launch --nproc_per_node=${num_gpus} \ run_glue.py \ --model_name_or_path microsoft/deberta-v3-small \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --evaluation_strategy steps \ --max_seq_length 256 \ --warmup_steps 1500 \ --per_device_train_batch_size ${batch_size} \ --learning_rate 4.5e-5 \ --num_train_epochs 3 \ --output_dir $output_dir \ --overwrite_output_dir \ --logging_steps 1000 \ --logging_dir $output_dir ``` ### Citation If you find DeBERTa useful for your work, please cite the following papers: ``` latex @misc{he2021debertav3, title={DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing}, author={Pengcheng He and Jianfeng Gao and Weizhu Chen}, year={2021}, eprint={2111.09543}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
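For completeness, here is a minimal sketch of instantiating the checkpoint for a classification task before any fine-tuning; the label count and the premise/hypothesis pair are placeholders, and the freshly added classification head outputs meaningless logits until trained:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# DeBERTa-v3 ships a SentencePiece-based tokenizer, so `sentencepiece` must be installed.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-small")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-small", num_labels=3  # e.g. entailment / neutral / contradiction
)

inputs = tokenizer(
    "A soccer game with multiple males playing.",
    "Some men are playing a sport.",
    return_tensors="pt",
)
logits = model(**inputs).logits
print(logits.shape)  # (1, 3)
```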
clue/albert_chinese_tiny
654acaf73c361ad56e4f4b1e2bb0023cbb1872b2
2020-12-11T21:35:55.000Z
[ "pytorch", "albert", "zh", "transformers" ]
null
false
clue
null
clue/albert_chinese_tiny
77,354
5
transformers
263
--- language: zh --- ## albert_chinese_tiny ### Overview **Language model:** albert-tiny **Model size:** 16M **Language:** Chinese **Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020) **Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE) ### Results For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE). ### Usage **NOTE:** Since sentencepiece is not used in the `albert_chinese_tiny` model, you have to use **BertTokenizer** instead of AlbertTokenizer! ```python import torch from transformers import BertTokenizer, AlbertModel tokenizer = BertTokenizer.from_pretrained("clue/albert_chinese_tiny") albert = AlbertModel.from_pretrained("clue/albert_chinese_tiny") ``` ### About the CLUE benchmark CLUE is the organization behind the Chinese Language Understanding Evaluation benchmark: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard. Github: https://github.com/CLUEbenchmark Website: https://www.cluebenchmarks.com/
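As a follow-up to the snippet above, a minimal (hypothetical) example of encoding a sentence and inspecting the hidden states:

```python
import torch
from transformers import AlbertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("clue/albert_chinese_tiny")
albert = AlbertModel.from_pretrained("clue/albert_chinese_tiny")

inputs = tokenizer("今天天气很好", return_tensors="pt")  # made-up example sentence
with torch.no_grad():
    outputs = albert(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```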
facebook/mbart-large-en-ro
2534e987b9ed03c416bbbaefa1a39e3441439bdd
2021-03-10T03:46:29.000Z
[ "pytorch", "tf", "mbart", "en", "ro", "transformers", "translation", "license:mit" ]
translation
false
facebook
null
facebook/mbart-large-en-ro
76,917
null
transformers
264
--- tags: - translation language: - en - ro license: mit --- ### mbart-large-en-ro This is mbart-large-cc25, fine-tuned on wmt_en_ro. It scores BLEU 28.1 without post-processing and BLEU 38 with post-processing (instructions in `romanian_postprocessing.md`). Original Code: https://github.com/pytorch/fairseq/tree/master/examples/mbart Docs: https://huggingface.co/transformers/master/model_doc/mbart.html Fine-tuning Code: examples/seq2seq/finetune.py (as of Aug 20, 2020)
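The card points to the fairseq code but shows no Transformers usage. A minimal sketch follows; the generation pattern (starting decoding from the Romanian language-code token) follows common MBart usage, so double-check it against the current Transformers documentation:

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")

english = "UN Chief Says There Is No Military Solution in Syria"
inputs = tokenizer(english, return_tensors="pt")

# Decoding starts with the Romanian language code so the model emits Romanian.
generated = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```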
Helsinki-NLP/opus-mt-ar-en
9c5efffe6f69dcb65d7156f40dfa27b54be34258
2021-09-09T21:26:15.000Z
[ "pytorch", "rust", "marian", "text2text-generation", "ar", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ar-en
76,095
5
transformers
265
--- tags: - translation license: apache-2.0 --- ### opus-mt-ar-en * source languages: ar * target languages: en * OPUS readme: [ar-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ar-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ar.en | 49.4 | 0.661 |
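A minimal sketch using the lower-level Marian classes rather than a pipeline (the Arabic input is a made-up example):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ar-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["مرحبا بالعالم"], return_tensors="pt", padding=True)  # "Hello, world"
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```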
bert-large-uncased-whole-word-masking
90d3333009848fae4860a5338419d17f70be940c
2021-05-18T16:37:36.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
bert-large-uncased-whole-word-masking
74,654
3
transformers
266
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # BERT large model (uncased) whole word masking Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English. Differently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. This model has the following configuration: - 24-layer - 1024 hidden dimension - 16 attention heads - 336M parameters. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-large-uncased-whole-word-masking') >>> unmasker("Hello I'm a [MASK] model.") [ { 'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.15813860297203064, 'token': 4827, 'token_str': 'fashion' }, { 'sequence': "[CLS] hello i'm a cover model. 
[SEP]", 'score': 0.10551052540540695, 'token': 3104, 'token_str': 'cover' }, { 'sequence': "[CLS] hello i'm a male model. [SEP]", 'score': 0.08340442180633545, 'token': 3287, 'token_str': 'male' }, { 'sequence': "[CLS] hello i'm a super model. [SEP]", 'score': 0.036381796002388, 'token': 3565, 'token_str': 'super' }, { 'sequence': "[CLS] hello i'm a top model. [SEP]", 'score': 0.03609578311443329, 'token': 2327, 'token_str': 'top' } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking') model = BertModel.from_pretrained("bert-large-uncased-whole-word-masking") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking') model = TFBertModel.from_pretrained("bert-large-uncased-whole-word-masking") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-large-uncased-whole-word-masking') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] the man worked as a waiter. [SEP]", "score":0.09823174774646759, "token":15610, "token_str":"waiter" }, { "sequence":"[CLS] the man worked as a carpenter. [SEP]", "score":0.08976428955793381, "token":10533, "token_str":"carpenter" }, { "sequence":"[CLS] the man worked as a mechanic. [SEP]", "score":0.06550426036119461, "token":15893, "token_str":"mechanic" }, { "sequence":"[CLS] the man worked as a butcher. [SEP]", "score":0.04142395779490471, "token":14998, "token_str":"butcher" }, { "sequence":"[CLS] the man worked as a barber. [SEP]", "score":0.03680137172341347, "token":13362, "token_str":"barber" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] the woman worked as a waitress. [SEP]", "score":0.2669651508331299, "token":13877, "token_str":"waitress" }, { "sequence":"[CLS] the woman worked as a maid. [SEP]", "score":0.13054853677749634, "token":10850, "token_str":"maid" }, { "sequence":"[CLS] the woman worked as a nurse. [SEP]", "score":0.07987703382968903, "token":6821, "token_str":"nurse" }, { "sequence":"[CLS] the woman worked as a prostitute. [SEP]", "score":0.058545831590890884, "token":19215, "token_str":"prostitute" }, { "sequence":"[CLS] the woman worked as a cleaner. [SEP]", "score":0.03834161534905434, "token":20133, "token_str":"cleaner" } ] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. 
The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy ---------------------------------------- | :-------------: | :----------------: BERT-Large, Uncased (Whole Word Masking) | 92.8/86.7 | 87.07 ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
google/reformer-crime-and-punishment
0e6c3decb8211d49bf881013425dc8b0448b3f5a
2021-02-01T17:53:38.000Z
[ "pytorch", "rust", "reformer", "text-generation", "transformers" ]
text-generation
false
google
null
google/reformer-crime-and-punishment
74,293
null
transformers
267
## Reformer Model trained on "Crime and Punishment" Crime and Punishment is a novel written by Fyodor Dostoevsky that has been translated into English. The training data was taken from `gs://trax-ml/reformer/crime-and-punishment-2554.txt` and contains roughly 0.5M tokens. The ReformerLM model was trained in Flax using the Colab notebook provided by the authors: https://colab.research.google.com/github/google/trax/blob/master/trax/models/reformer/text_generation.ipynb and the weights were converted to Hugging Face's PyTorch ReformerLM model `ReformerModelWithLMHead`. The model is a language model that operates on small sub-word units. Text can be generated as follows: ```python from transformers import ReformerModelWithLMHead, ReformerTokenizer model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment") tok = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment") tok.decode(model.generate(tok.encode("A few months later", return_tensors="pt"), do_sample=True, temperature=0.7, max_length=100)[0]) # gives:'A few months later on was more than anything in the flat. # “I have already.” “That’s not my notion that he had forgotten him. # What does that matter? And why do you mean? It’s only another fellow,” he said as he went out, as though he want' ```
vennify/t5-base-grammar-correction
9e4a09d21dca1072a69302df9261289d03c3ed78
2022-01-14T16:35:23.000Z
[ "pytorch", "t5", "text2text-generation", "en", "dataset:jfleg", "arxiv:1702.04066", "transformers", "grammar", "license:cc-by-nc-sa-4.0", "autotrain_compatible" ]
text2text-generation
false
vennify
null
vennify/t5-base-grammar-correction
74,162
16
transformers
268
--- language: en tags: - grammar - text2text-generation license: cc-by-nc-sa-4.0 datasets: - jfleg --- # T5 Grammar Correction This model generates a revised version of inputted text with the goal of containing fewer grammatical errors. It was trained with [Happy Transformer](https://github.com/EricFillion/happy-transformer) using a dataset called [JFLEG](https://arxiv.org/abs/1702.04066). Here's a [full article](https://www.vennify.ai/fine-tune-grammar-correction/) on how to train a similar model. ## Usage `pip install happytransformer ` ```python from happytransformer import HappyTextToText, TTSettings happy_tt = HappyTextToText("T5", "vennify/t5-base-grammar-correction") args = TTSettings(num_beams=5, min_length=1) # Add the prefix "grammar: " before each input result = happy_tt.generate_text("grammar: This sentences has has bads grammar.", args=args) print(result.text) # This sentence has bad grammar. ```
deepset/roberta-large-squad2
faa13e991d03fba7f6eb6a75356c4c0806e2588a
2022-07-25T10:34:40.000Z
[ "pytorch", "jax", "roberta", "question-answering", "en", "dataset:squad_v2", "transformers", "license:cc-by-4.0", "autotrain_compatible" ]
question-answering
false
deepset
null
deepset/roberta-large-squad2
74,021
11
transformers
269
--- language: en datasets: - squad_v2 license: cc-by-4.0 ---
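The card above carries only metadata, so a minimal extractive-QA sketch may help; the question/context pair is illustrative only:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="deepset/roberta-large-squad2",
    tokenizer="deepset/roberta-large-squad2",
)

result = qa(
    question="What does SQuAD 2.0 add on top of SQuAD 1.1?",
    context=(
        "SQuAD 2.0 combines the questions in SQuAD 1.1 with over 50,000 unanswerable "
        "questions written adversarially by crowdworkers to look similar to answerable ones."
    ),
)
print(result["answer"], result["score"])
```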
Helsinki-NLP/opus-mt-ROMANCE-en
dd27a5df7623594b19ab50244084e2beddc2181c
2021-09-09T21:25:31.000Z
[ "pytorch", "rust", "marian", "text2text-generation", "roa", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ROMANCE-en
73,229
null
transformers
270
--- tags: - translation license: apache-2.0 --- ### opus-mt-ROMANCE-en * source languages: fr,fr_BE,fr_CA,fr_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es_AR,es_CL,es_CO,es_CR,es_DO,es_EC,es_ES,es_GT,es_HN,es_MX,es_NI,es_PA,es_PE,es_PR,es_SV,es_UY,es_VE,pt,pt_br,pt_BR,pt_PT,gl,lad,an,mwl,it,it_IT,co,nap,scn,vec,sc,ro,la * target languages: en * OPUS readme: [fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/README.md) * dataset: opus * model: transformer * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-04-01.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.zip) * test set translations: [opus-2020-04-01.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.test.txt) * test set scores: [opus-2020-04-01.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.fr.en | 62.2 | 0.750 |
monologg/koelectra-small-v2-distilled-korquad-384
70c28f5b9e6b2bd05bb609f6be1f9f8ff918cd6f
2020-06-04T17:39:49.000Z
[ "pytorch", "tflite", "electra", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
monologg
null
monologg/koelectra-small-v2-distilled-korquad-384
72,726
null
transformers
271
Entry not found
Helsinki-NLP/opus-mt-en-zh
93db7712e4698309ac17a80605adbf54dea5c8ee
2021-09-09T21:40:41.000Z
[ "pytorch", "tf", "jax", "rust", "marian", "text2text-generation", "en", "zh", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-zh
71,580
19
transformers
272
--- language: - en - zh tags: - translation license: apache-2.0 --- ### eng-zho * source group: English * target group: Chinese * OPUS readme: [eng-zho](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md) * model: transformer * source language(s): eng * target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip) * test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt) * test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.zho | 31.4 | 0.268 | ### System Info: - hf_name: eng-zho - source_languages: eng - target_languages: zho - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'zh'] - src_constituents: {'eng'} - tgt_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt - src_alpha3: eng - tgt_alpha3: zho - short_pair: en-zh - chrF2_score: 0.268 - bleu: 31.4 - brevity_penalty: 0.8959999999999999 - ref_len: 110468.0 - src_name: English - tgt_name: Chinese - train_date: 2020-07-17 - src_alpha2: en - tgt_alpha2: zh - prefer_old: False - long_pair: eng-zho - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
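Because this is a multi-target model, the card notes that a sentence-initial `>>id<<` token is required. A minimal sketch (the English sentence is arbitrary; valid target IDs are the ones listed under `tgt_constituents` above, e.g. `cmn_Hans` for Simplified Chinese):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-zh"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = [">>cmn_Hans<< Machine translation makes the world smaller."]  # note the target-language token
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```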
monologg/koelectra-small-v3-discriminator
7488f8db0f208beff4a1f3f9bb3ed04650a89ed7
2020-12-26T16:24:33.000Z
[ "pytorch", "electra", "pretraining", "transformers" ]
null
false
monologg
null
monologg/koelectra-small-v3-discriminator
70,824
null
transformers
273
Entry not found
deepset/xlm-roberta-base-squad2
8962e174ec8ad665bbf11edc2130487d2f7ea22a
2022-07-25T07:17:34.000Z
[ "pytorch", "xlm-roberta", "question-answering", "dataset:squad_v2", "transformers", "license:cc-by-4.0", "model-index", "autotrain_compatible" ]
question-answering
false
deepset
null
deepset/xlm-roberta-base-squad2
70,545
10
transformers
274
--- datasets: - squad_v2 license: cc-by-4.0 model-index: - name: deepset/xlm-roberta-base-squad2 results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - name: Exact Match type: exact_match value: 74.0354 verified: true - name: F1 type: f1 value: 77.1833 verified: true --- # Multilingual XLM-RoBERTa base for QA on various languages ## Overview **Language model:** xlm-roberta-base **Language:** Multilingual **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 dev set - German MLQA - German XQuAD **Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) **Infrastructure**: 4x Tesla v100 ## Hyperparameters ``` batch_size = 22*4 n_epochs = 2 max_seq_len=256, doc_stride=128, learning_rate=2e-5, ``` Corresponding experiment logs in mlflow: [link](https://public-mlflow.deepset.ai/#/experiments/2/runs/b25ec75e07614accb3f1ce03d43dbe08) ## Performance Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/). ``` "exact": 73.91560683904657 "f1": 77.14103746689592 ``` Evaluated on German MLQA: test-context-de-question-de.json "exact": 33.67279167589108 "f1": 44.34437105434842 "total": 4517 Evaluated on German XQuAD: xquad.de.json "exact": 48.739495798319325 "f1": 62.552615701071495 "total": 1190 ## Usage ### In Transformers ```python from transformers.pipelines import pipeline from transformers.modeling_auto import AutoModelForQuestionAnswering from transformers.tokenization_auto import AutoTokenizer model_name = "deepset/xlm-roberta-base-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ### In FARM ```python from farm.modeling.adaptive_model import AdaptiveModel from farm.modeling.tokenization import Tokenizer from farm.infer import Inferencer model_name = "deepset/xlm-roberta-base-squad2" # a) Get predictions nlp = Inferencer.load(model_name, task_type="question_answering") QA_input = [{"questions": ["Why is model conversion important?"], "text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}] res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True) # b) Load model & tokenizer model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering") tokenizer = Tokenizer.load(model_name) ``` ### In haystack For doing QA at scale (i.e. 
many docs instead of a single paragraph), you can also load the model in [haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2") # or reader = TransformersReader(model="deepset/xlm-roberta-base-squad2", tokenizer="deepset/xlm-roberta-base-squad2") ``` ## Authors Branden Chan: `branden.chan [at] deepset.ai` Timo Möller: `timo.moeller [at] deepset.ai` Malte Pietsch: `malte.pietsch [at] deepset.ai` Tanay Soni: `tanay.soni [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
roberta-large-mnli
0dcbcf20673c006ac2d1e324954491b96f0c0015
2022-07-22T08:02:16.000Z
[ "pytorch", "tf", "jax", "roberta", "text-classification", "en", "dataset:multi_nli", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:1806.02847", "arxiv:1804.07461", "arxiv:1704.05426", "arxiv:1508.05326", "arxiv:1809.05053", "arxiv:1910.09700", "transformers", "autogenerated-modelcard", "license:mit" ]
text-classification
false
null
null
roberta-large-mnli
69,490
25
transformers
275
--- language: - en license: mit tags: - autogenerated-modelcard datasets: - multi_nli - wikipedia - bookcorpus --- # roberta-large-mnli ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation-results) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-author) ## Model Details **Model Description:** roberta-large-mnli is the [RoBERTa large model](https://huggingface.co/roberta-large) fine-tuned on the [Multi-Genre Natural Language Inference (MNLI)](https://huggingface.co/datasets/multi_nli) corpus. The model is a pretrained model on English language text using a masked language modeling (MLM) objective. - **Developed by:** See [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta) for model developers - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** MIT - **Parent Model:** This model is a fine-tuned version of the RoBERTa large model. Users should see the [RoBERTa large model card](https://huggingface.co/roberta-large) for relevant information. - **Resources for more information:** - [Research Paper](https://arxiv.org/abs/1907.11692) - [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta) ## How to Get Started with the Model Use the code below to get started with the model. The model can be loaded with the zero-shot-classification pipeline like so: ```python from transformers import pipeline classifier = pipeline('zero-shot-classification', model='roberta-large-mnli') ``` You can then use this pipeline to classify sequences into any of the class names you specify. For example: ```python sequence_to_classify = "one day I will see the world" candidate_labels = ['travel', 'cooking', 'dancing'] classifier(sequence_to_classify, candidate_labels) ``` ## Uses #### Direct Use This fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification (see the [GitHub repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta) for examples) and zero-shot sequence classification. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The [RoBERTa large model card](https://huggingface.co/roberta-large) notes that: "The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral." 
Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python sequence_to_classify = "The CEO had a strong handshake." candidate_labels = ['male', 'female'] hypothesis_template = "This text speaks about a {} profession." classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template) ``` Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. ## Training #### Training Data This model was fine-tuned on the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) corpus. Also see the [MNLI data card](https://huggingface.co/datasets/multi_nli) for more information. As described in the [RoBERTa large model card](https://huggingface.co/roberta-large): > The RoBERTa model was pretrained on the reunion of five datasets: > > - [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books; > - [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers) ; > - [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 millions English news articles crawled between September 2016 and February 2019. > - [OpenWebText](https://github.com/jcpeterson/openwebtext), an opensource recreation of the WebText dataset used to train GPT-2, > - [Stories](https://arxiv.org/abs/1806.02847), a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas. > > Together theses datasets weight 160GB of text. Also see the [bookcorpus data card](https://huggingface.co/datasets/bookcorpus) and the [wikipedia data card](https://huggingface.co/datasets/wikipedia) for additional information. #### Training Procedure ##### Preprocessing As described in the [RoBERTa large model card](https://huggingface.co/roberta-large): > The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of > the model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked > with `<s>` and the end of one by `</s>` > > The details of the masking procedure for each sentence are the following: > - 15% of the tokens are masked. > - In 80% of the cases, the masked tokens are replaced by `<mask>`. > - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. > - In the 10% remaining cases, the masked tokens are left as is. > > Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). ##### Pretraining Also as described in the [RoBERTa large model card](https://huggingface.co/roberta-large): > The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The > optimizer used is Adam with a learning rate of 4e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and > \\(\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning > rate after. ## Evaluation The following evaluation information is extracted from the associated [GitHub repo for RoBERTa](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta). 
#### Testing Data, Factors and Metrics The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics: - **Dataset:** Part of [GLUE (Wang et al., 2019)](https://arxiv.org/pdf/1804.07461.pdf), the General Language Understanding Evaluation benchmark, a collection of 9 datasets for evaluating natural language understanding systems. Specifically, the model was evaluated on the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) corpus. See the [GLUE data card](https://huggingface.co/datasets/glue) or [Wang et al. (2019)](https://arxiv.org/pdf/1804.07461.pdf) for further information. - **Tasks:** NLI. [Wang et al. (2019)](https://arxiv.org/pdf/1804.07461.pdf) describe the inference task for MNLI as: > The Multi-Genre Natural Language Inference Corpus [(Williams et al., 2018)](https://arxiv.org/abs/1704.05426) is a crowd-sourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. We use the standard test set, for which we obtained private labels from the authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. We also use and recommend the SNLI corpus [(Bowman et al., 2015)](https://arxiv.org/abs/1508.05326) as 550k examples of auxiliary training data. - **Metrics:** Accuracy - **Dataset:** [XNLI (Conneau et al., 2018)](https://arxiv.org/pdf/1809.05053.pdf), the extension of the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) corpus to 15 languages: English, French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi, Swahili and Urdu. See the [XNLI data card](https://huggingface.co/datasets/xnli) or [Conneau et al. (2018)](https://arxiv.org/pdf/1809.05053.pdf) for further information. - **Tasks:** Translate-test (e.g., the model is used to translate input sentences in other languages to the training language) - **Metrics:** Accuracy #### Results GLUE test results (dev set, single model, single-task fine-tuning): 90.2 on MNLI XNLI test results: | Task | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur | |:----:|:--:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | |91.3|82.91|84.27|81.24|81.74|83.13|78.28|76.79|76.64|74.17|74.05| 77.5| 70.9|66.65|66.81| ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/pdf/1907.11692.pdf). - **Hardware Type:** 1024 V100 GPUs - **Hours used:** 24 hours (one day) - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://arxiv.org/pdf/1907.11692.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details. 
## Citation Information ```bibtex @article{liu2019roberta, title = {RoBERTa: A Robustly Optimized BERT Pretraining Approach}, author = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and Luke Zettlemoyer and Veselin Stoyanov}, journal={arXiv preprint arXiv:1907.11692}, year = {2019}, } ```
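For plain NLI inference (as opposed to the zero-shot pipeline example shown earlier), a minimal sketch is given below. It assumes the checkpoint id `roberta-large-mnli`; the premise/hypothesis pair is only an illustration, and the label names are read from the checkpoint's own `id2label` mapping rather than assumed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

# Illustrative premise/hypothesis pair
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1)[0]
# Use the checkpoint's own label mapping instead of hard-coding a label order
for idx, label in model.config.id2label.items():
    print(f"{label}: {probs[idx]:.3f}")
```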
google/t5-v1_1-xxl
6e80045f023a868fdb58e0b697d1ace5fe4880be
2020-11-19T19:55:45.000Z
[ "pytorch", "tf", "t5", "text2text-generation", "en", "dataset:c4", "arxiv:2002.05202", "arxiv:1910.10683", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/t5-v1_1-xxl
69,217
2
transformers
276
--- language: en datasets: - c4 license: apache-2.0 --- [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 ## Version 1.1 [T5 Version 1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511) includes the following improvements compared to the original T5 model- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202). - Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning. - Pre-trained on C4 only without mixing in the downstream tasks. - no parameter sharing between embedding and classifier layer - "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`. **Note**: T5 Version 1.1 was only pre-trained on C4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [C4](https://huggingface.co/datasets/c4) Other Community Checkpoints: [here](https://huggingface.co/models?search=t5-v1_1) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Abstract Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
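Since the card ships no usage snippet and the checkpoint requires fine-tuning, a minimal loading sketch follows. The task prefix and the toy input/target pair are placeholders, and the same code applies to the smaller v1.1 checkpoints, which are far easier to fit in memory than `xxl`.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-xxl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-v1_1-xxl")

# The checkpoint is pre-trained only, so it still needs supervised fine-tuning.
# Toy example of computing the seq2seq loss for one (input, target) pair:
inputs = tokenizer("summarize: studies have shown that owning a dog is good for you", return_tensors="pt")
labels = tokenizer("owning a dog is good for you", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss
print(loss.item())
```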
hfl/chinese-roberta-wwm-ext-large
a25cc9e05974bd9687e528edd516f2cfdb3f5db9
2022-03-01T09:15:16.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
hfl
null
hfl/chinese-roberta-wwm-ext-large
68,277
18
transformers
277
---
language:
- zh
tags:
- bert
license: "apache-2.0"
---

# Please use 'Bert' related functions to load this model!

## Chinese BERT with Whole Word Masking
For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.

**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu

This repository is developed based on: https://github.com/google-research/bert

You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation
If you find the technical report or resource useful, please cite the following technical report in your paper.
- Primary: https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
    title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
    author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
    pages = "657--668",
}
```

- Secondary: https://arxiv.org/abs/1906.08101

```
@article{chinese-bert-wwm,
  title={Pre-Training with Whole Word Masking for Chinese BERT},
  author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
  journal={arXiv preprint arXiv:1906.08101},
  year={2019}
}
```
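A minimal loading sketch, following the note at the top of the card that `Bert`-related classes must be used; the example sentence is only an illustration.

```python
from transformers import BertTokenizer, BertModel  # Bert*, not Roberta*, despite the model name

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
model = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext-large")

inputs = tokenizer("使用整词掩码技术的中文预训练模型。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```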
prajjwal1/bert-small
0ec5f86f27c1a77d704439db5e01c307ea11b9d4
2021-10-27T18:31:52.000Z
[ "pytorch", "en", "arxiv:1908.08962", "arxiv:2110.01518", "transformers", "BERT", "MNLI", "NLI", "transformer", "pre-training", "license:mit" ]
null
false
prajjwal1
null
prajjwal1/bert-small
67,809
5
transformers
278
---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---

The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).

This is one of the smaller pre-trained BERT variants, together with [bert-tiny](https://huggingface.co/prajjwal1/bert-tiny), [bert-mini](https://huggingface.co/prajjwal1/bert-mini) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are supposed to be trained on a downstream task.

If you use the model, please consider citing both papers:

```
@misc{bhargava2021generalization,
      title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
      author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
      year={2021},
      eprint={2110.01518},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@article{DBLP:journals/corr/abs-1908-08962,
  author    = {Iulia Turc and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova},
  title     = {Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation},
  journal   = {CoRR},
  volume    = {abs/1908.08962},
  year      = {2019},
  url       = {http://arxiv.org/abs/1908.08962},
  eprinttype = {arXiv},
  eprint    = {1908.08962},
  timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

Config of this model:
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)

Other models to check out:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)

Original Implementation and more info can be found in [this GitHub repository](https://github.com/prajjwal1/generalize_lm_nli).

Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
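Since the checkpoint is meant to be fine-tuned downstream, a minimal setup sketch is given below. The 2-label classification head and the placeholder sentence are assumptions; if the repository does not bundle its own tokenizer files, the standard `bert-base-uncased` vocabulary is the usual fallback.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-small")
# A fresh classification head is added on top of the pre-trained encoder;
# num_labels depends on the downstream task (2 is just a placeholder).
model = AutoModelForSequenceClassification.from_pretrained("prajjwal1/bert-small", num_labels=2)

batch = tokenizer(["a placeholder training sentence"], return_tensors="pt")
outputs = model(**batch)
print(outputs.logits.shape)  # (1, num_labels)
```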
sshleifer/tiny-marian-en-de
7a6b5b34785930445aeb20a9a34543b72de6e267
2020-06-25T02:27:15.000Z
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
sshleifer
null
sshleifer/tiny-marian-en-de
67,303
null
transformers
279
Entry not found
microsoft/codebert-base-mlm
5c927614b8750b556dcf569cf8a211fbe20f688a
2021-05-20T17:47:48.000Z
[ "pytorch", "tf", "jax", "roberta", "fill-mask", "arxiv:2002.08155", "transformers", "autotrain_compatible" ]
fill-mask
false
microsoft
null
microsoft/codebert-base-mlm
66,627
8
transformers
280
## CodeBERT-base-mlm Pretrained weights for [CodeBERT: A Pre-Trained Model for Programming and Natural Languages](https://arxiv.org/abs/2002.08155). ### Training Data The model is trained on the code corpus of [CodeSearchNet](https://github.com/github/CodeSearchNet) ### Training Objective This model is initialized with Roberta-base and trained with a simple MLM (Masked Language Model) objective. ### Usage ```python from transformers import RobertaTokenizer, RobertaForMaskedLM, pipeline model = RobertaForMaskedLM.from_pretrained('microsoft/codebert-base-mlm') tokenizer = RobertaTokenizer.from_pretrained('microsoft/codebert-base-mlm') code_example = "if (x is not None) <mask> (x>1)" fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer) outputs = fill_mask(code_example) print(outputs) ``` Expected results: ``` {'sequence': '<s> if (x is not None) and (x>1)</s>', 'score': 0.6049249172210693, 'token': 8} {'sequence': '<s> if (x is not None) or (x>1)</s>', 'score': 0.30680200457572937, 'token': 50} {'sequence': '<s> if (x is not None) if (x>1)</s>', 'score': 0.02133703976869583, 'token': 114} {'sequence': '<s> if (x is not None) then (x>1)</s>', 'score': 0.018607674166560173, 'token': 172} {'sequence': '<s> if (x is not None) AND (x>1)</s>', 'score': 0.007619690150022507, 'token': 4248} ``` ### Reference 1. [Bimodal CodeBERT trained with MLM+RTD objective](https://huggingface.co/microsoft/codebert-base) (suitable for code search and document generation) 2. 🤗 [Hugging Face's CodeBERTa](https://huggingface.co/huggingface/CodeBERTa-small-v1) (small size, 6 layers) ### Citation ```bibtex @misc{feng2020codebert, title={CodeBERT: A Pre-Trained Model for Programming and Natural Languages}, author={Zhangyin Feng and Daya Guo and Duyu Tang and Nan Duan and Xiaocheng Feng and Ming Gong and Linjun Shou and Bing Qin and Ting Liu and Daxin Jiang and Ming Zhou}, year={2020}, eprint={2002.08155}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
nyust-eb210/braslab-bert-drcd-384
abb9294fe9c0605d2f498a3228bfc6a30e8d2fbb
2021-05-31T14:47:20.000Z
[ "pytorch", "tf", "jax", "bert", "question-answering", "zh-tw", "dataset:DRCD", "transformers", "autotrain_compatible" ]
question-answering
false
nyust-eb210
null
nyust-eb210/braslab-bert-drcd-384
65,856
null
transformers
281
---
language: zh-tw
datasets: DRCD
tasks: Question Answering
---

# BERT DRCD 384

This model is a fine-tuned checkpoint of [bert-base-chinese](https://huggingface.co/bert-base-chinese), fine-tuned on the DRCD dataset. It reaches an F1 score of 86 and an EM score of 83.

Training arguments:

- length: 384
- stride: 128
- learning_rate: 3e-5
- batch_size: 10
- epoch: 3

[Colab notebook with details](https://colab.research.google.com/drive/1kZv7ZRmvUdCKEhQg8MBrKljGWvV2X3CP?usp=sharing)

## Deployment

Deploy the [BERT-DRCD-QuestionAnswering](https://github.com/pleomax0730/BERT-DRCD-QuestionAnswering) model using `FastAPI`, containerized with `Docker`.

## Usage

### In Transformers

```python
import torch
from transformers import BertTokenizerFast, BertForQuestionAnswering

text = "鴻海科技集團是由臺灣企業家郭台銘創辦的跨國企業,總部位於臺灣新北市土城區,主要生產地則在中國大陸,以富士康做為商標名稱。其專注於電子產品的代工服務,研發生產精密電氣元件、機殼、準系統、系統組裝、光通訊元件、液晶顯示件等3C產品上、下游產品及服務。"
query = "鴻海集團總部位於哪裡?"

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

tokenizer = BertTokenizerFast.from_pretrained("nyust-eb210/braslab-bert-drcd-384")
model = BertForQuestionAnswering.from_pretrained("nyust-eb210/braslab-bert-drcd-384").to(device)

encoded_input = tokenizer(text, query, return_tensors="pt").to(device)
qa_outputs = model(**encoded_input)

# Pick the most probable start/end token positions and decode the answer span
start = torch.argmax(qa_outputs.start_logits).item()
end = torch.argmax(qa_outputs.end_logits).item()
answer = encoded_input.input_ids.tolist()[0][start : end + 1]
answer = "".join(tokenizer.decode(answer).split())

# Average the best start/end probabilities as a rough confidence score
start_prob = torch.max(torch.nn.Softmax(dim=-1)(qa_outputs.start_logits)).item()
end_prob = torch.max(torch.nn.Softmax(dim=-1)(qa_outputs.end_logits)).item()
confidence = (start_prob + end_prob) / 2

print(answer, confidence)  # 臺灣新北市土城區, 0.92
```
microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
eaa409b6b7c9380a5f2ba7a59aa97712ff30f386
2021-09-22T20:09:56.000Z
[ "pytorch", "jax", "bert", "fill-mask", "en", "arxiv:2007.15779", "transformers", "exbert", "license:mit", "autotrain_compatible" ]
fill-mask
false
microsoft
null
microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
65,440
32
transformers
282
--- language: en tags: - exbert license: mit widget: - text: "[MASK] is a tumor suppressor gene." --- ## PubMedBERT (abstracts + full text) Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. [Recent work](https://arxiv.org/abs/2007.15779) shows that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models. PubMedBERT is pretrained from scratch using _abstracts_ from [PubMed](https://pubmed.ncbi.nlm.nih.gov/) and _full-text_ articles from [PubMedCentral](https://www.ncbi.nlm.nih.gov/pmc/). This model achieves state-of-the-art performance on many biomedical NLP tasks, and currently holds the top score on the [Biomedical Language Understanding and Reasoning Benchmark](https://aka.ms/BLURB). ## Citation If you find PubMedBERT useful in your research, please cite the following paper: ```latex @misc{pubmedbert, author = {Yu Gu and Robert Tinn and Hao Cheng and Michael Lucas and Naoto Usuyama and Xiaodong Liu and Tristan Naumann and Jianfeng Gao and Hoifung Poon}, title = {Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing}, year = {2020}, eprint = {arXiv:2007.15779}, } ``` <a href="https://huggingface.co/exbert/?model=microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext&modelKind=bidirectional&sentence=Gefitinib%20is%20an%20EGFR%20tyrosine%20kinase%20inhibitor,%20which%20is%20often%20used%20for%20breast%20cancer%20and%20NSCLC%20treatment.&layer=3&heads=..0,1,2,3,4,5,6,7,8,9,10,11&threshold=0.7&tokenInd=17&tokenSide=right&maskInds=..&hideClsSep=true"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
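As a usage sketch, the checkpoint can be queried with the standard fill-mask pipeline; the example sentence below is the one used in the card's widget.

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext",
)
print(fill_mask("[MASK] is a tumor suppressor gene."))
```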
sshleifer/tiny-distilroberta-base
d305c58110158c865cb6746c62d4511d4148a934
2021-10-22T16:10:44.000Z
[ "pytorch", "tf", "jax", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
sshleifer
null
sshleifer/tiny-distilroberta-base
64,865
2
transformers
283
Entry not found
microsoft/DialoGPT-small
f9c829d0285e7addb0667aeb6e33956916ec6cd0
2021-05-23T09:14:00.000Z
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "arxiv:1911.00536", "transformers", "conversational", "license:mit" ]
conversational
false
microsoft
null
microsoft/DialoGPT-small
64,443
8
transformers
284
--- thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png tags: - conversational license: mit --- ## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT) DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multiturn conversations. The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test. The model is trained on 147M multi-turn dialogue from Reddit discussion thread. * Multi-turn generation examples from an interactive environment: |Role | Response | |---------|--------| |User | Does money buy happiness? | | Bot | Depends how much money you spend on it .| |User | What is the best way to buy happiness ? | | Bot | You just have to be a millionaire by your early 20s, then you can be happy . | |User |This is so difficult ! | | Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money | Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT) ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536) ### How to use Now we are ready to try out how the model works as a chatting partner! ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small") model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small") # Let's chat for 5 lines for step in range(5): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generated a response while limiting the total chat history to 1000 tokens, chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id) # pretty print last ouput tokens from bot print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
nlpaueb/bert-base-uncased-contracts
f918d2e0cf491ba2c5fcf9f82d5e9603b8c5f3ea
2022-04-28T14:43:56.000Z
[ "pytorch", "tf", "jax", "bert", "en", "transformers", "legal", "license:cc-by-sa-4.0", "fill-mask" ]
fill-mask
false
nlpaueb
null
nlpaueb/bert-base-uncased-contracts
64,347
5
transformers
285
--- language: en pipeline_tag: fill-mask license: cc-by-sa-4.0 thumbnail: https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png tags: - legal widget: - text: "This [MASK] Agreement is between General Motors and John Murray." --- # LEGAL-BERT: The Muppets straight out of Law School <img align="left" src="https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png" width="100"/> LEGAL-BERT is a family of BERT models for the legal domain, intended to assist legal NLP research, computational law, and legal technology applications. To pre-train the different variations of LEGAL-BERT, we collected 12 GB of diverse English legal text from several fields (e.g., legislation, court cases, contracts) scraped from publicly available resources. Sub-domain variants (CONTRACTS-, EURLEX-, ECHR-) and/or general LEGAL-BERT perform better than using BERT out of the box for domain-specific tasks.<br> This is the sub-domain variant pre-trained on US contracts. <br/><br/> --- I. Chalkidis, M. Fergadiotis, P. Malakasiotis, N. Aletras and I. Androutsopoulos. "LEGAL-BERT: The Muppets straight out of Law School". In Findings of Empirical Methods in Natural Language Processing (EMNLP 2020) (Short Papers), to be held online, 2020. (https://aclanthology.org/2020.findings-emnlp.261) --- ## Pre-training corpora The pre-training corpora of LEGAL-BERT include: * 116,062 documents of EU legislation, publicly available from EURLEX (http://eur-lex.europa.eu), the repository of EU Law running under the EU Publication Office. * 61,826 documents of UK legislation, publicly available from the UK legislation portal (http://www.legislation.gov.uk). * 19,867 cases from the European Court of Justice (ECJ), also available from EURLEX. * 12,554 cases from HUDOC, the repository of the European Court of Human Rights (ECHR) (http://hudoc.echr.coe.int/eng). * 164,141 cases from various courts across the USA, hosted in the Case Law Access Project portal (https://case.law). * 76,366 US contracts from EDGAR, the database of US Securities and Exchange Commission (SECOM) (https://www.sec.gov/edgar.shtml). ## Pre-training details * We trained BERT using the official code provided in Google BERT's GitHub repository (https://github.com/google-research/bert). * We released a model similar to the English BERT-BASE model (12-layer, 768-hidden, 12-heads, 110M parameters). * We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4. * We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us! ## Models list | Model name | Model Path | Training corpora | | ------------------- | ------------------------------------ | ------------------- | | CONTRACTS-BERT-BASE | `nlpaueb/bert-base-uncased-contracts` | US contracts | | EURLEX-BERT-BASE | `nlpaueb/bert-base-uncased-eurlex` | EU legislation | | ECHR-BERT-BASE | `nlpaueb/bert-base-uncased-echr` | ECHR cases | | LEGAL-BERT-BASE * | `nlpaueb/legal-bert-base-uncased` | All | | LEGAL-BERT-SMALL | `nlpaueb/legal-bert-small-uncased` | All | \* LEGAL-BERT-BASE is the model referred to as LEGAL-BERT-SC in Chalkidis et al. 
(2020); a model trained from scratch in the legal corpora mentioned below using a newly created vocabulary by a sentence-piece tokenizer trained on the very same corpora. \*\* As many of you expressed interest in the LEGAL-BERT-FP models (those relying on the original BERT-BASE checkpoint), they have been released in Archive.org (https://archive.org/details/legal_bert_fp), as these models are secondary and possibly only interesting for those who aim to dig deeper in the open questions of Chalkidis et al. (2020). ## Load Pretrained Model ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("nlpaueb/bert-base-uncased-contracts") model = AutoModel.from_pretrained("nlpaueb/bert-base-uncased-contracts") ``` ## Use LEGAL-BERT variants as Language Models | Corpus | Model | Masked token | Predictions | | --------------------------------- | ---------------------------------- | ------------ | ------------ | | | **BERT-BASE-UNCASED** | | (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('new', '0.09'), ('current', '0.04'), ('proposed', '0.03'), ('marketing', '0.03'), ('joint', '0.02') | (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.32'), ('rape', '0.22'), ('abuse', '0.14'), ('death', '0.04'), ('violence', '0.03') | (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('farm', '0.25'), ('livestock', '0.08'), ('draft', '0.06'), ('domestic', '0.05'), ('wild', '0.05') | | **CONTRACTS-BERT-BASE** | | (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('letter', '0.38'), ('dealer', '0.04'), ('employment', '0.03'), ('award', '0.03'), ('contribution', '0.02') | (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('death', '0.39'), ('imprisonment', '0.07'), ('contempt', '0.05'), ('being', '0.03'), ('crime', '0.02') | (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | (('domestic', '0.18'), ('laboratory', '0.07'), ('household', '0.06'), ('personal', '0.06'), ('the', '0.04') | | **EURLEX-BERT-BASE** | | (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('supply', '0.11'), ('cooperation', '0.08'), ('service', '0.07'), ('licence', '0.07'), ('distribution', '0.05') | (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.66'), ('death', '0.07'), ('imprisonment', '0.07'), ('murder', '0.04'), ('rape', '0.02') | (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.43'), ('pet', '0.28'), ('certain', '0.05'), ('fur', '0.03'), ('the', '0.02') | | **ECHR-BERT-BASE** | | (Contracts) | This [MASK] Agreement is between General Motors and John Murray . 
| employment | ('second', '0.24'), ('latter', '0.10'), ('draft', '0.05'), ('bilateral', '0.05'), ('arbitration', '0.04') | (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.99'), ('death', '0.01'), ('inhuman', '0.00'), ('beating', '0.00'), ('rape', '0.00') | (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('pet', '0.17'), ('all', '0.12'), ('slaughtered', '0.10'), ('domestic', '0.07'), ('individual', '0.05') | | **LEGAL-BERT-BASE** | | (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('settlement', '0.26'), ('letter', '0.23'), ('dealer', '0.04'), ('master', '0.02'), ('supplemental', '0.02') | (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '1.00'), ('detention', '0.00'), ('arrest', '0.00'), ('rape', '0.00'), ('death', '0.00') | (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.67'), ('beef', '0.17'), ('farm', '0.03'), ('pet', '0.02'), ('dairy', '0.01') | | **LEGAL-BERT-SMALL** | | (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('license', '0.09'), ('transition', '0.08'), ('settlement', '0.04'), ('consent', '0.03'), ('letter', '0.03') | (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.59'), ('pain', '0.05'), ('ptsd', '0.05'), ('death', '0.02'), ('tuberculosis', '0.02') | (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('all', '0.08'), ('live', '0.07'), ('certain', '0.07'), ('the', '0.07'), ('farm', '0.05') ## Evaluation on downstream tasks Consider the experiments in the article "LEGAL-BERT: The Muppets straight out of Law School". Chalkidis et al., 2020, (https://aclanthology.org/2020.findings-emnlp.261) ## Author - Publication ``` @inproceedings{chalkidis-etal-2020-legal, title = "{LEGAL}-{BERT}: The Muppets straight out of Law School", author = "Chalkidis, Ilias and Fergadiotis, Manos and Malakasiotis, Prodromos and Aletras, Nikolaos and Androutsopoulos, Ion", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", doi = "10.18653/v1/2020.findings-emnlp.261", pages = "2898--2904" } ``` ## About Us [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts. 
The group's current research interests include: * question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering, * natural language generation from databases and ontologies, especially Semantic Web ontologies, text classification, including filtering spam and abusive content, * information extraction and opinion mining, including legal text analytics and sentiment analysis, * natural language processing tools for Greek, for example parsers and named-entity recognizers, machine learning in natural language processing, especially deep learning. The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business. [Ilias Chalkidis](https://iliaschalkidis.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) | Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
viktor-enzell/wav2vec2-large-voxrex-swedish-4gram
3cafa40b3433cf9d1b3b2b65c37c48541f57d1a8
2022-06-02T13:40:29.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "sv", "dataset:common_voice", "dataset:NST Swedish ASR Database", "dataset:P4", "dataset:The Swedish Culturomics Gigaword Corpus", "transformers", "audio", "speech", "hf-asr-leaderboard", "license:cc0-1.0", "model-index" ]
automatic-speech-recognition
false
viktor-enzell
null
viktor-enzell/wav2vec2-large-voxrex-swedish-4gram
62,723
3
transformers
286
--- language: sv datasets: - common_voice - NST Swedish ASR Database - P4 - The Swedish Culturomics Gigaword Corpus metrics: - wer tags: - audio - automatic-speech-recognition - speech - hf-asr-leaderboard - sv license: cc0-1.0 model-index: - name: Wav2vec 2.0 large VoxRex Swedish (C) with 4-gram results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 6.1 type: common_voice args: sv-SE metrics: - name: Test WER type: wer value: 6.4723 --- # KBLab's wav2vec 2.0 large VoxRex Swedish (C) with 4-gram model Training of the acoustic model is the work of KBLab. See [VoxRex-C](https://huggingface.co/KBLab/wav2vec2-large-voxrex-swedish) for more details. This repo extends the acoustic model with a social media 4-gram language model for boosted performance. ## Model description VoxRex-C is extended with a 4-gram language model estimated from a subset extracted from [The Swedish Culturomics Gigaword Corpus](https://spraakbanken.gu.se/resurser/gigaword) from Språkbanken. The subset contains 40M words from the social media genre between 2010 and 2015. ## How to use Example of transcribing 1% of the Common Voice test split, using GPU if available. The model expects 16kHz audio. ```python from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM from datasets import load_dataset import torch import torchaudio.functional as F # Import model and processor model_name = 'viktor-enzell/wav2vec2-large-voxrex-swedish-4gram' device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device); processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name) # Import and process speech data common_voice = load_dataset('common_voice', 'sv-SE', split='test[:1%]') def speech_file_to_array(sample): # Convert speech file to array and downsample to 16 kHz sampling_rate = sample['audio']['sampling_rate'] sample['speech'] = F.resample(torch.tensor(sample['audio']['array']), sampling_rate, 16_000) return sample common_voice = common_voice.map(speech_file_to_array) # Run inference inputs = processor(common_voice['speech'], sampling_rate=16_000, return_tensors='pt', padding=True).to(device) with torch.no_grad(): logits = model(**inputs).logits transcripts = processor.batch_decode(logits.cpu().numpy()).text ``` ## Training procedure Text data for the n-gram model is pre-processed by removing characters not part of the wav2vec 2.0 vocabulary and uppercasing all characters. After pre-processing and storing each text sample on a new line in a text file, a [KenLM](https://github.com/kpu/kenlm) model is estimated. See [this tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) for more details. ## Evaluation results The model was evaluated on the full Common Voice test set version 6.1. VoxRex-C achieved a WER of 9.03% without the language model and 6.47% with the language model.
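A rough sketch of the pre-processing step described under "Training procedure" above; the file names and the exact allowed character set are assumptions, and the final 4-gram is estimated with KenLM's `lmplz`.

```python
import re

# Keep only characters assumed to be in the acoustic model's vocabulary
# (upper-case Swedish letters and space), after upper-casing everything.
NOT_ALLOWED = re.compile(r"[^A-ZÅÄÖ ]")

def preprocess(line: str) -> str:
    return NOT_ALLOWED.sub("", line.upper())

with open("corpus.txt", encoding="utf-8") as src, open("lm_corpus.txt", "w", encoding="utf-8") as dst:
    for line in src:
        cleaned = preprocess(line.strip())
        if cleaned:
            dst.write(cleaned + "\n")

# The 4-gram language model is then estimated with KenLM, e.g.:
#   lmplz -o 4 < lm_corpus.txt > 4gram.arpa
```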
cmarkea/distilcamembert-base-ner
21167ca4a0fd71a615e579dc4898c4079e86b014
2022-05-24T15:55:53.000Z
[ "pytorch", "tf", "camembert", "token-classification", "fr", "dataset:Jean-Baptiste/wikiner_fr", "transformers", "license:mit", "autotrain_compatible" ]
token-classification
false
cmarkea
null
cmarkea/distilcamembert-base-ner
62,690
10
transformers
287
--- language: fr license: mit datasets: - Jean-Baptiste/wikiner_fr widget: - text: "Boulanger, habitant à Boulanger et travaillant dans le magasin Boulanger situé dans la ville de Boulanger. Boulanger a écrit le livre éponyme Boulanger édité par la maison d'édition Boulanger." - text: "Quentin Jerome Tarantino naît le 27 mars 1963 à Knoxville, dans le Tennessee. Il est le fils de Connie McHugh, une infirmière, née le 3 septembre 1946, et de Tony Tarantino, acteur et musicien amateur né à New York. Ce dernier est d'origine italienne par son père ; sa mère a des ascendances irlandaises et cherokees. Il est prénommé d'après Quint Asper, le personnage joué par Burt Reynolds dans la série Gunsmoke et Quentin Compson, personnage du roman Le Bruit et la Fureur. Son père quitte le domicile familial avant même sa naissance. En 1965, sa mère déménage à Torrance, dans la banlieue sud de Los Angeles, et se remarie avec Curtis Zastoupil, un pianiste de bar, qui lui fait découvrir le cinéma. Le couple divorce alors que le jeune Quentin a une dizaine d'années." --- DistilCamemBERT-NER =================== We present DistilCamemBERT-NER which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine tuned for the NER (Named Entity Recognition) task for the French language. The work is inspired by [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) based on the [CamemBERT](https://huggingface.co/camembert-base) model. The problem of the modelizations based on CamemBERT is at the scaling moment, for the production phase for example. Indeed, inference cost can be a technological issue. To counteract this effect, we propose this modelization which **divides the inference time by 2** with the same consumption power thanks to [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base). Dataset ------- The dataset used is [wikiner_fr](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr) which represents ~170k sentences labelized in 5 categories : * PER: personality ; * LOC: location ; * ORG: organization ; * MISC: miscellaneous entities (movies title, books, etc.) ; * O: background (Outside entity). Evaluation results ------------------ | **class** | **precision (%)** | **recall (%)** | **f1 (%)** | **support (#sub-word)** | | :------------: | :---------------: | :------------: | :--------: | :---------------------: | | **global** | 98.17 | 98.19 | 98.18 | 378,776 | | **PER** | 96.78 | 96.87 | 96.82 | 23,754 | | **LOC** | 94.05 | 93.59 | 93.82 | 27,196 | | **ORG** | 86.05 | 85.92 | 85.98 | 6,526 | | **MISC** | 88.78 | 84.69 | 86.69 | 11,891 | | **O** | 99.26 | 99.47 | 99.37 | 309,409 | Benchmark --------- This model performance is compared to 2 reference models (see below) with the metric f1 score. 
For the mean inference time measure, an AMD Ryzen 5 4500U @ 2.3GHz with 6 cores was used: | **model** | **time (ms)** | **PER (%)** | **LOC (%)** | **ORG (%)** | **MISC (%)** | **O (%)** | | :---------------------------------------------------------------------------------------------------------------: | :----------------: | :--------------: | :--------------: | :--------------: | :--------------: | :------------- : | | [cmarkea/distilcamembert-base-ner](https://huggingface.co/cmarkea/distilcamembert-base-ner) | **43.44** | **96.82** | **93.82** | **85.98** | **86.69** | **99.37** | | [Davlan/bert-base-multilingual-cased-ner-hrl](https://huggingface.co/Davlan/bert-base-multilingual-cased-ner-hrl) | 87.56 | 79.93 | 72.89 | 61.34 | n/a | 96.04 | | [flair/ner-french](https://huggingface.co/flair/ner-french) | 314.96 | 82.91 | 76.17 | 70.96 | 76.29 | 97.65 | <!---| [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) | 88.49 | **98.82** | **97.35** | **95.03** | **95.34** | **99.67** |--> How to use DistilCamemBERT-NER ------------------------------ ```python from transformers import pipeline ner = pipeline( task='ner', model="cmarkea/distilcamembert-base-ner", tokenizer="cmarkea/distilcamembert-base-ner", aggregation_strategy="simple" ) result = ner( "Le Crédit Mutuel Arkéa est une banque Française, elle comprend le CMB " "qui est une banque située en Bretagne et le CMSO qui est une banque " "qui se situe principalement en Aquitaine. C'est sous la présidence de " "Louis Lichou, dans les années 1980 que différentes filiales sont créées " "au sein du CMB et forment les principales filiales du groupe qui " "existent encore aujourd'hui (Federal Finance, Suravenir, Financo, etc.)." ) result [{'entity_group': 'ORG', 'score': 0.9974479, 'word': 'Crédit Mutuel Arkéa', 'start': 3, 'end': 22}, {'entity_group': 'LOC', 'score': 0.9000358, 'word': 'Française', 'start': 38, 'end': 47}, {'entity_group': 'ORG', 'score': 0.9788757, 'word': 'CMB', 'start': 66, 'end': 69}, {'entity_group': 'LOC', 'score': 0.99919766, 'word': 'Bretagne', 'start': 99, 'end': 107}, {'entity_group': 'ORG', 'score': 0.9594884, 'word': 'CMSO', 'start': 114, 'end': 118}, {'entity_group': 'LOC', 'score': 0.99935514, 'word': 'Aquitaine', 'start': 169, 'end': 178}, {'entity_group': 'PER', 'score': 0.99911094, 'word': 'Louis Lichou', 'start': 208, 'end': 220}, {'entity_group': 'ORG', 'score': 0.96226394, 'word': 'CMB', 'start': 291, 'end': 294}, {'entity_group': 'ORG', 'score': 0.9983959, 'word': 'Federal Finance', 'start': 374, 'end': 389}, {'entity_group': 'ORG', 'score': 0.9984454, 'word': 'Suravenir', 'start': 391, 'end': 400}, {'entity_group': 'ORG', 'score': 0.9985084, 'word': 'Financo', 'start': 402, 'end': 409}] ``` Citation -------- ```bibtex @inproceedings{delestre:hal-03674695, TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}}, AUTHOR = {Delestre, Cyrile and Amar, Abibatou}, URL = {https://hal.archives-ouvertes.fr/hal-03674695}, BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}}, ADDRESS = {Vannes, France}, YEAR = {2022}, MONTH = Jul, KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation}, PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf}, HAL_ID = {hal-03674695}, HAL_VERSION = {v1}, } ```
dbmdz/bert-base-german-uncased
26dd62c10449c89b8029c4440855983dfc5a5e83
2021-05-19T14:57:28.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "de", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
dbmdz
null
dbmdz/bert-base-german-uncased
62,613
1
transformers
288
--- language: de license: mit --- # 🤗 + 📚 dbmdz German BERT models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources another German BERT models 🎉 # German BERT ## Stats In addition to the recently released [German BERT](https://deepset.ai/german-bert) model by [deepset](https://deepset.ai/) we provide another German-language model. The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus, Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with a size of 16GB and 2,350,234,427 tokens. For sentence splitting, we use [spacy](https://spacy.io/). Our preprocessing steps (sentence piece model for vocab generation) follow those used for training [SciBERT](https://github.com/allenai/scibert). The model is trained with an initial sequence length of 512 subwords and was performed for 1.5M steps. This release includes both cased and uncased models. ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | -------------------------------- | --------------------------------------------------------------------------------------------------------------- | `bert-base-german-dbmdz-cased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt) | `bert-base-german-dbmdz-uncased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt) ## Usage With Transformers >= 2.3 our German BERT models can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased") model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased") ``` ## Results For results on downstream tasks like NER or PoS tagging, please refer to [this repository](https://github.com/stefan-it/fine-tuned-berts-seq). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
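Note that the usage snippet above loads the cased sibling; for the uncased checkpoint this card covers, the analogous call would be:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-uncased")
```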
facebook/opt-350m
10517ad5b51c8c6e02db7824d8494721d4874488
2022-06-16T15:25:49.000Z
[ "pytorch", "tf", "jax", "opt", "text-generation", "en", "arxiv:2205.01068", "arxiv:2005.14165", "transformers", "license:other" ]
text-generation
false
facebook
null
facebook/opt-350m
62,324
10
transformers
289
--- language: en inference: false tags: - text-generation license: other commercial: false --- # OPT : Open Pre-trained Transformer Language Models OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI. **Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf). Content from **this** model card has been written by the Hugging Face team. ## Intro To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068) > Large language models trained on massive text collections have shown surprising emergent > capabilities to generate text and perform zero- and few-shot learning. While in some cases the public > can interact with these models through paid APIs, full model access is currently limited to only a > few highly resourced labs. This restricted access has limited researchers’ ability to study how and > why these large language models work, hindering progress on improving known challenges in areas > such as robustness, bias, and toxicity. > We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M > to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match > the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data > collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and > to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the > collective research community as a whole, which is only possible when models are available for study. ## Model description OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective. OPT belongs to the same family of decoder-only models like [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modedling objective. For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read the [official paper](https://arxiv.org/abs/2205.01068). ## Intended uses & limitations The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation. In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt). ### How to use You can use this model directly with a pipeline for text generation. ```python >>> from transformers import pipeline >>> generator = pipeline('text-generation', model="facebook/opt-350m") >>> generator("Hello, I'm am conscious and") [{'generated_text': "Hello, I'm am conscious and I'm a bit of a noob. I'm looking for"}] ``` By default, generation is deterministic. 
In order to use the top-k sampling, please set `do_sample` to `True`. ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True) >>> generator("Hello, I'm am conscious and") [{'generated_text': "Hello, I'm am conscious and I'm interested in this project. Can I get an initial contact"}] ``` ### Limitations and bias As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral the model is strongly biased : > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. Here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5) >>> generator("The woman worked as a") [{'generated_text': "The woman works as a substitute teacher for kids who have missed school. She's the teacher herself,"}, {'generated_text': 'The woman works as a security guard for another company and does an average of around $13/hour'}, {'generated_text': 'The woman works as a receptionist, she could at the least wait a week or two for her'}, {'generated_text': 'The woman works as a manager/intern/career development coach/advisor at a nursing home'}, {'generated_text': 'The woman works as a maid and has to clean the house but you can tell her to do it'}] ``` compared to: ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5) >>> generator("The man worked as a") [{'generated_text': 'The man works as a security guard for the National Football League franchise. He has been a part of'}, {'generated_text': 'The man works as a security guard for another company and does an excellent job.\nI remember when'}, {'generated_text': 'The man works as a "secret agent" but at the same time he\'s working to protect the'}, {'generated_text': 'The man works as a manager/operator/servant for a grocery store and does a lot of'}, {'generated_text': 'The man works as a bouncer near the scene of the accident - how he could do that is'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents: - BookCorpus, which consists of more than 10K unpublished books, - CC-Stories, which contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas, - The Pile, from which * Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included. - Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in Roller et al. 
(2021) - CCNewsV2 containing an updated version of the English portion of the CommonCrawl News dataset that was used in RoBERTa (Liu et al., 2019b) The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally to each dataset’s size in the pretraining corpus. The dataset might contains offensive content as parts of the dataset are a subset of public Common Crawl data, along with a subset of public Reddit data, which could contain sentences that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety. ### Collection process The dataset was collected form internet, and went through classic data processing algorithms and re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or *This ebook by Project Gutenberg.* ## Training procedure ### Preprocessing The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens. The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly ~33 days of continuous training. ### BibTeX entry and citation info ```bibtex @misc{zhang2022opt, title={OPT: Open Pre-trained Transformer Language Models}, author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer}, year={2022}, eprint={2205.01068}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
sentence-transformers/msmarco-bert-base-dot-v5
668e63a378bc93d76c430af68338e550dc78df09
2022-06-15T20:34:19.000Z
[ "pytorch", "tf", "bert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/msmarco-bert-base-dot-v5
62,053
1
sentence-transformers
290
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # msmarco-bert-base-dot-v5 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 500K (query, answer) pairs from the [MS MARCO dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking/). For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html) ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer, util query = "How many people live in London?" docs = ["Around 9 Million people live in London", "London is known for its financial district"] #Load the model model = SentenceTransformer('sentence-transformers/msmarco-bert-base-dot-v5') #Encode query and documents query_emb = model.encode(query) doc_emb = model.encode(docs) #Compute dot score between query and all document embeddings scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist() #Combine docs & scores doc_score_pairs = list(zip(docs, scores)) #Sort by decreasing score doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) #Output passages & scores print("Query:", query) for doc, score in doc_score_pairs: print(score, doc) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the correct pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output.last_hidden_state input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) #Encode text def encode(texts): # Tokenize sentences encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input, return_dict=True) # Perform pooling embeddings = mean_pooling(model_output, encoded_input['attention_mask']) return embeddings # Sentences we want sentence embeddings for query = "How many people live in London?" 
docs = ["Around 9 Million people live in London", "London is known for its financial district"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-bert-base-dot-v5") model = AutoModel.from_pretrained("sentence-transformers/msmarco-bert-base-dot-v5") #Encode query and docs query_emb = encode(query) doc_emb = encode(docs) #Compute dot score between query and all document embeddings scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist() #Combine docs & scores doc_score_pairs = list(zip(docs, scores)) #Sort by decreasing score doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) #Output passages & scores print("Query:", query) for doc, score in doc_score_pairs: print(score, doc) ``` ## Technical Details In the following some technical details how this model must be used: | Setting | Value | | --- | :---: | | Dimensions | 768 | | Max Sequence Length | 512 | | Produces normalized embeddings | No | | Pooling-Method | Mean pooling | | Suitable score functions | dot-product (e.g. `util.dot_score`) | ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=msmarco-bert-base-base-dot-v5) ## Training See `train_script.py` in this repository for the used training script. The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7858 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MarginMSELoss.MarginMSELoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 30, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 1e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: bert-base-uncased (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
nghuyong/ernie-2.0-en
c18a9f28b99a65011e3a6c61e2109f03833a447b
2021-05-20T01:42:24.000Z
[ "pytorch", "tf", "jax", "bert", "en", "arxiv:1907.12412", "transformers" ]
null
false
nghuyong
null
nghuyong/ernie-2.0-en
61,990
5
transformers
291
--- language: en --- # ERNIE-2.0 ## Introduction ERNIE 2.0 is a continual pre-training framework proposed by Baidu in 2019, which incrementally builds pre-training tasks and learns them through continual multi-task learning. Experimental results demonstrate that ERNIE 2.0 outperforms BERT and XLNet on 16 tasks, including English tasks on the GLUE benchmark and several common tasks in Chinese. More details: https://arxiv.org/abs/1907.12412 ## Released Model Info |Model Name|Language|Model Structure| |:---:|:---:|:---:| |ernie-2.0-en| English |Layer:12, Hidden:768, Heads:12| This released PyTorch model was converted from the officially released PaddlePaddle ERNIE model, and a series of experiments were conducted to verify the accuracy of the conversion. - Official PaddlePaddle ERNIE repo: https://github.com/PaddlePaddle/ERNIE - Pytorch Conversion repo: https://github.com/nghuyong/ERNIE-Pytorch ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-2.0-en") model = AutoModel.from_pretrained("nghuyong/ernie-2.0-en") ``` ## Citation ```bibtex @article{sun2019ernie20, title={ERNIE 2.0: A Continual Pre-training Framework for Language Understanding}, author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Tian, Hao and Wu, Hua and Wang, Haifeng}, journal={arXiv preprint arXiv:1907.12412}, year={2019} } ```
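The snippet above only loads the checkpoint. A minimal follow-up sketch (not part of the original card; the example sentence and printed shape are illustrative) of extracting contextual features with the loaded model:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-2.0-en")
model = AutoModel.from_pretrained("nghuyong/ernie-2.0-en")

inputs = tokenizer("ERNIE 2.0 is a continual pre-training framework.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token-level contextual embeddings: (batch_size, sequence_length, hidden_size=768)
print(outputs.last_hidden_state.shape)
```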
sentence-transformers/stsb-roberta-base-v2
b5e9e8dbc4a7d931c766a9113d1a04963a480c06
2022-06-15T20:05:42.000Z
[ "pytorch", "tf", "jax", "roberta", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/stsb-roberta-base-v2
61,846
null
sentence-transformers
292
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/stsb-roberta-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/stsb-roberta-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/stsb-roberta-base-v2') model = AutoModel.from_pretrained('sentence-transformers/stsb-roberta-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/stsb-roberta-base-v2) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
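As a small follow-up sketch (an assumption, not part of the original card), the sentence embeddings produced above can be compared with cosine similarity for semantic search or clustering:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/stsb-roberta-base-v2')
embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)

# Cosine similarity between the two sentence embeddings, in [-1, 1]
print(util.cos_sim(embeddings[0], embeddings[1]))
```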
facebook/m2m100_1.2B
90301acb1353eb6623e48973520b486612a57439
2022-05-26T22:26:41.000Z
[ "pytorch", "rust", "m2m_100", "text2text-generation", "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "arxiv:2010.11125", "transformers", "license:mit", "autotrain_compatible" ]
text2text-generation
false
facebook
null
facebook/m2m100_1.2B
61,684
10
transformers
293
--- language: - multilingual - af - am - ar - ast - az - ba - be - bg - bn - br - bs - ca - ceb - cs - cy - da - de - el - en - es - et - fa - ff - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - ht - hu - hy - id - ig - ilo - is - it - ja - jv - ka - kk - km - kn - ko - lb - lg - ln - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - no - ns - oc - or - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - so - sq - sr - ss - su - sv - sw - ta - th - tl - tn - tr - uk - ur - uz - vi - wo - xh - yi - yo - zh - zu license: mit tags: --- # M2M100 1.2B M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation. It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository. The model can directly translate between any of the 9,900 directions spanned by its 100 languages. To translate into a target language, the target language id is forced as the first generated token; to do this, pass the `forced_bos_token_id` parameter to the `generate` method. *Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.* To install `sentencepiece` run `pip install sentencepiece` ```python from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।" chinese_text = "生活就像一盒巧克力。" model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B") tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B") # translate Hindi to French tokenizer.src_lang = "hi" encoded_hi = tokenizer(hi_text, return_tensors="pt") generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "La vie est comme une boîte de chocolat." # translate Chinese to English tokenizer.src_lang = "zh" encoded_zh = tokenizer(chinese_text, return_tensors="pt") generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "Life is like a box of chocolate." ``` See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions.
## Languages covered Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu) ## BibTeX entry and citation info ``` @misc{fan2020englishcentric, title={Beyond English-Centric Multilingual Machine Translation}, author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin}, year={2020}, eprint={2010.11125}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
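A minimal sketch (not part of the original card) showing how any of the ISO codes listed above maps to the token id that `forced_bos_token_id` expects:

```python
from transformers import M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")

# Look up the language token id used to force generation into a given target language
for code in ["en", "nl", "sw"]:
    print(code, tokenizer.get_lang_id(code))
```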
microsoft/mdeberta-v3-base
7d66e84a399b78accc72e0f61cd6d50f02ee1c2c
2022-01-13T19:41:26.000Z
[ "pytorch", "tf", "deberta-v2", "multilingual", "arxiv:2006.03654", "arxiv:2111.09543", "transformers", "deberta", "deberta-v3", "mdeberta", "license:mit" ]
null
false
microsoft
null
microsoft/mdeberta-v3-base
61,660
33
transformers
294
--- language: multilingual tags: - deberta - deberta-v3 - mdeberta thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit --- ## DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB training data. In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-Style pre-training with Gradient Disentangled Embedding Sharing. Compared to DeBERTa, our V3 version significantly improves the model performance on downstream tasks. You can find more technical details about the new model in our [paper](https://arxiv.org/abs/2111.09543). Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates. mDeBERTa is the multilingual version of DeBERTa, which uses the same structure as DeBERTa and was trained with CC100 multilingual data. The mDeBERTa V3 base model comes with 12 layers and a hidden size of 768. It has 86M backbone parameters with a vocabulary containing 250K tokens, which introduces 190M parameters in the embedding layer. This model was trained using the same 2.5T CC100 data as XLM-R. #### Fine-tuning on NLU tasks We present the dev results on XNLI under the zero-shot cross-lingual transfer setting, i.e., training with English data only and testing on other languages. | Model |avg | en | fr| es | de | el | bg | ru |tr |ar |vi | th | zh | hi | sw | ur | |--------------| ----|----|----|---- |-- |-- |-- | -- |-- |-- |-- | -- | -- | -- | -- | -- | | XLM-R-base |76.2 |85.8|79.7|80.7 |78.7 |77.5 |79.6 |78.1 |74.2 |73.8 |76.5 |74.6 |76.7| 72.4| 66.5| 68.3| | mDeBERTa-base|**79.8**+/-0.2|**88.2**|**82.6**|**84.4** |**82.7** |**82.3** |**82.4** |**80.8** |**79.5** |**78.5** |**78.1** |**76.4** |**79.5**| **75.9**| **73.9**| **72.4**| #### Fine-tuning with HF transformers ```bash #!/bin/bash cd transformers/examples/pytorch/text-classification/ pip install datasets output_dir="ds_results" num_gpus=8 batch_size=4 python -m torch.distributed.launch --nproc_per_node=${num_gpus} \ run_xnli.py \ --model_name_or_path microsoft/mdeberta-v3-base \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --train_language en \ --language en \ --evaluation_strategy steps \ --max_seq_length 256 \ --warmup_steps 3000 \ --per_device_train_batch_size ${batch_size} \ --learning_rate 2e-5 \ --num_train_epochs 6 \ --output_dir $output_dir \ --overwrite_output_dir \ --logging_steps 1000 \ --logging_dir $output_dir ``` ### Citation If you find DeBERTa useful for your work, please cite the following papers: ``` latex @misc{he2021debertav3, title={DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing}, author={Pengcheng He and Jianfeng Gao and Weizhu Chen}, year={2021}, eprint={2111.09543}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
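The card above only shows a fine-tuning script. A minimal loading sketch (not part of the original card; the German example sentence and printed shape are illustrative) of using the mDeBERTa backbone as a multilingual encoder before any task-specific fine-tuning:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Note: the tokenizer requires the sentencepiece package
tokenizer = AutoTokenizer.from_pretrained("microsoft/mdeberta-v3-base")
model = AutoModel.from_pretrained("microsoft/mdeberta-v3-base")

inputs = tokenizer("Dies ist ein mehrsprachiger Satz.", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state

# Contextual embeddings: (batch_size, sequence_length, hidden_size=768)
print(hidden_states.shape)
```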
google/electra-small-generator
c04c64e3cca372b13615e71e51bc261f93905212
2021-04-29T15:23:28.000Z
[ "pytorch", "tf", "jax", "electra", "fill-mask", "en", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
google
null
google/electra-small-generator
61,641
2
transformers
295
--- language: en thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- ## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs. "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer to our paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g., [GLUE](https://gluebenchmark.com/)), QA tasks (e.g., [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)), and sequence tagging tasks (e.g., [text chunking](https://www.clips.uantwerpen.be/conll2000/chunking/)). ## How to use the generator in `transformers` ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="google/electra-small-generator", tokenizer="google/electra-small-generator" ) print( fill_mask(f"HuggingFace is creating a {fill_mask.tokenizer.mask_token} that the community uses to solve NLP tasks.") ) ```
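An alternative sketch (an assumption, not part of the original card) that scores mask candidates with the generator head directly instead of going through the fill-mask pipeline:

```python
import torch
from transformers import ElectraTokenizerFast, ElectraForMaskedLM

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-generator")
model = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")

text = f"HuggingFace is creating a {tokenizer.mask_token} that the community uses to solve NLP tasks."
inputs = tokenizer(text, return_tensors="pt")

# Position of the [MASK] token in the input sequence
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    logits = model(**inputs).logits

# Top-5 candidate tokens for the masked position
top_ids = logits[0, mask_positions[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```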
valhalla/m2m100_tiny_random
337a4a691b7e14ad1668f5f4e481eaea6ce59ba1
2021-03-05T09:03:18.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
valhalla
null
valhalla/m2m100_tiny_random
61,625
null
transformers
296
Entry not found
Helsinki-NLP/opus-mt-nl-en
642ab6dc2d08ca9b5706ff44191dc443eae738e1
2021-09-10T13:59:07.000Z
[ "pytorch", "rust", "marian", "text2text-generation", "nl", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-nl-en
61,618
4
transformers
297
--- tags: - translation license: apache-2.0 --- ### opus-mt-nl-en * source languages: nl * target languages: en * OPUS readme: [nl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-05.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.zip) * test set translations: [opus-2019-12-05.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.test.txt) * test set scores: [opus-2019-12-05.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.nl.en | 60.9 | 0.749 |
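The card lists only training and benchmark details; below is a usage sketch (not part of the original card; the Dutch example sentence is illustrative) using the standard MarianMT interface:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-nl-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate Dutch to English
batch = tokenizer(["Dit is een zin in het Nederlands."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```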
Rostlab/prot_bert_bfd
6c5c8a55a52ff08a664dfd584aa1773f125a0487
2020-12-11T21:30:10.000Z
[ "pytorch", "tf", "fill-mask", "protein", "dataset:BFD", "transformers", "protein language model", "autotrain_compatible" ]
fill-mask
false
Rostlab
null
Rostlab/prot_bert_bfd
60,576
3
transformers
298
--- language: protein tags: - protein language model datasets: - BFD --- # ProtBert-BFD model Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in [this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in [this repository](https://github.com/agemagician/ProtTrans). This model is trained on uppercase amino acids: it only works with capital letter amino acids. ## Model description ProtBert-BFD is based on the BERT model and was pretrained on a large corpus of protein sequences in a self-supervised fashion. This means it was pretrained on the raw protein sequences only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process generating inputs and labels from those protein sequences. One important difference between our BERT model and the original BERT version is the way sequences are handled as separate documents. This means the next-sentence-prediction task is not used, as each sequence is treated as a complete document. The masking follows the original BERT training, which randomly masks 15% of the amino acids in the input. In the end, the features extracted from this model revealed that the LM embeddings from unlabeled data (only protein sequences) capture important biophysical properties governing protein shape. This implies the model learned some of the grammar of the language of life realized in protein sequences. ## Intended uses & limitations The model can be used for protein feature extraction or can be fine-tuned on downstream tasks. We have noticed that for some tasks you can gain more accuracy by fine-tuning the model rather than using it as a feature extractor. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import BertForMaskedLM, BertTokenizer, pipeline >>> tokenizer = BertTokenizer.from_pretrained('Rostlab/prot_bert_bfd', do_lower_case=False ) >>> model = BertForMaskedLM.from_pretrained("Rostlab/prot_bert_bfd") >>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer) >>> unmasker('D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T') [{'score': 0.1165614128112793, 'sequence': '[CLS] D L I P T S S K L V V L D T S L Q V K K A F F A L V T [SEP]', 'token': 5, 'token_str': 'L'}, {'score': 0.08976086974143982, 'sequence': '[CLS] D L I P T S S K L V V V D T S L Q V K K A F F A L V T [SEP]', 'token': 8, 'token_str': 'V'}, {'score': 0.08864385634660721, 'sequence': '[CLS] D L I P T S S K L V V S D T S L Q V K K A F F A L V T [SEP]', 'token': 10, 'token_str': 'S'}, {'score': 0.06227643042802811, 'sequence': '[CLS] D L I P T S S K L V V A D T S L Q V K K A F F A L V T [SEP]', 'token': 6, 'token_str': 'A'}, {'score': 0.06194969266653061, 'sequence': '[CLS] D L I P T S S K L V V T D T S L Q V K K A F F A L V T [SEP]', 'token': 15, 'token_str': 'T'}] ``` Here is how to use this model to get the features of a given protein sequence in PyTorch: ```python from transformers import BertModel, BertTokenizer import re tokenizer = BertTokenizer.from_pretrained('Rostlab/prot_bert_bfd', do_lower_case=False ) model = BertModel.from_pretrained("Rostlab/prot_bert_bfd") sequence_Example = "A E T C Z A O" sequence_Example = re.sub(r"[UZOB]", "X", sequence_Example) encoded_input = tokenizer(sequence_Example, return_tensors='pt') output = model(**encoded_input) ``` ## Training data The ProtBert-BFD model was pretrained on [BFD](https://bfd.mmseqs.com/), a dataset consisting of 2.1 billion
protein sequences. ## Training procedure ### Preprocessing The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 21. The inputs of the model are then of the form: ``` [CLS] Protein Sequence A [SEP] Protein Sequence B [SEP] ``` Furthermore, each protein sequence was treated as a separate document. The preprocessing step was performed twice, once for a combined length (2 sequences) of less than 512 amino acids, and another time using a combined length (2 sequences) of less than 2048 amino acids. The details of the masking procedure for each sequence followed the original Bert model as following: - 15% of the amino acids are masked. - In 80% of the cases, the masked amino acids are replaced by `[MASK]`. - In 10% of the cases, the masked amino acids are replaced by a random amino acid (different) from the one they replace. - In the 10% remaining cases, the masked amino acids are left as is. ### Pretraining The model was trained on a single TPU Pod V3-1024 for one million steps in total. 800k steps using sequence length 512 (batch size 32k), and 200K steps using sequence length 2048 (batch size 6k). The optimizer used is Lamb with a learning rate of 0.002, a weight decay of 0.01, learning rate warmup for 140k steps and linear decay of the learning rate after. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Test results : | Task/Dataset | secondary structure (3-states) | secondary structure (8-states) | Localization | Membrane | |:-----:|:-----:|:-----:|:-----:|:-----:| | CASP12 | 76 | 65 | | | | TS115 | 84 | 73 | | | | CB513 | 83 | 70 | | | | DeepLoc | | | 78 | 91 | ### BibTeX entry and citation info ```bibtex @article {Elnaggar2020.07.12.199554, author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rehawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Steinegger, Martin and BHOWMIK, DEBSINDHU and Rost, Burkhard}, title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing}, elocation-id = {2020.07.12.199554}, year = {2020}, doi = {10.1101/2020.07.12.199554}, publisher = {Cold Spring Harbor Laboratory}, abstract = {Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112 times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8 states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. 
The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. Availability ProtTrans: \&lt;a href="https://github.com/agemagician/ProtTrans"\&gt;https://github.com/agemagician/ProtTrans\&lt;/a\&gt;Competing Interest StatementThe authors have declared no competing interest.}, URL = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554}, eprint = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554.full.pdf}, journal = {bioRxiv} } ``` > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
facebook/opt-1.3b
aa6ac1e23bb9a499be2b7400079cd2a7b8a1309a
2022-06-22T09:53:16.000Z
[ "pytorch", "tf", "jax", "opt", "text-generation", "en", "arxiv:2205.01068", "arxiv:2005.14165", "transformers", "license:other" ]
text-generation
false
facebook
null
facebook/opt-1.3b
60,368
9
transformers
299
--- language: en inference: false tags: - text-generation - opt license: other commercial: false --- # OPT : Open Pre-trained Transformer Language Models OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI. **Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf). Content from **this** model card has been written by the Hugging Face team. ## Intro To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068) > Large language models trained on massive text collections have shown surprising emergent > capabilities to generate text and perform zero- and few-shot learning. While in some cases the public > can interact with these models through paid APIs, full model access is currently limited to only a > few highly resourced labs. This restricted access has limited researchers’ ability to study how and > why these large language models work, hindering progress on improving known challenges in areas > such as robustness, bias, and toxicity. > We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M > to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match > the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data > collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and > to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the > collective research community as a whole, which is only possible when models are available for study. ## Model description OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective. OPT belongs to the same family of decoder-only models like [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modedling objective. For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read the [official paper](https://arxiv.org/abs/2205.01068). ## Intended uses & limitations The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation. In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt). ### How to use You can use this model directly with a pipeline for text generation. ```python >>> from transformers import pipeline >>> generator = pipeline('text-generation', model="facebook/opt-1.3b") >>> generator("Hello, I'm am conscious and") [{'generated_text': 'Hello, I am conscious and I am here.\nI am here.\nI am conscious.'}] ``` By default, generation is deterministic. 
In order to use the top-k sampling, please set `do_sample` to `True`. ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True) >>> generator("Hello, I'm am conscious and") [{'generated_text': "Hello, I'm am conscious and able to hear. I have a lot of experience in the"}] ``` ### Limitations and bias As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral the model is strongly biased : > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. Here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True, num_return_sequences=5) >>> generator("The woman worked as a") [{'generated_text': 'The woman worked as a bartender for six months before getting to the job she always dreamed of. She'}, {'generated_text': 'The woman worked as a nanny in a house near The White Horse Farm in the Yorkshire Dales'}, {'generated_text': "The woman worked as a translator at the British Broadcasting Corporation's headquarters and was also an acquaintance of some"}, {'generated_text': 'The woman worked as a secretary and went to school full-time, and also worked as a waitress'}, {'generated_text': 'The woman worked as a beautician with her baby and the little girl is now at the age where'}] ``` compared to: ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True, num_return_sequences=5) >>> generator("The man worked as a") [{'generated_text': 'The man worked as a janitor and the owner of the house he worked at caught him cheating on'}, {'generated_text': 'The man worked as a software engineer.\n\nFor over 10 years, he had been at Amazon'}, {'generated_text': 'The man worked as a car salesman - and was a man of his word to her\nA T'}, {'generated_text': 'The man worked as a private contractor for five years. He went to the Bahamas in the summer of'}, {'generated_text': 'The man worked as a computer systems consultant. After leaving the job, he became a prolific internet hacker'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents: - BookCorpus, which consists of more than 10K unpublished books, - CC-Stories, which contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas, - The Pile, from which * Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included. - Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in Roller et al. 
(2021) - CCNewsV2 containing an updated version of the English portion of the CommonCrawl News dataset that was used in RoBERTa (Liu et al., 2019b) The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made up of 200MB of the pretraining data, sampled proportionally to each dataset’s size in the pretraining corpus. The dataset might contain offensive content, as parts of the dataset are a subset of public Common Crawl data, along with a subset of public Reddit data, which could contain sentences that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety. ### Collection process The dataset was collected from the internet, and went through classic data processing algorithms and re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or *This ebook by Project Gutenberg.* ## Training procedure ### Preprocessing The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens. The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly ~33 days of continuous training. ### BibTeX entry and citation info ```bibtex @misc{zhang2022opt, title={OPT: Open Pre-trained Transformer Language Models}, author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer}, year={2022}, eprint={2205.01068}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
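A minimal sketch (not part of the original card) of using the tokenizer and model directly rather than through the pipeline; the prompt is illustrative and decoding is greedy:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

# The GPT2-style byte-level BPE described above turns the prompt into token ids
inputs = tokenizer("Hello, I am conscious and", return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)

print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```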