Dataset viewer column summary — modelId: string (4–81 chars); tags: sequence; pipeline_tag: string (17 classes); config: dict; downloads: int64 (0–59.7M); first_commit: timestamp or null; card: string (51–438k chars).
DARKVIP3R/DialoGPT-medium-Anakin
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- tags: - conversational --- # Anakin Skywalker DialoGPT Model
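The original card stops at the title, so the following is a minimal usage sketch rather than anything documented by the author; the prompt text and generation settings are purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DARKVIP3R/DialoGPT-medium-Anakin")
model = AutoModelForCausalLM.from_pretrained("DARKVIP3R/DialoGPT-medium-Anakin")

# Encode a single user turn, terminated by the end-of-sequence token.
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")

# Generate a reply; sampling settings here are illustrative, not from the card.
reply_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_p=0.9,
)

# Decode only the newly generated tokens (the bot's turn).
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```

This follows the standard single-turn DialoGPT pattern; the card itself does not document recommended generation parameters.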
DCU-NLP/bert-base-irish-cased-v1
[ "pytorch", "tf", "bert", "fill-mask", "transformers", "generated_from_keras_callback", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,244
null
--- tags: - generated_from_keras_callback model-index: - name: bert-base-irish-cased-v1 results: [] widget: - text: "Ceoltóir [MASK] ab ea Johnny Cash." --- # bert-base-irish-cased-v1 [gaBERT](https://aclanthology.org/2022.lrec-1.511/) is a BERT-base model trained on 7.9M Irish sentences. For more details, including the hyperparameters and pretraining corpora used please refer to our paper. ## Model description Encoder-based Transformer to be used to obtain features for finetuning for downstream tasks in Irish. ## Intended uses & limitations Some data used to pretrain gaBERT was scraped from the web which potentially contains ethically problematic text (bias, hate, adult content, etc.). Consequently, downstream tasks/applications using gaBERT should be thoroughly tested with respect to ethical considerations. ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Framework versions - Transformers 4.20.1 - TensorFlow 2.9.1 - Datasets 2.3.2 - Tokenizers 0.12.1 ### BibTeX entry and citation info If you use this model in your research, please consider citing our paper: ``` @inproceedings{barry-etal-2022-gabert, title = "ga{BERT} {---} an {I}rish Language Model", author = "Barry, James and Wagner, Joachim and Cassidy, Lauren and Cowap, Alan and Lynn, Teresa and Walsh, Abigail and {\'O} Meachair, M{\'\i}che{\'a}l J. and Foster, Jennifer", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.511", pages = "4774--4788", abstract = "The BERT family of neural language models have become highly popular due to their ability to provide sequences of text with rich context-sensitive token encodings which are able to generalise well to many NLP tasks. We introduce gaBERT, a monolingual BERT model for the Irish language. We compare our gaBERT model to multilingual BERT and the monolingual Irish WikiBERT, and we show that gaBERT provides better representations for a downstream parsing task. We also show how different filtering criteria, vocabulary size and the choice of subword tokenisation model affect downstream performance. We compare the results of fine-tuning a gaBERT model with an mBERT model for the task of identifying verbal multiword expressions, and show that the fine-tuned gaBERT model also performs better at this task. We release gaBERT and related code to the community.", } ```
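The card describes gaBERT as an encoder for obtaining features for downstream Irish tasks but gives no code. Below is a minimal feature-extraction sketch, not part of the original card; the example sentence is the widget text with the mask token removed.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DCU-NLP/bert-base-irish-cased-v1")
model = AutoModel.from_pretrained("DCU-NLP/bert-base-irish-cased-v1")

# Encode an Irish sentence and extract contextual token representations.
inputs = tokenizer("Ceoltóir ab ea Johnny Cash.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, sequence_length, 768)
```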
DCU-NLP/electra-base-irish-cased-discriminator-v1
[ "pytorch", "electra", "pretraining", "ga", "transformers", "irish", "license:apache-2.0" ]
null
{ "architectures": [ "ElectraForPreTraining" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: - ga license: apache-2.0 tags: - irish - electra widget: - text: "Ceoltóir [MASK] ab ea Johnny Cash." --- # gaELECTRA [gaELECTRA](https://aclanthology.org/2022.lrec-1.511/) is an ELECTRA model trained on 7.9M Irish sentences. For more details, including the hyperparameters and pretraining corpora used please refer to our paper. For fine-tuning this model on a token classification task, e.g. Named Entity Recognition, use the discriminator model. ### Limitations and bias Some data used to pretrain gaBERT was scraped from the web which potentially contains ethically problematic text (bias, hate, adult content, etc.). Consequently, downstream tasks/applications using gaBERT should be thoroughly tested with respect to ethical considerations. ### BibTeX entry and citation info If you use this model in your research, please consider citing our paper: ``` @inproceedings{barry-etal-2022-gabert, title = "ga{BERT} {---} an {I}rish Language Model", author = "Barry, James and Wagner, Joachim and Cassidy, Lauren and Cowap, Alan and Lynn, Teresa and Walsh, Abigail and {\'O} Meachair, M{\'\i}che{\'a}l J. and Foster, Jennifer", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.511", pages = "4774--4788", abstract = "The BERT family of neural language models have become highly popular due to their ability to provide sequences of text with rich context-sensitive token encodings which are able to generalise well to many NLP tasks. We introduce gaBERT, a monolingual BERT model for the Irish language. We compare our gaBERT model to multilingual BERT and the monolingual Irish WikiBERT, and we show that gaBERT provides better representations for a downstream parsing task. We also show how different filtering criteria, vocabulary size and the choice of subword tokenisation model affect downstream performance. We compare the results of fine-tuning a gaBERT model with an mBERT model for the task of identifying verbal multiword expressions, and show that the fine-tuned gaBERT model also performs better at this task. We release gaBERT and related code to the community.", } ```
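The card recommends the discriminator for fine-tuning on token classification tasks such as NER. A hedged sketch of how such a setup could be initialised with `transformers` follows; the label count and tag names are generic placeholders, not taken from the card or the paper.

```python
from transformers import AutoTokenizer, ElectraForTokenClassification

model_id = "DCU-NLP/electra-base-irish-cased-discriminator-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# num_labels and the tag names below are illustrative placeholders; they come
# from whatever NER tag set you fine-tune with, not from the model card.
model = ElectraForTokenClassification.from_pretrained(
    model_id,
    num_labels=9,
    id2label={0: "O", 1: "B-PER", 2: "I-PER", 3: "B-ORG", 4: "I-ORG",
              5: "B-LOC", 6: "I-LOC", 7: "B-MISC", 8: "I-MISC"},
)

# The classification head is newly initialised; the model must be fine-tuned
# (e.g. with the transformers Trainer) before its predictions are meaningful.
inputs = tokenizer("Ceoltóir ab ea Johnny Cash.", return_tensors="pt")
print(model(**inputs).logits.shape)  # (batch, sequence_length, num_labels)
```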
DCU-NLP/electra-base-irish-cased-generator-v1
[ "pytorch", "electra", "fill-mask", "ga", "transformers", "irish", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "ElectraForMaskedLM" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: - ga license: apache-2.0 tags: - irish - electra widget: - text: "Ceoltóir [MASK] ab ea Johnny Cash." --- # gaELECTRA [gaELECTRA](https://aclanthology.org/2022.lrec-1.511/) is an ELECTRA model trained on 7.9M Irish sentences. For more details, including the hyperparameters and pretraining corpora used please refer to our paper. For fine-tuning this model on a token classification task, e.g. Named Entity Recognition, use the discriminator model. ### Limitations and bias Some data used to pretrain gaBERT was scraped from the web which potentially contains ethically problematic text (bias, hate, adult content, etc.). Consequently, downstream tasks/applications using gaBERT should be thoroughly tested with respect to ethical considerations. ### BibTeX entry and citation info If you use this model in your research, please consider citing our paper: ``` @inproceedings{barry-etal-2022-gabert, title = "ga{BERT} {---} an {I}rish Language Model", author = "Barry, James and Wagner, Joachim and Cassidy, Lauren and Cowap, Alan and Lynn, Teresa and Walsh, Abigail and {\'O} Meachair, M{\'\i}che{\'a}l J. and Foster, Jennifer", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.511", pages = "4774--4788", abstract = "The BERT family of neural language models have become highly popular due to their ability to provide sequences of text with rich context-sensitive token encodings which are able to generalise well to many NLP tasks. We introduce gaBERT, a monolingual BERT model for the Irish language. We compare our gaBERT model to multilingual BERT and the monolingual Irish WikiBERT, and we show that gaBERT provides better representations for a downstream parsing task. We also show how different filtering criteria, vocabulary size and the choice of subword tokenisation model affect downstream performance. We compare the results of fine-tuning a gaBERT model with an mBERT model for the task of identifying verbal multiword expressions, and show that the fine-tuned gaBERT model also performs better at this task. We release gaBERT and related code to the community.", } ```
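Since the generator checkpoint is tagged for fill-mask, it can be exercised directly with the widget example from the card; a minimal sketch (not part of the original card):

```python
from transformers import pipeline

# Masked-token prediction with the gaELECTRA generator checkpoint.
fill_mask = pipeline("fill-mask", model="DCU-NLP/electra-base-irish-cased-generator-v1")

for prediction in fill_mask("Ceoltóir [MASK] ab ea Johnny Cash."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```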
DJSammy/bert-base-danish-uncased_BotXO-ai
[ "pytorch", "jax", "da", "dataset:common_crawl", "dataset:wikipedia", "transformers", "bert", "masked-lm", "license:cc-by-4.0", "fill-mask" ]
fill-mask
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- language: da tags: - bert - masked-lm license: cc-by-4.0 datasets: - common_crawl - wikipedia pipeline_tag: fill-mask widget: - text: "København er [MASK] i Danmark." --- # Danish BERT (uncased) model [BotXO.ai](https://www.botxo.ai/) developed this model. For data and training details see their [GitHub repository](https://github.com/botxo/nordic_bert). The original model was trained in TensorFlow then I converted it to Pytorch using [transformers-cli](https://huggingface.co/transformers/converting_tensorflow_models.html?highlight=cli). For TensorFlow version download here: https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1 ## Architecture ```python from transformers import AutoModelForPreTraining model = AutoModelForPreTraining.from_pretrained("DJSammy/bert-base-danish-uncased_BotXO,ai") params = list(model.named_parameters()) print('danish_bert_uncased_v2 has {:} different named parameters.\n'.format(len(params))) print('==== Embedding Layer ====\n') for p in params[0:5]: print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size())))) print('\n==== First Transformer ====\n') for p in params[5:21]: print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size())))) print('\n==== Last Transformer ====\n') for p in params[181:197]: print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size())))) print('\n==== Output Layer ====\n') for p in params[197:]: print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size())))) # danish_bert_uncased_v2 has 206 different named parameters. # ==== Embedding Layer ==== # bert.embeddings.word_embeddings.weight (32000, 768) # bert.embeddings.position_embeddings.weight (512, 768) # bert.embeddings.token_type_embeddings.weight (2, 768) # bert.embeddings.LayerNorm.weight (768,) # bert.embeddings.LayerNorm.bias (768,) # ==== First Transformer ==== # bert.encoder.layer.0.attention.self.query.weight (768, 768) # bert.encoder.layer.0.attention.self.query.bias (768,) # bert.encoder.layer.0.attention.self.key.weight (768, 768) # bert.encoder.layer.0.attention.self.key.bias (768,) # bert.encoder.layer.0.attention.self.value.weight (768, 768) # bert.encoder.layer.0.attention.self.value.bias (768,) # bert.encoder.layer.0.attention.output.dense.weight (768, 768) # bert.encoder.layer.0.attention.output.dense.bias (768,) # bert.encoder.layer.0.attention.output.LayerNorm.weight (768,) # bert.encoder.layer.0.attention.output.LayerNorm.bias (768,) # bert.encoder.layer.0.intermediate.dense.weight (3072, 768) # bert.encoder.layer.0.intermediate.dense.bias (3072,) # bert.encoder.layer.0.output.dense.weight (768, 3072) # bert.encoder.layer.0.output.dense.bias (768,) # bert.encoder.layer.0.output.LayerNorm.weight (768,) # bert.encoder.layer.0.output.LayerNorm.bias (768,) # ==== Last Transformer ==== # bert.encoder.layer.11.attention.self.query.weight (768, 768) # bert.encoder.layer.11.attention.self.query.bias (768,) # bert.encoder.layer.11.attention.self.key.weight (768, 768) # bert.encoder.layer.11.attention.self.key.bias (768,) # bert.encoder.layer.11.attention.self.value.weight (768, 768) # bert.encoder.layer.11.attention.self.value.bias (768,) # bert.encoder.layer.11.attention.output.dense.weight (768, 768) # bert.encoder.layer.11.attention.output.dense.bias (768,) # bert.encoder.layer.11.attention.output.LayerNorm.weight (768,) # bert.encoder.layer.11.attention.output.LayerNorm.bias (768,) # bert.encoder.layer.11.intermediate.dense.weight (3072, 768) # bert.encoder.layer.11.intermediate.dense.bias (3072,) # bert.encoder.layer.11.output.dense.weight (768, 3072) # 
bert.encoder.layer.11.output.dense.bias (768,) # bert.encoder.layer.11.output.LayerNorm.weight (768,) # bert.encoder.layer.11.output.LayerNorm.bias (768,) # ==== Output Layer ==== # bert.pooler.dense.weight (768, 768) # bert.pooler.dense.bias (768,) # cls.predictions.bias (32000,) # cls.predictions.transform.dense.weight (768, 768) # cls.predictions.transform.dense.bias (768,) # cls.predictions.transform.LayerNorm.weight (768,) # cls.predictions.transform.LayerNorm.bias (768,) # cls.seq_relationship.weight (2, 768) # cls.seq_relationship.bias (2,) ``` ## Example Pipeline ```python from transformers import pipeline unmasker = pipeline('fill-mask', model='DJSammy/bert-base-danish-uncased_BotXO,ai') unmasker('København er [MASK] i Danmark.') # Copenhagen is the [MASK] of Denmark. # => # [{'score': 0.788068950176239, # 'sequence': '[CLS] københavn er hovedstad i danmark. [SEP]', # 'token': 12610, # 'token_str': 'hovedstad'}, # {'score': 0.07606703042984009, # 'sequence': '[CLS] københavn er hovedstaden i danmark. [SEP]', # 'token': 8108, # 'token_str': 'hovedstaden'}, # {'score': 0.04299738258123398, # 'sequence': '[CLS] københavn er metropol i danmark. [SEP]', # 'token': 23305, # 'token_str': 'metropol'}, # {'score': 0.008163209073245525, # 'sequence': '[CLS] københavn er ikke i danmark. [SEP]', # 'token': 89, # 'token_str': 'ikke'}, # {'score': 0.006238455418497324, # 'sequence': '[CLS] københavn er ogsa i danmark. [SEP]', # 'token': 25253, # 'token_str': 'ogsa'}] ```
DSI/human-directed-sentiment
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
# Human-Directed Sentiment Analysis in Arabic A supervised training procedure to classify human-directed sentiment in a text. We define human-directed sentiment as the polarity of one user towards a second person who is involved with them in a discussion.
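The card provides no usage snippet. A minimal, hedged sketch for running the classifier is shown below; the example sentence is illustrative and the returned label names depend on the model's config, which the card does not document.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="DSI/human-directed-sentiment")

# Illustrative Arabic sentence (not from the card).
result = classifier("أنت شخص رائع وأقدر مساعدتك")
print(result)  # e.g. [{'label': ..., 'score': ...}] — label names come from the model config
```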
DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support
[ "pytorch", "jax", "bert", "text-classification", "multilingual", "nl", "fr", "en", "arxiv:2104.09947", "transformers", "Tweets", "Sentiment analysis" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- language: - multilingual - nl - fr - en tags: - Tweets - Sentiment analysis widget: - text: "I really wish I could leave my house after midnight, this makes no sense!" --- # Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT [Blog post »](https://people.cs.kuleuven.be/~pieter.delobelle/attitudes-towards-covid-19-measures/?utm_source=huggingface&utm_medium=social&utm_campaign=corona_tweets) · [paper »](http://arxiv.org/abs/2104.09947) This model can be used to determine whether or not a tweet expresses support for a curfew. The model was trained on manually labeled tweets from Belgium in Dutch, French and English. We categorized several months' worth of these Tweets by topic (government COVID measure) and the opinion expressed. Below is a timeline of the relative number of Tweets on the curfew topic (middle) and the fraction of those Tweets that find the curfew too strict, too loose, or a suitable measure (bottom), with the number of daily cases in Belgium to give context on the pandemic situation (top). ![chart.png](https://github.com/iPieter/bert-corona-tweets/raw/master/chart.png) The models used in this paper are available on Hugging Face: - https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support - https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-topics
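A minimal classification sketch (not part of the original card) that runs the widget example through the model; the label names returned come from the model's config.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support",
)

tweet = "I really wish I could leave my house after midnight, this makes no sense!"
print(classifier(tweet))  # label names (e.g. support / no support) come from the model config
```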
DTAI-KULeuven/mbert-corona-tweets-belgium-topics
[ "pytorch", "jax", "bert", "text-classification", "multilingual", "nl", "fr", "en", "arxiv:2104.09947", "transformers", "Dutch", "French", "English", "Tweets", "Topic classification" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
167
null
--- language: - multilingual - nl - fr - en tags: - Dutch - French - English - Tweets - Topic classification widget: - text: "I really can't wait for this lockdown to be over and go back to waking up early." --- # Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT [Blog post »](https://people.cs.kuleuven.be/~pieter.delobelle/attitudes-towards-covid-19-measures/?utm_source=huggingface&utm_medium=social&utm_campaign=corona_tweets) · [paper »](http://arxiv.org/abs/2104.09947) This model predicts which government COVID measure (topic) a tweet discusses. We categorized several months' worth of Belgian tweets (in Dutch, French and English) by topic (government COVID measure) and the opinion expressed. Below is a timeline of the relative number of Tweets on the curfew topic (middle) and the fraction of those Tweets that find the curfew too strict, too loose, or a suitable measure (bottom), with the number of daily cases in Belgium to give context on the pandemic situation (top). ![chart.png](https://github.com/iPieter/bert-corona-tweets/raw/master/chart.png) The models used in this paper are available on Hugging Face: - https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support - https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-topics
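A minimal sketch (not part of the original card) for classifying a batch of tweets; the second tweet is an illustrative Dutch example, and the topic labels returned are whatever categories the model's config defines.

```python
from transformers import pipeline

topic_classifier = pipeline(
    "text-classification",
    model="DTAI-KULeuven/mbert-corona-tweets-belgium-topics",
)

# Illustrative tweets in English and Dutch (not from the card's training data).
tweets = [
    "I really can't wait for this lockdown to be over and go back to waking up early.",
    "De avondklok is echt niet meer vol te houden.",
]
for tweet, prediction in zip(tweets, topic_classifier(tweets)):
    print(prediction["label"], "-", tweet)
```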
DTAI-KULeuven/robbertje-1-gb-bort
[ "pytorch", "roberta", "fill-mask", "nl", "dataset:oscar", "dataset:oscar (NL)", "dataset:dbrd", "dataset:lassy-ud", "dataset:europarl-mono", "dataset:conll2002", "arxiv:2101.05716", "transformers", "Dutch", "Flemish", "RoBERTa", "RobBERT", "RobBERTje", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
"2021-07-08T12:37:59Z"
--- language: "nl" thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png" tags: - Dutch - Flemish - RoBERTa - RobBERT - RobBERTje license: mit datasets: - oscar - oscar (NL) - dbrd - lassy-ud - europarl-mono - conll2002 widget: - text: "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven." --- <p align="center"> <img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch BERT-based models" width="75%"> </p> # About RobBERTje RobBERTje is a collection of distilled models based on [RobBERT](http://github.com/iPieter/robbert). There are multiple models with different sizes and different training settings, which you can choose for your use-case. We are also continuously working on releasing better-performing models, so watch [the repository](http://github.com/iPieter/robbertje) for updates. # News - **February 21, 2022**: Our paper about RobBERTje has been published in [volume 11 of CLIN journal](https://www.clinjournal.org/clinj/article/view/131)! - **July 2, 2021**: Publicly released 4 RobBERTje models. - **May 12, 2021**: RobBERTje was accepted at [CLIN31](https://www.clin31.ugent.be) for an oral presentation! # The models | Model | Description | Parameters | Training size | Huggingface id | |--------------|-------------|------------------|-------------------|------------------------------------------------------------------------------------| | Non-shuffled | Trained on the non-shuffled variant of the oscar corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-non-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled) | | Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-shuffled) | | Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-merged](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-merged) | | BORT | A smaller version with 8 attention heads instead of 12 and 4 layers instead of 6 (and 12 for RobBERT). | 46 M | 1 GB | this model | # Results ## Intrinsic results We calculated the _pseudo perplexity_ (PPPL) from [cite](), which is a built-in metric in our distillation library. This metric gives an indication of how well the model captures the input distribution. | Model | PPPL | |-------------------|-----------| | RobBERT (teacher) | 7.76 | | Non-shuffled | 12.95 | | Shuffled | 18.74 | | Merged (p=0.5) | 17.10 | | BORT | 26.44 | ## Extrinsic results We also evaluated our models on sereral downstream tasks, just like the teacher model RobBERT. Since that evaluation, a [Dutch NLI task named SICK-NL](https://arxiv.org/abs/2101.05716) was also released and we evaluated our models with it as well. | Model | DBRD | DIE-DAT | NER | POS |SICK-NL | |------------------|-----------|-----------|-----------|-----------|----------| | RobBERT (teacher)|94.4 | 99.2 |89.1 |96.4 | 84.2 | | Non-shuffled |90.2 | 98.4 |82.9 |95.5 | 83.4 | | Shuffled |92.5 | 98.2 |82.7 |95.6 | 83.4 | | Merged (p=0.5) |92.9 | 96.5 |81.8 |95.2 | 82.8 | | BORT |89.6 | 92.2 |79.7 |94.3 | 81.0 |
DTAI-KULeuven/robbertje-1-gb-merged
[ "pytorch", "roberta", "fill-mask", "nl", "dataset:oscar", "dataset:oscar (NL)", "dataset:dbrd", "dataset:lassy-ud", "dataset:europarl-mono", "dataset:conll2002", "arxiv:2101.05716", "transformers", "Dutch", "Flemish", "RoBERTa", "RobBERT", "RobBERTje", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
"2021-07-08T11:47:52Z"
--- language: "nl" thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png" tags: - Dutch - Flemish - RoBERTa - RobBERT - RobBERTje license: mit datasets: - oscar - oscar (NL) - dbrd - lassy-ud - europarl-mono - conll2002 widget: - text: "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven." --- <p align="center"> <img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch BERT-based models" width="75%"> </p> # About RobBERTje RobBERTje is a collection of distilled models based on [RobBERT](http://github.com/iPieter/robbert). There are multiple models with different sizes and different training settings, which you can choose for your use-case. We are also continuously working on releasing better-performing models, so watch [the repository](http://github.com/iPieter/robbertje) for updates. # News - **February 21, 2022**: Our paper about RobBERTje has been published in [volume 11 of CLIN journal](https://www.clinjournal.org/clinj/article/view/131)! - **July 2, 2021**: Publicly released 4 RobBERTje models. - **May 12, 2021**: RobBERTje was accepted at [CLIN31](https://www.clin31.ugent.be) for an oral presentation! # The models | Model | Description | Parameters | Training size | Huggingface id | |--------------|-------------|------------------|-------------------|------------------------------------------------------------------------------------| | Non-shuffled | Trained on the non-shuffled variant of the oscar corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-non-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled) | | Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-shuffled) | | Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | this model | | BORT | A smaller version with 8 attention heads instead of 12 and 4 layers instead of 6 (and 12 for RobBERT). | 46 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-bort](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-bort) | # Results ## Intrinsic results We calculated the _pseudo perplexity_ (PPPL) from [cite](), which is a built-in metric in our distillation library. This metric gives an indication of how well the model captures the input distribution. | Model | PPPL | |-------------------|-----------| | RobBERT (teacher) | 7.76 | | Non-shuffled | 12.95 | | Shuffled | 18.74 | | Merged (p=0.5) | 17.10 | | BORT | 26.44 | ## Extrinsic results We also evaluated our models on sereral downstream tasks, just like the teacher model RobBERT. Since that evaluation, a [Dutch NLI task named SICK-NL](https://arxiv.org/abs/2101.05716) was also released and we evaluated our models with it as well. | Model | DBRD | DIE-DAT | NER | POS |SICK-NL | |------------------|-----------|-----------|-----------|-----------|----------| | RobBERT (teacher)|94.4 | 99.2 |89.1 |96.4 | 84.2 | | Non-shuffled |90.2 | 98.4 |82.9 |95.5 | 83.4 | | Shuffled |92.5 | 98.2 |82.7 |95.6 | 83.4 | | Merged (p=0.5) |92.9 | 96.5 |81.8 |95.2 | 82.8 | | BORT |89.6 | 92.2 |79.7 |94.3 | 81.0 |
DTAI-KULeuven/robbertje-1-gb-non-shuffled
[ "pytorch", "roberta", "fill-mask", "nl", "dataset:oscar", "dataset:dbrd", "dataset:lassy-ud", "dataset:europarl-mono", "dataset:conll2002", "arxiv:2101.05716", "transformers", "Dutch", "Flemish", "RoBERTa", "RobBERT", "RobBERTje", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
53
"2021-07-07T08:36:13Z"
--- language: "nl" thumbnail: "https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" tags: - Dutch - Flemish - RoBERTa - RobBERT - RobBERTje license: mit datasets: - oscar - dbrd - lassy-ud - europarl-mono - conll2002 widget: - text: "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven." --- <p align="center"> <img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch BERT-based models" width="75%"> </p> # About RobBERTje RobBERTje is a collection of distilled models based on [RobBERT](http://github.com/iPieter/robbert). There are multiple models with different sizes and different training settings, which you can choose for your use-case. We are also continuously working on releasing better-performing models, so watch [the repository](http://github.com/iPieter/robbertje) for updates. # News - **February 21, 2022**: Our paper about RobBERTje has been published in [volume 11 of CLIN journal](https://www.clinjournal.org/clinj/article/view/131)! - **July 2, 2021**: Publicly released 4 RobBERTje models. - **May 12, 2021**: RobBERTje was accepted at [CLIN31](https://www.clin31.ugent.be) for an oral presentation! # The models | Model | Description | Parameters | Training size | Huggingface id | |--------------|-------------|------------------|-------------------|------------------------------------------------------------------------------------| | Non-shuffled | Trained on the non-shuffled variant of the oscar corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | this model | | Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-shuffled) | | Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-merged](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-merged) | | BORT | A smaller version with 8 attention heads instead of 12 and 4 layers instead of 6 (and 12 for RobBERT). | 46 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-bort](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-bort) | # Results ## Intrinsic results We calculated the _pseudo perplexity_ (PPPL) from [cite](), which is a built-in metric in our distillation library. This metric gives an indication of how well the model captures the input distribution. | Model | PPPL | |-------------------|-----------| | RobBERT (teacher) | 7.76 | | Non-shuffled | 12.95 | | Shuffled | 18.74 | | Merged (p=0.5) | 17.10 | | BORT | 26.44 | ## Extrinsic results We also evaluated our models on sereral downstream tasks, just like the teacher model RobBERT. Since that evaluation, a [Dutch NLI task named SICK-NL](https://arxiv.org/abs/2101.05716) was also released and we evaluated our models with it as well. | Model | DBRD | DIE-DAT | NER | POS |SICK-NL | |------------------|-----------|-----------|-----------|-----------|----------| | RobBERT (teacher)|94.4 | 99.2 |89.1 |96.4 | 84.2 | | Non-shuffled |90.2 | 98.4 |82.9 |95.5 | 83.4 | | Shuffled |92.5 | 98.2 |82.7 |95.6 | 83.4 | | Merged (p=0.5) |92.9 | 96.5 |81.8 |95.2 | 82.8 | | BORT |89.6 | 92.2 |79.7 |94.3 | 81.0 |
DTAI-KULeuven/robbertje-1-gb-shuffled
[ "pytorch", "roberta", "fill-mask", "nl", "dataset:oscar", "dataset:oscar (NL)", "dataset:dbrd", "dataset:lassy-ud", "dataset:europarl-mono", "dataset:conll2002", "arxiv:2101.05716", "transformers", "Dutch", "Flemish", "RoBERTa", "RobBERT", "RobBERTje", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
"2021-07-07T13:31:00Z"
--- language: "nl" thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png" tags: - Dutch - Flemish - RoBERTa - RobBERT - RobBERTje license: mit datasets: - oscar - oscar (NL) - dbrd - lassy-ud - europarl-mono - conll2002 widget: - text: "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven." --- <p align="center"> <img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch BERT-based models" width="75%"> </p> # About RobBERTje RobBERTje is a collection of distilled models based on [RobBERT](http://github.com/iPieter/robbert). There are multiple models with different sizes and different training settings, which you can choose for your use-case. We are also continuously working on releasing better-performing models, so watch [the repository](http://github.com/iPieter/robbertje) for updates. # News - **February 21, 2022**: Our paper about RobBERTje has been published in [volume 11 of CLIN journal](https://www.clinjournal.org/clinj/article/view/131)! - **July 2, 2021**: Publicly released 4 RobBERTje models. - **May 12, 2021**: RobBERTje was accepted at [CLIN31](https://www.clin31.ugent.be) for an oral presentation! # The models | Model | Description | Parameters | Training size | Huggingface id | |--------------|-------------|------------------|-------------------|------------------------------------------------------------------------------------| | Non-shuffled | Trained on the non-shuffled variant of the oscar corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-non-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled) | | Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | this model | | Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-merged](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-merged) | | BORT | A smaller version with 8 attention heads instead of 12 and 4 layers instead of 6 (and 12 for RobBERT). | 46 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-bort](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-bort) | # Results ## Intrinsic results We calculated the _pseudo perplexity_ (PPPL) from [cite](), which is a built-in metric in our distillation library. This metric gives an indication of how well the model captures the input distribution. | Model | PPPL | |-------------------|-----------| | RobBERT (teacher) | 7.76 | | Non-shuffled | 12.95 | | Shuffled | 18.74 | | Merged (p=0.5) | 17.10 | | BORT | 26.44 | ## Extrinsic results We also evaluated our models on sereral downstream tasks, just like the teacher model RobBERT. Since that evaluation, a [Dutch NLI task named SICK-NL](https://arxiv.org/abs/2101.05716) was also released and we evaluated our models with it as well. | Model | DBRD | DIE-DAT | NER | POS |SICK-NL | |------------------|-----------|-----------|-----------|-----------|----------| | RobBERT (teacher)|94.4 | 99.2 |89.1 |96.4 | 84.2 | | Non-shuffled |90.2 | 98.4 |82.9 |95.5 | 83.4 | | Shuffled |92.5 | 98.2 |82.7 |95.6 | 83.4 | | Merged (p=0.5) |92.9 | 96.5 |81.8 |95.2 | 82.8 | | BORT |89.6 | 92.2 |79.7 |94.3 | 81.0 |
alexandrainst/da-binary-emotion-classification-base
[ "pytorch", "tf", "safetensors", "bert", "text-classification", "da", "transformers", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,066
null
--- language: - da license: cc-by-sa-4.0 widget: - text: Der er et træ i haven. --- # Danish BERT for emotion detection The BERT Emotion model detects whether a Danish text is emotional or not. It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data. See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-emotion) for more details. Here is how to use the model: ```python from transformers import BertTokenizer, BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("alexandrainst/da-binary-emotion-classification-base") tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-binary-emotion-classification-base") ``` ## Training data The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
alexandrainst/da-emotion-classification-base
[ "pytorch", "tf", "bert", "text-classification", "da", "transformers", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
837
null
--- language: - da license: cc-by-sa-4.0 widget: - text: Jeg ejer en rød bil og det er en god bil. --- # Danish BERT for emotion classification The BERT Emotion model classifies a Danish text into one of the following classes: * Glæde/Sindsro * Tillid/Accept * Forventning/Interrese * Overasket/Målløs * Vrede/Irritation * Foragt/Modvilje * Sorg/trist * Frygt/Bekymret It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data. This model should be used after detecting whether the text contains emotion or not, using the binary [BERT Emotion model](https://huggingface.co/alexandrainst/da-binary-emotion-classification-base). See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-emotion) for more details. Here is how to use the model: ```python from transformers import BertTokenizer, BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("alexandrainst/da-emotion-classification-base") tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-emotion-classification-base") ``` ## Training data The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
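To map the model's output back to the emotion classes listed above, a hedged inference sketch (not part of the original card) is shown below; the exact label strings come from the model's config rather than from this card.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

model = BertForSequenceClassification.from_pretrained("alexandrainst/da-emotion-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-emotion-classification-base")

inputs = tokenizer("Jeg ejer en rød bil og det er en god bil.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1)[0]
predicted = probs.argmax().item()
# id2label holds the emotion class names as stored in the model config.
print(model.config.id2label[predicted], float(probs[predicted]))
```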
alexandrainst/da-hatespeech-classification-base
[ "pytorch", "tf", "safetensors", "bert", "text-classification", "da", "transformers", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
866
null
--- language: - da license: cc-by-sa-4.0 widget: - text: "Senile gamle idiot" --- # Danish BERT for hate speech classification The BERT HateSpeech model classifies offensive Danish text into 4 categories: * `Særlig opmærksomhed` (special attention, e.g. threat) * `Personangreb` (personal attack) * `Sprogbrug` (offensive language) * `Spam & indhold` (spam) This model is intended to be used after the [BERT HateSpeech detection model](https://huggingface.co/alexandrainst/da-hatespeech-detection-base). It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data. See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#bertdr) for more details. Here is how to use the model: ```python from transformers import BertTokenizer, BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-classification-base") tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-hatespeech-classification-base") ``` ## Training data The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
alexandrainst/da-hatespeech-detection-base
[ "pytorch", "tf", "safetensors", "bert", "text-classification", "da", "transformers", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,719
null
--- language: - da license: cc-by-sa-4.0 widget: - text: "Senile gamle idiot" --- # Danish BERT for hate speech (offensive language) detection The BERT HateSpeech model detects whether a Danish text is offensive or not. It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data. See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#bertdr) for more details. Here is how to use the model: ```python from transformers import BertTokenizer, BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-detection-base") tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-hatespeech-detection-base") ``` ## Training data The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
alexandrainst/da-ner-base
[ "pytorch", "tf", "bert", "token-classification", "da", "dataset:dane", "transformers", "license:cc-by-sa-4.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
78
null
--- language: - da license: cc-by-sa-4.0 datasets: - dane widget: - text: "Jens Peter Hansen kommer fra Danmark" --- # BERT fine-tuned for Named Entity Recognition in Danish The model tags tokens in Danish sentences with named-entity tags in BIO format (PER, ORG, LOC, MISC). The pretrained language model used for fine-tuning is the [Danish BERT](https://github.com/certainlyio/nordic_bert) by BotXO. See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/ner.html#bert) for more details. Here is how to use the model: ```python from transformers import BertTokenizer, BertForTokenClassification model = BertForTokenClassification.from_pretrained("alexandrainst/da-ner-base") tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-ner-base") ``` ## Training Data The model has been trained on the [DaNE](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dane) dataset.
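Beyond loading the model, a short inference sketch (not part of the original card) using the widget sentence; the aggregation setting is an illustrative choice, not a documented recommendation.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="alexandrainst/da-ner-base",
    aggregation_strategy="simple",  # merge B-/I- pieces into whole entities
)

for entity in ner("Jens Peter Hansen kommer fra Danmark"):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```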
alexandrainst/da-sentiment-base
[ "pytorch", "tf", "safetensors", "bert", "text-classification", "da", "arxiv:1910.09700", "transformers", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,432
null
--- language: - da license: cc-by-sa-4.0 widget: - text: Det er super godt --- # Model Card for Danish BERT Danish BERT Tone for sentiment polarity detection # Model Details ## Model Description The BERT Tone model detects sentiment polarity (positive, neutral or negative) in Danish texts. It has been finetuned on the pretrained Danish BERT model by BotXO. - **Developed by:** DaNLP - **Shared by [Optional]:** Hugging Face - **Model type:** Text Classification - **Language(s) (NLP):** Danish (da) - **License:** cc-by-sa-4.0 - **Related Models:** More information needed - **Parent Model:** BERT - **Resources for more information:** - [GitHub Repo](https://github.com/certainlyio/nordic_bert) - [Associated Documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-tone) # Uses ## Direct Use This model can be used for text classification ## Downstream Use [Optional] More information needed. ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data The data used for training come from the [Twitter Sentiment](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#twitsent) and [EuroParl sentiment 2](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#europarl-sentiment2) datasets. ## Training Procedure ### Preprocessing It has been finetuned on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO. ### Speeds, Sizes, Times More information needed. # Evaluation ## Testing Data, Factors & Metrics ### Testing Data More information needed. ### Factors ### Metrics F1 ## Results More information needed. # Model Examination More information needed. # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** More information needed. - **Hours used:** More information needed. - **Cloud Provider:** More information needed. - **Compute Region:** More information needed. - **Carbon Emitted:** More information needed. # Technical Specifications [optional] ## Model Architecture and Objective More information needed. ## Compute Infrastructure More information needed. ### Hardware More information needed. ### Software More information needed. # Citation **BibTeX:** More information needed. **APA:** More information needed. # Glossary [optional] More information needed. # More Information [optional] More information needed. # Model Card Authors [optional] DaNLP in collaboration with Ezi Ozoani and the Hugging Face team # Model Card Contact More information needed. # How to Get Started with the Model Use the code below to get started with the model. 
<details> <summary> Click to expand </summary> ```python from transformers import BertTokenizer, BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("alexandrainst/da-sentiment-base") tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-sentiment-base") ``` </details>
alexandrainst/da-subjectivivity-classification-base
[ "pytorch", "tf", "safetensors", "bert", "text-classification", "da", "dataset:DDSC/twitter-sent", "dataset:DDSC/europarl", "transformers", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
846
null
--- language: - da license: cc-by-sa-4.0 datasets: - DDSC/twitter-sent - DDSC/europarl widget: - text: Jeg tror alligvel, det bliver godt --- # Danish BERT Tone for the detection of subjectivity/objectivity The BERT Tone model detects whether a text (in Danish) is subjective or objective. The model is based on the finetuning of the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO. See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-tone) for more details. Here is how to use the model: ```python from transformers import BertTokenizer, BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("alexandrainst/da-subjectivivity-classification-base") tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-subjectivivity-classification-base") ``` ## Training data The data used for training come from the [Twitter Sentiment](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#twitsent) and [EuroParl sentiment 2](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#europarl-sentiment2) datasets.
alexandrainst/da-hatespeech-detection-small
[ "pytorch", "electra", "text-classification", "da", "transformers", "license:cc-by-4.0" ]
text-classification
{ "architectures": [ "ElectraForSequenceClassification" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,506
null
--- language: - da license: cc-by-4.0 widget: - text: "Senile gamle idiot" ---

# Danish ELECTRA for hate speech (offensive language) detection

The ELECTRA Offensive model detects whether a Danish text is offensive or not.
It is based on the pretrained [Danish Ælæctra](https://huggingface.co/Maltehb/aelaectra-danish-electra-small-cased) model.

See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#electra) for more details.

Here is how to use the model:

```python
from transformers import ElectraTokenizer, ElectraForSequenceClassification

model = ElectraForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-detection-small")
tokenizer = ElectraTokenizer.from_pretrained("alexandrainst/da-hatespeech-detection-small")
```

## Training data

The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
alexandrainst/da-ned-base
[ "pytorch", "tf", "xlm-roberta", "text-classification", "da", "transformers", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- language: - da license: cc-by-sa-4.0 --- # XLM-Roberta fine-tuned for Named Entity Disambiguation Given a sentence and a knowledge graph context, the model detects whether a specific entity (represented by the knowledge graph context) is mentioned in the sentence (binary classification). The base language model used is the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). Here is how to use the model: ```python from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification model = XLMRobertaForSequenceClassification.from_pretrained("alexandrainst/da-ned-base") tokenizer = XLMRobertaTokenizer.from_pretrained("alexandrainst/da-ned-base") ``` The tokenizer takes 2 strings has input: the sentence and the knowledge graph (KG) context. Here is an example: ```python sentence = "Karen Blixen vendte tilbage til Danmark, hvor hun boede resten af sit liv på Rungstedlund, som hun arvede efter sin mor i 1939" kg_context = "udmærkelser modtaget Kritikerprisen udmærkelser modtaget Tagea Brandts Rejselegat udmærkelser modtaget Ingenio et arti udmærkelser modtaget Holbergmedaljen udmærkelser modtaget De Gyldne Laurbær mor Ingeborg Dinesen ægtefælle Bror von Blixen-Finecke køn kvinde Commons-kategori Karen Blixen LCAuth no95003722 VIAF 90663542 VIAF 121643918 GND-identifikator 118637878 ISNI 0000 0001 2096 6265 ISNI 0000 0003 6863 4408 ISNI 0000 0001 1891 0457 fødested Rungstedlund fødested Rungsted dødssted Rungstedlund dødssted København statsborgerskab Danmark NDL-nummer 00433530 dødsdato +1962-09-07T00:00:00Z dødsdato +1962-01-01T00:00:00Z fødselsdato +1885-04-17T00:00:00Z fødselsdato +1885-01-01T00:00:00Z AUT NKC jn20000600905 AUT NKC jo2015880827 AUT NKC xx0196181 emnets hovedkategori Kategori:Karen Blixen tilfælde af menneske billede Karen Blixen cropped from larger original.jpg IMDb-identifikationsnummer nm0227598 Freebase-ID /m/04ymd8w BNF 118857710 beskæftigelse skribent beskæftigelse selvbiograf beskæftigelse novelleforfatter ..." ``` A KG context, for a specific entity, can be generated from its Wikidata page. In the previous example, the KG context is a string representation of the Wikidata page of [Karen Blixen (QID=Q182804)](https://www.wikidata.org/wiki/Q182804). See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/ned.html#xlmr) for more details about how to generate a KG context. ## Training Data The model has been trained on the [DaNED](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#daned) and [DaWikiNED](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dawikined) datasets.
Daivakai/DialoGPT-small-saitama
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - conversational --- # Saitama DialoGPT Model
DanL/scientific-challenges-and-directions
[ "pytorch", "bert", "text-classification", "en", "dataset:DanL/scientific-challenges-and-directions-dataset", "arxiv:2108.13751", "transformers", "generated_from_trainer" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
134
"2022-01-09T15:13:44Z"
--- tags: - generated_from_trainer - text-classification language: - en datasets: - DanL/scientific-challenges-and-directions-dataset widget: - text: "severe atypical cases of pneumonia emerged and quickly spread worldwide." example_title: "challenge" - text: "we speculate that studying IL-6 will be beneficial." example_title: "direction" - text: "in future studies, both PRRs should be tested as the cause for multiple deaths." example_title: "both" - text: "IbMADS1-transformed potatoes exhibited tuber morphogenesis in the fibrous roots." example_title: "neither" metrics: - precision - recall - f1 model-index: - name: scientific-challenges-and-directions results: [] --- # scientific-challenges-and-directions We present a novel resource to help scientists and medical professionals discover challenges and potential directions across scientific literature, focusing on a broad corpus pertaining to the COVID-19 pandemic and related historical research. At a high level, the _challenges_ and _directions_ are defined as follows: * **Challenge**: A sentence mentioning a problem, difficulty, flaw, limitation, failure, lack of clarity, or knowledge gap. * **Research direction**: A sentence mentioning suggestions or needs for further research, hypotheses, speculations, indications or hints that an issue is worthy of exploration. * This model here is described in our paper: [A Search Engine for Discovery of Scientific Challenges and Directions](https://arxiv.org/abs/2108.13751) (though we've upgraded the infrastructure since the paper was released - there are slight differences in the results). * Our dataset can be found [here](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset). * Please cite our paper if you use our datasets or models in your project. See the [BibTeX](#citation). * Feel free to [email us](#contact-us). * Also, check out [our search engine](https://challenges.apps.allenai.org/), as an example application. ## Model description This model is a fine-tuned version of [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [scientific-challenges-and-directions-dataset](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset), designed for multi-label text classification. ## Training and evaluation data The scientific-challenges-and-directions model is trained based on a dataset that is a collection of 2894 sentences and their surrounding contexts, from 1786 full-text papers in the CORD-19 corpus, labeled for classification of challenges and directions by expert annotators with biomedical and bioNLP backgrounds. For full details on the train/test/split of the data see section 3.1 in our [paper](https://arxiv.org/abs/2108.13751) ## Example notebook We include an example notebook that uses the model for inference in our [repo](https://github.com/Dan-La/scientific-challenges-and-directions). See `Inference_Notebook.ipynb`. A training notebook is also included. 
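For a quick look at inference outside the notebook, here is a hedged, minimal sketch (untested; it classifies a single sentence in isolation, whereas the released notebook may also use surrounding context, and label names are taken from the model config rather than hardcoded):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "DanL/scientific-challenges-and-directions"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence = "we speculate that studying IL-6 will be beneficial."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label classification: score each label independently with a sigmoid.
probs = torch.sigmoid(logits)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(float(p), 3))
```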
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning rate: 2e-05
- train batch size: 8
- eval batch size: 4
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr scheduler type: linear
- lr scheduler warmup steps: 500
- num epochs: 30

### Training results

The model achieves the following results on the test set:
- Precision Challenge: 0.768719
- Recall Challenge: 0.780405
- F1 Challenge: 0.774518
- Precision Direction: 0.758112
- Recall Direction: 0.774096
- F1 Direction: 0.766021
- Precision (micro avg. on both labels): 0.764894
- Recall (micro avg. on both labels): 0.778139
- F1 (micro avg. on both labels): 0.771459

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3

## Citation

If using our dataset and models, please cite:

```
@misc{lahav2021search,
      title={A Search Engine for Discovery of Scientific Challenges and Directions},
      author={Dan Lahav and Jon Saad Falcon and Bailey Kuehl and Sophie Johnson and Sravanthi Parasa and Noam Shomron and Duen Horng Chau and Diyi Yang and Eric Horvitz and Daniel S. Weld and Tom Hope},
      year={2021},
      eprint={2108.13751},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Contact us

Please don't hesitate to reach out. **Email:** `lahav@mail.tau.ac.il`, `tomh@allenai.org`.
Darkrider/covidbert_medmarco
[ "pytorch", "jax", "bert", "text-classification", "arxiv:2010.05987", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
35
"2021-03-07T15:23:21Z"
Fine-tuned CovidBERT on the Med-MARCO dataset for passage ranking

# CovidBERT-MedMarco

This is the model **CovidBERT** trained by DeepSet on AllenAI's [CORD19 Dataset](https://pages.semanticscholar.org/coronavirus-research) of scientific articles about coronaviruses.

The model uses the original BERT wordpiece vocabulary and was subsequently fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [1] using the **average pooling strategy** and a **softmax loss**.

It was further fine-tuned on the Med-MARCO dataset. MacAvaney et al., in their [paper](https://arxiv.org/abs/2010.05987) titled "SLEDGE-Z: A Zero-Shot Baseline for COVID-19 Literature Search", used MedSyn, a lexicon of layperson and expert terminology for various medical conditions, to filter for medical questions. One could also replace it with UMLS ontologies, but the advantage of MedSyn is that its terms are closer to everyday conversational language than to terms drawn from scientific literature.

Parameter details for the original training on CORD-19 are available on [DeepSet's MLFlow](https://public-mlflow.deepset.ai/#/experiments/2/runs/ba27d00c30044ef6a33b1d307b4a6cba)

**Base model**: `deepset/covid_bert_base` from HuggingFace's `AutoModel`.
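A rough sketch of how a model like this can be used for passage ranking via sentence embeddings (untested; it assumes the checkpoint loads with `AutoModel` and simply mean-pools the last hidden states, mirroring the average-pooling strategy described above):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Darkrider/covidbert_medmarco"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)         # zero out padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean over real tokens

query = embed(["What are the symptoms of COVID-19?"])
passages = embed([
    "Fever, cough and fatigue are commonly reported symptoms.",
    "The stock market fell sharply on Monday.",
])
print(torch.nn.functional.cosine_similarity(query, passages))  # higher = more relevant
```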
Darkrider/covidbert_mednli
[ "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
# CovidBERT-MedNLI This is the model **CovidBERT** trained by DeepSet on AllenAI's [CORD19 Dataset](https://pages.semanticscholar.org/coronavirus-research) of scientific articles about coronaviruses. The model uses the original BERT wordpiece vocabulary and was subsequently fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [1] using the **average pooling strategy** and a **softmax loss**. It is further fine-tuned on both MedNLI datasets available at Physionet. [ACL-BIONLP 2019](https://physionet.org/content/mednli-bionlp19/1.0.1/) [MedNLI from MIMIC](https://physionet.org/content/mednli/1.0.0/) Parameter details for the original training on CORD-19 are available on [DeepSet's MLFlow](https://public-mlflow.deepset.ai/#/experiments/2/runs/ba27d00c30044ef6a33b1d307b4a6cba) **Base model**: `deepset/covid_bert_base` from HuggingFace's `AutoModel`.
DarshanDeshpande/marathi-distilbert
[ "pytorch", "tf", "distilbert", "fill-mask", "mr", "dataset:Oscar Corpus, News, Stories", "arxiv:1910.01108", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- language: - mr tags: - fill-mask license: apache-2.0 datasets: - Oscar Corpus, News, Stories widget: - text: "हा खरोखर चांगला [MASK] आहे." --- # Marathi DistilBERT ## Model description This model is an adaptation of DistilBERT (Victor Sanh et al., 2019) for Marathi language. This version of Marathi-DistilBERT is trained from scratch on approximately 11.2 million sentences. ``` DISCLAIMER This model has not been thoroughly tested and may contain biased opinions or inappropriate language. User discretion is advised ``` ## Training data The training data has been extracted from a variety of sources, mainly including: 1. Oscar Corpus 2. Marathi Newspapers 3. Marathi storybooks and articles The data is cleaned by removing all languages other than Marathi, while preserving common punctuations ## Training procedure The model is trained from scratch using an Adam optimizer with a learning rate of 1e-4 and default β1 and β2 values of 0.9 and 0.999 respectively with a total batch size of 256 on a v3-8 TPU and mask probability of 15%. ## Example ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="DarshanDeshpande/marathi-distilbert", tokenizer="DarshanDeshpande/marathi-distilbert", ) fill_mask("हा खरोखर चांगला [MASK] आहे.") ``` ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <h3>Authors </h3> <h5>1. Darshan Deshpande: <a href="https://github.com/DarshanDeshpande">GitHub</a>, <a href="https://www.linkedin.com/in/darshan-deshpande/">LinkedIn</a><h5> <h5>2. Harshavardhan Abichandani: <a href="https://github.com/Baras64">GitHub</a>, <a href="http://​www.linkedin.com/in/harsh-abhi">LinkedIn</a><h5>
Daryaflp/roberta-retrained_ru_covid
[ "pytorch", "tensorboard", "roberta", "fill-mask", "transformers", "generated_from_trainer", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - generated_from_trainer model-index: - name: roberta-retrained_ru_covid results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-retrained_ru_covid This model is a fine-tuned version of [blinoff/roberta-base-russian-v0](https://huggingface.co/blinoff/roberta-base-russian-v0) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8518 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
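The card does not include a usage snippet, so here is a hedged fill-mask sketch (the Russian example sentence is only an illustration and is not taken from the training data):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Daryaflp/roberta-retrained_ru_covid")

# RoBERTa-style checkpoints use "<mask>" as the mask token.
for prediction in fill_mask("Вакцина против <mask> уже доступна."):
    print(prediction["token_str"], round(prediction["score"], 3))
```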
Davlan/mt5_base_eng_yor_mt
[ "pytorch", "mt5", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
Hugging Face's logo --- language: - yo - en datasets: - JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) --- # mT5_base_eng_yor_mt ## Model description **mT5_base_yor_eng_mt** is a **machine translation** model from English language to Yorùbá language based on a fine-tuned mT5-base model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá. Specifically, this model is a *mT5_base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) ## Intended uses & limitations #### How to use You can use this model with Transformers *pipeline* for MT. ```python from transformers import MT5ForConditionalGeneration, T5Tokenizer model = MT5ForConditionalGeneration.from_pretrained("Davlan/mt5_base_eng_yor_mt") tokenizer = T5Tokenizer.from_pretrained("google/mt5-base") input_string = "Where are you?" inputs = tokenizer.encode(input_string, return_tensors="pt") generated_tokens = model.generate(inputs) results = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(results) ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on on JW300 corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (BLEU score) 9.82 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) ### BibTeX entry and citation info By David Adelani ``` ```
Davlan/mt5_base_yor_eng_mt
[ "pytorch", "mt5", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
Hugging Face's logo --- language: - yo - en datasets: - JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) --- # mT5_base_yor_eng_mt ## Model description **mT5_base_yor_eng_mt** is a **machine translation** model from Yorùbá language to English language based on a fine-tuned mT5-base model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English. Specifically, this model is a *mT5_base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) ## Intended uses & limitations #### How to use You can use this model with Transformers *pipeline* for MT. ```python from transformers import MT5ForConditionalGeneration, T5Tokenizer model = MT5ForConditionalGeneration.from_pretrained("Davlan/mt5_base_yor_eng_mt") tokenizer = T5Tokenizer.from_pretrained("google/mt5-base") input_string = "Akọni ajìjàgbara obìnrin tó sun àtìmalé torí owó orí" inputs = tokenizer.encode(input_string, return_tensors="pt") generated_tokens = model.generate(inputs) results = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(results) ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (BLEU score) 15.57 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) ### BibTeX entry and citation info By David Adelani ``` ```
Davlan/naija-twitter-sentiment-afriberta-large
[ "pytorch", "tf", "xlm-roberta", "text-classification", "arxiv:2201.08277", "transformers", "has_space" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
61
null
Hugging Face's logo --- language: - hau - ibo - pcm - yor - multilingual --- # naija-twitter-sentiment-afriberta-large ## Model description **naija-twitter-sentiment-afriberta-large** is the first multilingual twitter **sentiment classification** model for four (4) Nigerian languages (Hausa, Igbo, Nigerian Pidgin, and Yorùbá) based on a fine-tuned castorini/afriberta_large large model. It achieves the **state-of-the-art performance** for the twitter sentiment classification task trained on the [NaijaSenti corpus](https://github.com/hausanlp/NaijaSenti). The model has been trained to classify tweets into 3 sentiment classes: negative, neutral and positive Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of 4 Nigerian language datasets obtained from [NaijaSenti](https://github.com/hausanlp/NaijaSenti) dataset. ## Intended uses & limitations #### How to use You can use this model with Transformers for Sentiment Classification. ```python from transformers import AutoModelForSequenceClassification from transformers import AutoTokenizer import numpy as np from scipy.special import softmax MODEL = "Davlan/naija-twitter-sentiment-afriberta-large" tokenizer = AutoTokenizer.from_pretrained(MODEL) # PT model = AutoModelForSequenceClassification.from_pretrained(MODEL) text = "I like you" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) scores = output[0][0].detach().numpy() scores = softmax(scores) id2label = {0:"positive", 1:"neutral", 2:"negative"} ranking = np.argsort(scores) ranking = ranking[::-1] for i in range(scores.shape[0]): l = id2label[ranking[i]] s = scores[ranking[i]] print(f"{i+1}) {l} {np.round(float(s), 4)}") ``` #### Limitations and bias This model is limited by its training dataset and domain i.e Twitter. This may not generalize well for all use cases in different domains. ## Training procedure This model was trained on a single Nvidia RTX 2080 GPU with recommended hyperparameters from the [original NaijaSenti paper](https://arxiv.org/abs/2201.08277). ## Eval results on Test set (F-score), average over 5 runs. language|F1-score -|- hau |81.2 ibo |80.8 pcm |74.5 yor |80.4 ### BibTeX entry and citation info ``` @inproceedings{Muhammad2022NaijaSentiAN, title={NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis}, author={Shamsuddeen Hassan Muhammad and David Ifeoluwa Adelani and Sebastian Ruder and Ibrahim Said Ahmad and Idris Abdulmumin and Bello Shehu Bello and Monojit Choudhury and Chris C. Emezue and Saheed Salahudeen Abdullahi and Anuoluwapo Aremu and Alipio Jeorge and Pavel B. Brazdil}, year={2022} } ```
Davlan/xlm-roberta-base-finetuned-igbo
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
68
null
--- language: ig datasets: ---

# xlm-roberta-base-finetuned-igbo

## Model description

**xlm-roberta-base-finetuned-igbo** is an **Igbo RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Igbo language texts. It provides **better performance** than XLM-RoBERTa on named entity recognition datasets.

Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on an Igbo corpus.

## Intended uses & limitations

#### How to use

You can use this model with Transformers *pipeline* for masked token prediction.

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-igbo')
>>> unmasker("Reno Omokri na Gọọmentị <mask> enweghị ihe ha ga-eji hiwe ya bụ mmachi.")
```

#### Limitations and bias

This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.

## Training data

This model was fine-tuned on JW300 + OPUS CC-Align + [IGBO NLP Corpus](https://github.com/IgnatiusEzeani/IGBONLP) + [Igbo CC-100](http://data.statmt.org/cc-100/)

## Training procedure

This model was trained on a single NVIDIA V100 GPU

## Eval results on Test set (F-score, average over 5 runs)

Dataset| XLM-R F1 | ig_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 84.51 | 87.74

### BibTeX entry and citation info

By David Adelani

```
```
Davlan/xlm-roberta-base-finetuned-kinyarwanda
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
61
null
Hugging Face's logo --- language: rw datasets: --- # xlm-roberta-base-finetuned-kinyarwanda ## Model description **xlm-roberta-base-finetuned-kinyarwanda** is a **Kinyarwanda RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Kinyarwanda language texts. It provides **better performance** than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Kinyarwanda corpus. ## Intended uses & limitations #### How to use You can use this model with Transformers *pipeline* for masked token prediction. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-kinyarwanda') >>> unmasker("Twabonye ko igihe mu <mask> hazaba hari ikirango abantu bakunze") ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on JW300 + [KIRNEWS](https://github.com/Andrews2017/KINNEWS-and-KIRNEWS-Corpus) + [BBC Gahuza](https://www.bbc.com/gahuza) ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (F-score, average over 5 runs) Dataset| XLM-R F1 | rw_roberta F1 -|-|- [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 73.22 | 77.76 ### BibTeX entry and citation info By David Adelani ``` ```
Davlan/xlm-roberta-base-finetuned-swahili
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
40
"2021-05-25T09:23:37Z"
Hugging Face's logo --- language: sw datasets: --- # xlm-roberta-base-finetuned-swahili ## Model description **xlm-roberta-base-finetuned-swahili** is a **Swahili RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Swahili language texts. It provides **better performance** than the XLM-RoBERTa on text classification and named entity recognition datasets. Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Swahili corpus. ## Intended uses & limitations #### How to use You can use this model with Transformers *pipeline* for masked token prediction. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-swahili') >>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko <mask> kwamba hakuna uhalifu ulitendwa") [{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Ufaransa kwamba hakuna uhalifu ulitendwa', 'score': 0.5077782273292542, 'token': 190096, 'token_str': 'Ufaransa'}, {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa', 'score': 0.3657738268375397, 'token': 7270, 'token_str': 'Paris'}, {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Gabon kwamba hakuna uhalifu ulitendwa', 'score': 0.01592041552066803, 'token': 176392, 'token_str': 'Gabon'}, {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa', 'score': 0.010881908237934113, 'token': 9942, 'token_str': 'France'}, {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Marseille kwamba hakuna uhalifu ulitendwa', 'score': 0.009554869495332241, 'token': 185918, 'token_str': 'Marseille'}] ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on [Swahili CC-100](http://data.statmt.org/cc-100/) ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (F-score, average over 5 runs) Dataset| XLM-R F1 | sw_roberta F1 -|-|- [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.55 | 89.46 ### BibTeX entry and citation info By David Adelani ``` ```
Davlan/xlm-roberta-base-wikiann-ner
[ "pytorch", "tf", "xlm-roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
235
"2022-02-25T23:02:56Z"
--- language: - ar - as - bn - ca - en - es - eu - fr - gu - hi - id - ig - mr - pa - pt - sw - ur - vi - yo - zh - multilingual datasets: - wikiann ---

# xlm-roberta-base-wikiann-ner

## Model description

**xlm-roberta-base-wikiann-ner** is the first **Named Entity Recognition** model for 20 languages (Arabic, Assamese, Bengali, Catalan, English, Spanish, Basque, French, Gujarati, Hindi, Indonesian, Igbo, Marathi, Punjabi, Portuguese, Swahili, Urdu, Vietnamese, Yoruba and Chinese) based on a fine-tuned XLM-RoBERTa large model. It achieves **state-of-the-art performance** for the NER task. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER).

Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of language datasets obtained from the [WikiANN](https://huggingface.co/datasets/wikiann) dataset.

## Intended uses & limitations

#### How to use

You can use this model with Transformers *pipeline* for NER.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-wikiann-ner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-wikiann-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ìbọn ń ró kù kù gẹ́gẹ́ bí ọwọ́ ọ̀pọ̀ aráàlù ṣe tẹ ìbọn ní Kyiv láti dojú kọ Russia"

ner_results = nlp(example)
print(ner_results)
```

#### Limitations and bias

This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.

## Training data

This model was fine-tuned on 20 NER datasets (Arabic, Assamese, Bengali, Catalan, English, Spanish, Basque, French, Gujarati, Hindi, Indonesian, Igbo, Marathi, Punjabi, Portuguese, Swahili, Urdu, Vietnamese, Yoruba and Chinese) from the [wikiann](https://huggingface.co/datasets/wikiann) dataset.

The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:

Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location

### BibTeX entry and citation info

```
```
Davlan/xlm-roberta-large-masakhaner
[ "pytorch", "tf", "xlm-roberta", "token-classification", "arxiv:2103.11811", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,449
null
Hugging Face's logo --- language: - amh - hau - ibo - kin - lug - luo - pcm - swa - wol - yor - multilingual datasets: - masakhaner --- # xlm-roberta-large-masakhaner ## Model description **xlm-roberta-large-masakhaner** is the first **Named Entity Recognition** model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned XLM-RoBERTa large model. It achieves the **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER). Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. ## Intended uses & limitations #### How to use You can use this model with Transformers *pipeline* for NER. ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-large-masakhaner") model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-large-masakhaner") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria" ner_results = nlp(example) print(ner_results) ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on 10 African NER datasets (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: Abbreviation|Description -|- O|Outside of a named entity B-DATE |Beginning of a DATE entity right after another DATE entity I-DATE |DATE entity B-PER |Beginning of a person’s name right after another person’s name I-PER |Person’s name B-ORG |Beginning of an organisation right after another organisation I-ORG |Organisation B-LOC |Beginning of a location right after another location I-LOC |Location ## Training procedure This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus. 
## Eval results on Test set (F-score) language|F1-score -|- amh |75.76 hau |91.75 ibo |86.26 kin |76.38 lug |84.64 luo |80.65 pcm |89.55 swa |89.48 wol |70.70 yor |82.05 ### BibTeX entry and citation info ``` @article{adelani21tacl, title = {Masakha{NER}: Named Entity Recognition for African Languages}, author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei}, journal = {Transactions of the Association for Computational Linguistics (TACL)}, month = {}, url = {https://arxiv.org/abs/2103.11811}, year = {2021} } ```
DeadBeast/emoBERTTamil
[ "pytorch", "tensorboard", "bert", "text-classification", "dataset:tamilmixsentiment", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
35
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tamilmixsentiment metrics: - accuracy model_index: - name: emoBERTTamil results: - task: name: Text Classification type: text-classification dataset: name: tamilmixsentiment type: tamilmixsentiment args: default metric: name: Accuracy type: accuracy value: 0.671 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emoBERTTamil This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the tamilmixsentiment dataset. It achieves the following results on the evaluation set: - Loss: 0.9666 - Accuracy: 0.671 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1128 | 1.0 | 250 | 1.0290 | 0.672 | | 1.0226 | 2.0 | 500 | 1.0172 | 0.686 | | 0.9137 | 3.0 | 750 | 0.9666 | 0.671 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
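A hedged usage sketch for the fine-tuned classifier (untested; the example input is an invented Tamil-English code-mixed comment in the style of the tamilmixsentiment dataset):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="DeadBeast/emoBERTTamil")

# The model expects Tamil-English code-mixed text, as in the training data.
print(classifier("Padam vera level, super movie"))
```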
DeepChem/SmilesTokenizer_PubChem_1M
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
227
"2021-05-31T20:43:46Z"
RoBERTa model trained on 1M SMILES from PubChem 77M set in MoleculeNet. Uses Smiles-Tokenizer
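A hedged sketch of extracting molecule embeddings with this checkpoint (untested; it assumes the tokenizer files hosted with the model load through `AutoTokenizer` — otherwise DeepChem's `SmilesTokenizer` can be used to tokenize the SMILES strings first):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "DeepChem/SmilesTokenizer_PubChem_1M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin
inputs = tokenizer(smiles, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token states into a single embedding per molecule.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)
```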
DeepESP/gpt2-spanish-medium
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "es", "dataset:ebooks", "transformers", "GPT-2", "Spanish", "ebooks", "nlg", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
340
null
--- language: es tags: - GPT-2 - Spanish - ebooks - nlg datasets: - ebooks widget: - text: "Quisiera saber que va a suceder" license: mit --- # GPT2-Spanish GPT2-Spanish is a language generation model trained from scratch with 11.5GB of Spanish texts and with a Byte Pair Encoding (BPE) tokenizer that was trained for this purpose. The parameters used are the same as the medium version of the original OpenAI GPT2 model. ## Corpus This model was trained with a corpus of 11.5GB of texts corresponding to 3.5GB of Wikipedia articles and 8GB of books (narrative, short stories, theater, poetry, essays, and popularization). ## Tokenizer The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for Unicode characters) and a vocabulary size of 50257. The inputs are sequences of 1024 consecutive tokens. This tokenizer was trained from scratch with the Spanish corpus, since it was evidenced that the tokenizer of the English models presented limitations to capture the semantic relations of Spanish, due to the morphosyntactic differences between both languages. Apart from the special token "<|endoftext|>" for text ending in the OpenAI GPT-2 models, the tokens "<|talk|>", "<|ax1|>", "<|ax2|>" (..)"<|ax9|>" were included so that they can serve as prompts in future training. ## Training The model and tokenizer were trained using the Hugging Face libraries with an Nvidia Tesla V100 GPU with 16GB memory on Google Colab servers. ## Authors The model was trained by Alejandro Oñate Latorre (Spain) and Jorge Ortiz Fuentes (Chile), members of -Deep ESP-, an open-source community on Natural Language Processing in Spanish (https://t.me/joinchat/VoEp1bPrDYEexc6h). Thanks to the members of the community who collaborated with funding for the initial tests. ## Cautions The model generates texts according to the patterns learned in the training corpus. These data were not filtered, therefore, the model could generate offensive or discriminatory content.
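A minimal generation sketch (untested; the prompt is the widget text from this card and the sampling settings are just a reasonable starting point, not values used by the authors):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="DeepESP/gpt2-spanish-medium")

output = generator(
    "Quisiera saber que va a suceder",
    max_length=50,
    do_sample=True,
    top_p=0.95,
)
print(output[0]["generated_text"])
```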
DeepESP/gpt2-spanish
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "es", "dataset:ebooks", "transformers", "GPT-2", "Spanish", "ebooks", "nlg", "license:mit", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,463
null
--- language: es tags: - GPT-2 - Spanish - ebooks - nlg datasets: - ebooks widget: - text: "Quisiera saber que va a suceder" license: mit --- # GPT2-Spanish GPT2-Spanish is a language generation model trained from scratch with 11.5GB of Spanish texts and with a Byte Pair Encoding (BPE) tokenizer that was trained for this purpose. The parameters used are the same as the small version of the original OpenAI GPT2 model. ## Corpus This model was trained with a corpus of 11.5GB of texts corresponding to 3.5GB of Wikipedia articles and 8GB of books (narrative, short stories, theater, poetry, essays, and popularization). ## Tokenizer The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for Unicode characters) and a vocabulary size of 50257. The inputs are sequences of 1024 consecutive tokens. This tokenizer was trained from scratch with the Spanish corpus, since it was evidenced that the tokenizer of the English models presented limitations to capture the semantic relations of Spanish, due to the morphosyntactic differences between both languages. Apart from the special token "<|endoftext|>" for text ending in the OpenAI GPT-2 models, the tokens "<|talk|>", "<|ax1|>", "<|ax2|>" (..)"<|ax9|>" were included so that they can serve as prompts in future training. ## Training The model and tokenizer were trained using the Hugging Face libraries with an Nvidia Tesla V100 GPU with 16GB memory on Google Colab servers. ## Authors The model was trained by Alejandro Oñate Latorre (Spain) and Jorge Ortiz Fuentes (Chile), members of -Deep ESP-, an open-source community on Natural Language Processing in Spanish (https://t.me/joinchat/VoEp1bPrDYEexc6h). Thanks to the members of the community who collaborated with funding for the initial tests. ## Cautions The model generates texts according to the patterns learned in the training corpus. These data were not filtered, therefore, the model could generate offensive or discriminatory content.
DeepPavlov/bert-base-bg-cs-pl-ru-cased
[ "pytorch", "jax", "bert", "feature-extraction", "bg", "cs", "pl", "ru", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,614
null
--- language: - bg - cs - pl - ru ---

# bert-base-bg-cs-pl-ru-cased

SlavicBERT\[1\] \(Slavic \(bg, cs, pl, ru\), cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on Russian News and four Wikipedias: Bulgarian, Czech, Polish, and Russian. The subtoken vocabulary was built using this data. Multilingual BERT was used as an initialization for SlavicBERT.

08.11.2021: uploaded the model with MLM and NSP heads.

\[1\]: Arkhipov M., Trofimova M., Kuratov Y., Sorokin A. \(2019\). [Tuning Multilingual Transformers for Language-Specific Named Entity Recognition](https://www.aclweb.org/anthology/W19-3712/). ACL anthology W19-3712.
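A hedged feature-extraction sketch (untested; it simply runs the encoder and inspects the hidden states, with one example sentence per covered language):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "DeepPavlov/bert-base-bg-cs-pl-ru-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# One short sentence each in Bulgarian, Czech, Polish and Russian.
sentences = ["Това е тест.", "To je test.", "To jest test.", "Это тест."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)
print(outputs.last_hidden_state.shape)  # (4, seq_len, 768)
```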
DeepPavlov/distilrubert-tiny-cased-conversational
[ "pytorch", "distilbert", "ru", "arxiv:2205.02340", "transformers" ]
null
{ "architectures": null, "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5,993
null
--- language: - ru --- WARNING: This is the `distilrubert-small-cased-conversational` model uploaded under the wrong name. It is the same as [distilrubert-small-cased-conversational](https://huggingface.co/DeepPavlov/distilrubert-small-cased-conversational). `distilrubert-tiny-cased-conversational` can be found at [distilrubert-tiny-cased-conversational-v1](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational-v1).
# distilrubert-small-cased-conversational
Conversational DistilRuBERT-small \(Russian, cased, 2‑layer, 768‑hidden, 12‑heads, 107M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of the Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)). It can be considered a small copy of [Conversational DistilRuBERT-base](https://huggingface.co/DeepPavlov/distilrubert-base-cased-conversational).
Our DistilRuBERT-small was highly inspired by \[3\], \[4\]. Namely, we used:
* KL loss (between teacher and student output logits)
* MLM loss (between token labels and student output logits)
* Cosine embedding loss (between averaged six consecutive hidden states from the teacher's encoder and one hidden state of the student)
* MSE loss (between averaged six consecutive attention maps from the teacher's encoder and one attention map of the student)
The model was trained for about 80 hrs. on 8 nVIDIA Tesla P100-SXM2.0 16Gb.
To evaluate improvements in inference speed, we ran the teacher and student models on random sequences with seq_len=512, batch_size = 16 (for throughput) and batch_size = 1 (for latency). All tests were performed on an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and an nVIDIA Tesla P100-SXM2.0 16Gb.
| Model | Size, Mb. | CPU latency, sec. | GPU latency, sec. | CPU throughput, samples/sec. | GPU throughput, samples/sec. |
|--------------------------------------------------|-----------|-------------------|-------------------|------------------------------|------------------------------|
| Teacher (RuBERT-base-cased-conversational) | 679 | 0.655 | 0.031 | 0.3754 | 36.4902 |
| Student (DistilRuBERT-small-cased-conversational) | 409 | 0.1656 | 0.015 | 0.9692 | 71.3553 |
To evaluate model quality, we fine-tuned DistilRuBERT-small on classification, NER, and question answering tasks. Scores and archives with fine-tuned models can be found in the [DeepPavlov docs](http://docs.deeppavlov.ai/en/master/features/overview.html#models).
# Citation
If you found the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:
```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
  doi = {10.48550/ARXIV.2205.02340},
  url = {https://arxiv.org/abs/2205.02340},
  author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
  keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O.
\(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017. \[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. \[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation>
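For a rough sense of the latency numbers quoted above, a minimal timing sketch is shown below. This is not the authors' benchmark script; the repository ID is the one this card is filed under, which (per the warning above) actually holds the small model.

```python
import time
import torch
from transformers import AutoTokenizer, AutoModel

name = "DeepPavlov/distilrubert-tiny-cased-conversational"  # holds the *small* model, per the warning above
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

# Single 512-token input, batch_size=1, roughly matching the latency setup described above.
inputs = tokenizer(
    "привет, как дела?", padding="max_length", max_length=512, truncation=True, return_tensors="pt"
)

with torch.no_grad():
    start = time.perf_counter()
    for _ in range(10):
        model(**inputs)
print(f"avg CPU latency: {(time.perf_counter() - start) / 10:.3f} s")
```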
DeepPavlov/roberta-large-winogrande
[ "pytorch", "roberta", "text-classification", "en", "dataset:winogrande", "arxiv:1907.11692", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
348
null
--- language: - en datasets: - winogrande widget: - text: "The roof of Rachel's home is old and falling apart, while Betty's is new. The home value of </s> Rachel is lower." - text: "The wooden doors at my friends work are worse than the wooden desks at my work, because the </s> desks material is cheaper." - text: "Postal Service were to reduce delivery frequency. </s> The postal service could deliver less frequently." - text: "I put the cake away in the refrigerator. It has a lot of butter in it. </s> The cake has a lot of butter in it." --- # RoBERTa Large model fine-tuned on Winogrande This model was fine-tuned on Winogrande dataset (XL size) in sequence classification task format, meaning that original pairs of sentences with corresponding options filled in were separated, shuffled and classified independently of each other. ## Model description ## Intended use & limitations ### How to use ## Training data [WinoGrande-XL](https://huggingface.co/datasets/winogrande) reformatted the following way: 1. Each sentence was split on "`_`" placeholder symbol. 2. Each option was concatenated with the second part of the split, thus transforming each example into two text segment pairs. 3. Text segment pairs corresponding to correct and incorrect options were marked with `True` and `False` labels accordingly. 4. Text segment pairs were shuffled thereafter. For example, ```json { "answer": "2", "option1": "plant", "option2": "urn", "sentence": "The plant took up too much room in the urn, because the _ was small." } ``` becomes ```json { "sentence1": "The plant took up too much room in the urn, because the ", "sentence2": "plant was small.", "label": false } ``` and ```json { "sentence1": "The plant took up too much room in the urn, because the ", "sentence2": "urn was small.", "label": true } ``` These sentence pairs are then treated as independent examples. ### BibTeX entry and citation info ```bibtex @article{sakaguchi2019winogrande, title={WinoGrande: An Adversarial Winograd Schema Challenge at Scale}, author={Sakaguchi, Keisuke and Bras, Ronan Le and Bhagavatula, Chandra and Choi, Yejin}, journal={arXiv preprint arXiv:1907.10641}, year={2019} } @article{DBLP:journals/corr/abs-1907-11692, author = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and Luke Zettlemoyer and Veselin Stoyanov}, title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach}, journal = {CoRR}, volume = {abs/1907.11692}, year = {2019}, url = {http://arxiv.org/abs/1907.11692}, archivePrefix = {arXiv}, eprint = {1907.11692}, timestamp = {Thu, 01 Aug 2019 08:59:33 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
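The widget examples above hint at how inference works: the first segment and a filled-in option are scored as a sentence pair, and the higher-probability option wins. A minimal sketch (the assumption that class index 1 corresponds to the `True` label should be checked against the model's config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "DeepPavlov/roberta-large-winogrande"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

first = "The plant took up too much room in the urn, because the "
options = ["plant was small.", "urn was small."]

for option in options:
    inputs = tokenizer(first, option, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)
    # Assumes index 1 corresponds to the "True" (correct option) label.
    print(option, float(probs[0, 1]))
```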
DeepPavlov/rubert-base-cased-conversational
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17,362
null
--- language: - ru --- # rubert-base-cased-conversational Conversational RuBERT \(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\]. We assembled a new vocabulary for Conversational RuBERT model on this data and initialized the model with [RuBERT](../rubert-base-cased). 08.11.2021: upload model with MLM and NSP heads \[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\) \[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017.
DeepPavlov/rubert-base-cased-sentence
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1508.05326", "arxiv:1809.05053", "arxiv:1908.10084", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
46,991
null
--- language: - ru --- # rubert-base-cased-sentence Sentence RuBERT \(Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters\) is a representation‑based sentence encoder for Russian. It is initialized with RuBERT and fine‑tuned on SNLI\[1\] Google-translated to Russian and on the Russian part of the XNLI dev set\[2\]. Sentence representations are mean pooled token embeddings in the same manner as in Sentence‑BERT\[3\]. \[1\]: S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. \(2015\) A large annotated corpus for learning natural language inference. arXiv preprint [arXiv:1508.05326](https://arxiv.org/abs/1508.05326) \[2\]: Williams A., Bowman S. \(2018\) XNLI: Evaluating Cross-lingual Sentence Representations. arXiv preprint [arXiv:1809.05053](https://arxiv.org/abs/1809.05053) \[3\]: N. Reimers, I. Gurevych \(2019\) Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv preprint [arXiv:1908.10084](https://arxiv.org/abs/1908.10084)
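Since the card describes sentence representations as mean-pooled token embeddings, a minimal pooling sketch could look like the following (masking out padding tokens is an implementation detail assumed here):

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "DeepPavlov/rubert-base-cased-sentence"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

sentences = ["Привет, мир!", "Как дела?"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, hidden)

# Mean pooling over real (non-padding) tokens.
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)  # torch.Size([2, 768])
```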
DeepPavlov/rubert-base-cased
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1905.07213", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
148,127
null
--- language: - ru --- # rubert-base-cased RuBERT \(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on the Russian part of Wikipedia and news data. We used this training data to build a vocabulary of Russian subtokens and took a multilingual version of BERT‑base as an initialization for RuBERT\[1\]. 08.11.2021: upload model with MLM and NSP heads \[1\]: Kuratov, Y., Arkhipov, M. \(2019\). Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language. arXiv preprint [arXiv:1905.07213](https://arxiv.org/abs/1905.07213).
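As the note above mentions that a version with MLM and NSP heads was uploaded, a quick fill-mask sanity check is possible; this is a sketch and assumes the fill-mask pipeline can pick up the uploaded MLM head:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="DeepPavlov/rubert-base-cased")

# Arbitrary example sentence; [MASK] is the BERT mask token.
for prediction in fill_mask("Москва — столица [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```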
DeepPavlov/xlm-roberta-large-en-ru-mnli
[ "pytorch", "xlm-roberta", "text-classification", "en", "ru", "dataset:glue", "dataset:mnli", "transformers", "xlm-roberta-large", "xlm-roberta-large-en-ru", "xlm-roberta-large-en-ru-mnli", "has_space" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
227
null
--- language: - en - ru datasets: - glue - mnli model_index: - name: mnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli tags: - xlm-roberta - xlm-roberta-large - xlm-roberta-large-en-ru - xlm-roberta-large-en-ru-mnli widget: - text: "Люблю тебя. Ненавижу тебя" - text: "I love you. I hate you" --- # XLM-RoBERTa-Large-En-Ru-MNLI xlm-roberta-large-en-ru finetuned on mnli.
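A hedged inference sketch for the NLI classifier is shown below; rather than assuming a particular label order, it reads the label names from the model's config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "DeepPavlov/xlm-roberta-large-en-ru-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

premise = "Люблю тебя."
hypothesis = "Ненавижу тебя."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# Read the label names from the config instead of hard-coding them.
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```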
DeepPavlov/xlm-roberta-large-en-ru
[ "pytorch", "xlm-roberta", "feature-extraction", "en", "ru", "transformers" ]
feature-extraction
{ "architectures": [ "XLMRobertaModel" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
190
null
--- language: - en - ru --- # XLM-RoBERTa-Large-En-Ru ## Model description This model is a version of XLM-RoBERTa with the embeddings and vocabulary reduced to the most frequent tokens in English and Russian.
Dev-DGT/food-dbert-multiling
[ "pytorch", "distilbert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "DistilBertForTokenClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17
null
--- widget: - text: "El paciente se alimenta de pan, sopa de calabaza y coca-cola" --- # Token classification for FOODs. Detects foods in sentences. Currently, only supports spanish. Multiple words foods are detected as one entity. ## To-do - English support. - Negation support. - Quantity tags. - Psychosocial tags.
Devid/DialoGPT-small-Miku
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - conversational --- # Miku DialoGPT Model
Devrim/prism-default
[ "license:mit" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- The default Prism model available at https://github.com/thompsonb/prism. See the [README.md](https://github.com/thompsonb/prism/blob/master/README.md) file for more information. **LICENCE NOTICE** ``` MIT License Copyright (c) Brian Thompson Portions of this software are copied from fairseq (https://github.com/pytorch/fairseq), which is released under the MIT License and Copyright (c) Facebook, Inc. and its affiliates. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ```
DewiBrynJones/wav2vec2-large-xlsr-welsh
[ "cy", "dataset:common_voice", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: cy datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: wav2vec2-xlsr-welsh (by Dewi Bryn Jones, fine tuning week - March 2021) results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice cy type: common_voice args: cy metrics: - name: Test WER type: wer value: 25.59% --- # Wav2Vec2-Large-XLSR-Welsh This model has moved to https://huggingface.co/techiaith/wav2vec2-xlsr-ft-cy
DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro
[ "pytorch", "tensorboard", "marian", "text2text-generation", "dataset:wmt16", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: opus-mt-en-ro-finetuned-en-to-ro results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16 type: wmt16 args: ro-en metrics: - name: Bleu type: bleu value: 27.9273 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ro-finetuned-en-to-ro This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.2915 - Bleu: 27.9273 - Gen Len: 34.0935 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 0.7448 | 1.0 | 38145 | 1.2915 | 27.9273 | 34.0935 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
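Since the usage sections of this card are still placeholders, here is a minimal translation sketch for the checkpoint; the English example sentence and generation settings are illustrative only:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```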
Doiman/DialoGPT-medium-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- tags: - conversational --- # Harry Potter DialoGPT Medium Model
DongHai/DialoGPT-small-rick
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - conversational --- # Rick DialoGPT Model
DongHyoungLee/distilbert-base-uncased-finetuned-cola
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.535587402888147 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7335 - Matthews Correlation: 0.5356 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5309 | 1.0 | 535 | 0.5070 | 0.4239 | | 0.3568 | 2.0 | 1070 | 0.5132 | 0.4913 | | 0.24 | 3.0 | 1605 | 0.6081 | 0.4990 | | 0.1781 | 4.0 | 2140 | 0.7335 | 0.5356 | | 0.1243 | 5.0 | 2675 | 0.8705 | 0.5242 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
Dongjae/mrc2reader
[ "pytorch", "xlm-roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "XLMRobertaForQuestionAnswering" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
"2021-05-12T10:45:01Z"
The Reader model is for Korean question answering. The backbone model is deepset/xlm-roberta-large-squad2, and it was fine-tuned on the KorQuAD-v1 dataset. In verification on the KorQuAD evaluation dataset, it reached approximately 87% EM and 92% F1. Thank you
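A minimal question-answering sketch for this reader (the Korean question and context below are arbitrary examples):

```python
from transformers import pipeline

reader = pipeline("question-answering", model="Dongjae/mrc2reader")

result = reader(
    question="대한민국의 수도는 어디인가?",
    context="대한민국의 수도는 서울이다. 서울은 한강을 끼고 있다.",
)
print(result["answer"], round(result["score"], 3))
```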
Waynehillsdev/Wayne_NLP_mT5
[ "pytorch", "tensorboard", "mt5", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- tags: - generated_from_trainer datasets: - cnn_dailymail model-index: - name: Wayne_NLP_mT5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Wayne_NLP_mT5 This model was trained only on English datasets. If you want a model trained on Korean + English, go to wayne_mulang_mT5. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0a0+3fd9dcf - Datasets 1.18.3 - Tokenizers 0.11.0
Waynehillsdev/Waynehills-STT-doogie-server
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
61
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: name: Waynehills-STT-doogie-server --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Waynehills-STT-doogie-server This model is a fine-tuned version of [Doogie/Waynehills-STT-doogie-server](https://huggingface.co/Doogie/Waynehills-STT-doogie-server) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
Waynehillsdev/Waynehills_summary_tensorflow
[ "tf", "t5", "text2text-generation", "transformers", "generated_from_keras_callback", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - generated_from_keras_callback model-index: - name: Waynehills_summary_tensorflow results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Waynehills_summary_tensorflow This model is a fine-tuned version of [KETI-AIR/ke-t5-base-ko](https://huggingface.co/KETI-AIR/ke-t5-base-ko) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
Waynehillsdev/wav2vec2-base-timit-demo-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4180 - Wer: 0.3392 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.656 | 4.0 | 500 | 1.8973 | 1.0130 | | 0.8647 | 8.0 | 1000 | 0.4667 | 0.4705 | | 0.2968 | 12.0 | 1500 | 0.4211 | 0.4035 | | 0.1719 | 16.0 | 2000 | 0.4725 | 0.3739 | | 0.1272 | 20.0 | 2500 | 0.4586 | 0.3543 | | 0.1079 | 24.0 | 3000 | 0.4356 | 0.3484 | | 0.0808 | 28.0 | 3500 | 0.4180 | 0.3392 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
Doohae/roberta
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
Model for extraction-based MRC. Original model: klue/roberta-large. Designed for the ODQA Competition.
Doquey/DialoGPT-small-Luisbot1
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - conversational --- # Rick DialoGPT model
Doxophobia/DialoGPT-medium-celeste
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- tags: - conversational --- # Celestia Ludenburg DialoGPT Model
distilbert-base-cased
[ "pytorch", "tf", "onnx", "distilbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1910.01108", "transformers", "license:apache-2.0", "has_space" ]
null
{ "architectures": null, "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
574,859
"2022-01-17T05:33:33Z"
--- license: mit tags: - generated_from_keras_callback model-index: - name: dummy-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dummy-model This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
roberta-base
[ "pytorch", "tf", "jax", "rust", "safetensors", "roberta", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1907.11692", "arxiv:1806.02847", "transformers", "exbert", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10,881,731
"2022-01-29T04:55:35Z"
--- language: - as license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - as - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: wav2vec2-large-xls-r-300m-as-v9 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: hsb metrics: - name: Test WER type: wer value: 0.6163737676810973 - name: Test CER type: cer value: 0.19496397642093005 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: as metrics: - name: Test WER type: wer value: NA - name: Test CER type: cer value: NA - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8.0 type: mozilla-foundation/common_voice_8_0 args: as metrics: - name: Test WER type: wer value: 61.64 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-as-v9 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.1679 - Wer: 0.5761 ### Evaluation Command 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9 --dataset mozilla-foundation/common_voice_8_0 --config as --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data Assamese (as) language isn't available in speech-recognition-community-v2/dev_data ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000111 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - num_epochs: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 8.3852 | 10.51 | 200 | 3.6402 | 1.0 | | 3.5374 | 21.05 | 400 | 3.3894 | 1.0 | | 2.8645 | 31.56 | 600 | 1.3143 | 0.8303 | | 1.1784 | 42.1 | 800 | 0.9417 | 0.6661 | | 0.7805 | 52.62 | 1000 | 0.9292 | 0.6237 | | 0.5973 | 63.15 | 1200 | 0.9489 | 0.6014 | | 0.4784 | 73.67 | 1400 | 0.9916 | 0.5962 | | 0.4138 | 84.21 | 1600 | 1.0272 | 0.6121 | | 0.3491 | 94.72 | 1800 | 1.0412 | 0.5984 | | 0.3062 | 105.26 | 2000 | 1.0769 | 0.6005 | | 0.2707 | 115.77 | 2200 | 1.0708 | 0.5752 | | 0.2459 | 126.31 | 2400 | 1.1285 | 0.6009 | | 0.2234 | 136.82 | 2600 | 1.1209 | 0.5949 | | 0.2035 | 147.36 | 2800 | 1.1348 | 0.5842 | | 0.1876 | 157.87 | 3000 | 1.1480 | 0.5872 | | 0.1669 | 168.41 | 3200 | 1.1496 | 0.5838 | | 0.1595 | 178.92 | 3400 | 1.1721 | 0.5778 | | 0.1505 | 189.46 | 3600 | 1.1654 | 0.5744 | | 0.1486 | 199.97 | 3800 | 1.1679 | 0.5761 | ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
Akash7897/fill_mask_model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en thumbnail: "https://huggingface.co/Fraser/program-synthesis/resolve/main/img.png" tags: - program-synthesis license: "mit" datasets: - program-synthesis --- # Program Synthesis Data Generated program synthesis datasets used to train [dreamcoder](https://github.com/ellisk42/ec). Currently just supports text & list data. ```python _FEATURES = datasets.Features( { "description": datasets.Value("string"), "input": datasets.Value("string"), "output": datasets.Value("string"), "types": datasets.Value("string") } ) ``` ![](https://huggingface.co/Fraser/program-synthesis/resolve/main/img.png)
Akash7897/gpt2-wikitext2
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
# Transformer-VAE (WIP) A PyTorch Transformer-VAE model. Uses an MMD loss to prevent posterior collapse. Will setup in the next month or so. ## ToDo - [ ] Copy in old repo code. - [ ] Make a bunch of sample training runs. - [ ] Make an interpolation widget?
Akash7897/my-newtokenizer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
# Wiki-VAE A Transformer-VAE trained on all the sentences in wikipedia. Training is done on AWS SageMaker.
Akashpb13/Central_kurdish_xlsr
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "ckb", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
"2021-09-21T05:57:35Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - billsum metrics: - rouge model-index: - name: t5-small-finetuned-billsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: billsum type: billsum args: default metrics: - name: Rouge1 type: rouge value: 16.6044 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-billsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.0972 - Rouge1: 16.6044 - Rouge2: 12.8656 - Rougel: 15.7876 - Rougelsum: 15.9784 - Gen Len: 18.9948 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.3854 | 1.0 | 2369 | 2.0972 | 16.6044 | 12.8656 | 15.7876 | 15.9784 | 18.9948 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
Akashpb13/Hausa_xlsr
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "ha", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index", "has_space" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-small-finetuned-xsum-finetuned-billsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum-finetuned-billsum This model is a fine-tuned version of [Frederick0291/t5-small-finetuned-xsum](https://huggingface.co/Frederick0291/t5-small-finetuned-xsum) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 330 | 1.8540 | 32.9258 | 14.9104 | 27.1067 | 27.208 | 18.8437 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
Akashpb13/Swahili_xlsr
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "sw", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
https://elinsborgsskolan.stockholm.se/sites/default/files/webform/ro-bux_nc-21.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/free-onlyfans-hack-2021_oq-21.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/free-v-bucks-g1_zo-21.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/free-tiktok-fans-generator_sg-21.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/spins.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/pubg.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/google.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/7frtg.pdf
Akashpb13/xlsr_hungarian_new
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "hu", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
"2022-02-05T11:27:19Z"
--- language: - nl tags: - automatic-speech-recognition - hf-asr-leaderboard - model_for_talk - mozilla-foundation/common_voice_8_0 - nl - nl_BE - nl_NL - robust-speech-event datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: xls-r-nl-v1-cv8-lm results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: nl metrics: - name: Test WER type: wer value: 4.06 - name: Test CER type: cer value: 1.22 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: nl metrics: - name: Test WER type: wer value: 17.77 - name: Test CER type: cer value: 9.77 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: nl metrics: - name: Test WER type: wer value: 16.32 --- # XLS-R-based CTC model with 5-gram language model from Open Subtitles This model is a version of [facebook/wav2vec2-xls-r-2b-22-to-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16) fine-tuned mainly on the [CGN dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/), as well as the [MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL](https://commonvoice.mozilla.org) dataset (see details below), on which a large 5-gram language model is added based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0): - Wer: 0.04057 - Cer: 0.01222 ## Model description The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame. To improve accuracy, a beam-search decoder based on `pyctcdecode` is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus. ## Intended uses & limitations This model can be used to transcribe Dutch or Flemish spoken dutch to text (without punctuation). ## Training and evaluation data The model was: 0. initialized with [the 2B parameter model from Facebook](facebook/wav2vec2-xls-r-2b-22-to-16). 1. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0). 2. trained `1` epoch (36000 iterations of batch size 32) on [the `cgn` dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/). 3. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0). ### Framework versions - Transformers 4.16.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
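A minimal inference sketch for the pipeline described above follows. It only performs greedy CTC decoding; the 5-gram Open Subtitles rescoring additionally requires `pyctcdecode` and the language-model files shipped with the repository. The repository id is a placeholder, since the card does not state it.

```python
# Greedy CTC decoding sketch (no 5-gram LM rescoring); the repo id is a placeholder.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "<user>/xls-r-nl-v1-cv8-lm"  # placeholder: replace with the actual repository id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("dutch_sample.wav", sr=16_000)  # the model expects 16 kHz audio
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```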
Akashpb13/xlsr_kurmanji_kurdish
[ "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "kmr", "ku", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
"2022-02-09T19:46:52Z"
--- language: - nl tags: - automatic-speech-recognition - hf-asr-leaderboard - model_for_talk - mozilla-foundation/common_voice_8_0 - nl - nl_BE - nl_NL - robust-speech-event datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: xls-r-nl-v1-cv8-lm results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: nl metrics: - name: Test WER type: wer value: 3.93 - name: Test CER type: cer value: 1.22 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: nl metrics: - name: Test WER type: wer value: 16.35 - name: Test CER type: cer value: 9.64 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: nl metrics: - name: Test WER type: wer value: 15.81 --- # XLS-R-based CTC model with 5-gram language model from Open Subtitles This model is a version of [facebook/wav2vec2-xls-r-2b-22-to-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16) fine-tuned mainly on the [CGN dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/), as well as the [MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL](https://commonvoice.mozilla.org) dataset (see details below), on which a large 5-gram language model is added based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0): - Wer: 0.03931 - Cer: 0.01224 > **IMPORTANT NOTE**: The `hunspell` typo fixer is **not enabled** on the website, which returns raw CTC+LM results. Hunspell reranking is only available in the `eval.py` decoding script. For best results, please use the code in that file while using the model locally for inference. > **IMPORTANT NOTE**: Evaluating this model requires `apt install libhunspell-dev` and a pip install of `hunspell` in addition to pip installs of `pipy-kenlm` and `pyctcdecode` (see `install_requirements.sh`); in addition, the chunking lengths and strides were optimized for the model as `12s` and `2s` respectively (see `eval.sh`). > **QUICK REMARK**: The "Robust Speech Event" set does not contain cleaned transcription text, so its WER/CER are vastly over-estimated. For instance `2014` in the dev set is left as a number but will be recognized as `tweeduizend veertien`, which counts as 3 mistakes (`2014` missing, and both `tweeduizend` and `veertien` wrongly inserted). Other normalization problems in the dev set include the presence of single quotes around some words, that then end up as non-match despite being the correct word (but without quotes), and the removal of some speech words in the final transcript (`ja`, etc...). As a result, our real error rate on the dev set is significantly lower than reported. 
> > ![Image showing the difference between the prediction and target of the dev set](https://huggingface.co/FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell/resolve/main/dev_set_diff_4.png) > > You can compare the [predictions](https://huggingface.co/FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell/blob/main/log_speech-recognition-community-v2_dev_data_nl_validation_predictions.txt) with the [targets](https://huggingface.co/FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell/blob/main/log_speech-recognition-community-v2_dev_data_nl_validation_targets.txt) on the validation dev set yourself, for example using [this diffing tool](https://countwordsfree.com/comparetexts). > **WE DO SPEECH RECOGNITION**: Hello reader! If you are considering using this (or another) model in production, but would benefit from a model fine-tuned specifically for your use case (using text and/or labelled speech), feel free to [contact our team](https://www.ugent.be/ea/idlab/en/research/semantic-intelligence/speech-and-audio-processing.htm). This model was developed during the [Robust Speech Recognition challenge](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) event by [François REMY](https://www.linkedin.com/in/fremycompany/) [(twitter)](https://twitter.com/FremyCompany) and [Geoffroy VANDERREYDT](https://be.linkedin.com/in/geoffroy-vanderreydt-a4421460). > We would like to thank [OVH](https://www.ovhcloud.com/en/public-cloud/ai-training/) for providing us with a V100S GPU. ## Model description The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame. To improve accuracy, a beam-search decoder based on `pyctcdecode` is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus. To further deal with typos, `hunspell` is used to propose alternative spellings for words not in the unigrams of the language model. These alternatives are then reranked based on the language model trained above, and a penalty proportional to the Levenshtein edit distance between the alternative and the recognized word. This, for example, makes it possible to correct `collegas` into `collega's` or `gogol` into `google`. ## Intended uses & limitations This model can be used to transcribe spoken Dutch or Flemish to text (without punctuation). ## Training and evaluation data The model was: 0. initialized with [the 2B parameter model from Facebook](facebook/wav2vec2-xls-r-2b-22-to-16). 1. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0). 2. trained `1` epoch (36000 iterations of batch size 32) on [the `cgn` dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/). 3. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0). ### Framework versions - Transformers 4.16.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
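To make the hunspell rescoring step described in the model description above more concrete, here is a small illustrative sketch (not the repository's actual `eval.py` code): out-of-vocabulary words receive spelling suggestions, which are rescored by a language-model score minus a penalty proportional to the Levenshtein edit distance. The package names and the `alpha` weight are assumptions.

```python
# Illustrative rescoring sketch; not the repository's eval.py implementation.
import hunspell        # requires libhunspell-dev (see the note above)
import Levenshtein     # assumed: pip install python-Levenshtein

def correct_word(word, vocab, lm_logscore, speller, alpha=1.0):
    """Return the best hunspell suggestion for `word`, or `word` itself if in-vocabulary."""
    if word in vocab:
        return word
    candidates = speller.suggest(word) or [word]
    return max(
        candidates,
        key=lambda cand: lm_logscore(cand) - alpha * Levenshtein.distance(word, cand),
    )

# Hypothetical usage with a KenLM model and a unigram vocabulary:
# speller = hunspell.HunSpell("/usr/share/hunspell/nl_NL.dic", "/usr/share/hunspell/nl_NL.aff")
# fixed = [correct_word(w, unigram_vocab, kenlm_model.score, speller) for w in transcript.split()]
```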
Akashpb13/xlsr_maltese_wav2vec2
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "mt", "dataset:common_voice", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
"2022-02-01T14:17:20Z"
--- language: - nl tags: - automatic-speech-recognition - hf-asr-leaderboard - model_for_talk - mozilla-foundation/common_voice_8_0 - nl - robust-speech-event - vl datasets: - mozilla-foundation/common_voice_8_0 - multilingual_librispeech model-index: - name: xls-r-nl-v1-cv8-lm results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: nl metrics: - name: Test WER type: wer value: 6.69 - name: Test CER type: cer value: 1.97 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: nl metrics: - name: Test WER type: wer value: 20.79 - name: Test CER type: cer value: 10.72 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: nl metrics: - name: Test WER type: wer value: 19.71 --- # XLS-R-based CTC model with 5-gram language model from Common Voice This model is a version of [facebook/wav2vec2-xls-r-2b-22-to-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16) fine-tuned mainly on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a small 5-gram language model is added based on the Common Voice training corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0): - Wer: 0.0669 - Cer: 0.0197 ## Model description The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the final result. To improve accuracy, a beam decoder is used; the beams are scored based on 5-gram language model trained on the Common Voice 8 corpus. ## Intended uses & limitations This model can be used to transcribe Dutch or Flemish spoken dutch to text (without punctuation). ## Training and evaluation data 0. The model was initialized with [the 2B parameter model from Facebook](facebook/wav2vec2-xls-r-2b-22-to-16). 1. The model was then trained `2000` iterations (batch size 32) on [the `dutch` configuration of the `multilingual_librispeech` dataset](https://huggingface.co/datasets/multilingual_librispeech/). 1. The model was then trained `2000` iterations (batch size 32) on [the `nl` configuration of the `common_voice_8_0` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0). 2. The model was then trained `6000` iterations (batch size 32) on [the `cgn` dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/). 3. The model was then trained `6000` iterations (batch size 32) on [the `nl` configuation of the `common_voice_8_0` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0). ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
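The WER/CER figures reported above can be reproduced locally along these lines, assuming the `jiwer` package is available (the repository's own evaluation script may apply additional text normalization):

```python
# Hedged evaluation sketch with jiwer; the real evaluation may normalize text differently.
import jiwer

references = ["de kat zit op de mat", "dit is een voorbeeld"]
hypotheses = ["de kat zat op de mat", "dit is een voorbeeld"]

print("WER:", jiwer.wer(references, hypotheses))
print("CER:", jiwer.cer(references, hypotheses))
```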
Akjder/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: bee-likes results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.8333333134651184 --- # bee-likes Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### bee ![bee](images/bee.jpg) #### hoverfly ![hoverfly](images/hoverfly.jpg) #### wasp ![wasp](images/wasp.jpg)
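A minimal inference sketch for a HuggingPics classifier like this one is shown below; the repository id is a placeholder, since the autogenerated card does not state where the `bee-likes` checkpoint is hosted.

```python
# Hedged inference sketch; replace the placeholder repo id with the actual one.
from transformers import pipeline

classifier = pipeline("image-classification", model="<user>/bee-likes")  # placeholder
for prediction in classifier("images/bee.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```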
AkshatSurolia/BEiT-FaceMask-Finetuned
[ "pytorch", "beit", "image-classification", "dataset:Face-Mask18K", "transformers", "license:apache-2.0", "autotrain_compatible" ]
image-classification
{ "architectures": [ "BeitForImageClassification" ], "model_type": "beit", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
239
null
--- tags: - conversational --- # Rick DialoGPT Model
AkshatSurolia/ConvNeXt-FaceMask-Finetuned
[ "pytorch", "safetensors", "convnext", "image-classification", "dataset:Face-Mask18K", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
image-classification
{ "architectures": [ "ConvNextForImageClassification" ], "model_type": "convnext", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
56
null
--- inference: false license: mit widget: language: - en metrics: - mrr datasets: - augmented_codesearchnet --- # 🔥 Augmented Code Model 🔥 This is the Augmented Code Model, a fine-tuned version of [CodeBERT](https://huggingface.co/microsoft/codebert-base) for scoring the similarity between a given docstring and code. The model is fine-tuned on the Augmented Code Corpus with ACS=4. ## How to use the model? As with other Hugging Face models, you may load the model as follows. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("Fujitsu/AugCode") model = AutoModelForSequenceClassification.from_pretrained("Fujitsu/AugCode") ``` Then you may use `model` to infer the similarity between a given docstring and code, as sketched below. ### Citation ```bibtex @misc{bahrami2021augcode, title={AugmentedCode: Examining the Effects of Natural Language Resources in Code Retrieval Models}, author={Mehdi Bahrami, N. C. Shrikanth, Yuji Mizobuchi, Lei Liu, Masahiro Fukuyori, Wei-Peng Chen, Kazuki Munakata}, year={2021}, eprint={TBA}, archivePrefix={TBA}, primaryClass={cs.CL} } ```
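A hedged sketch of scoring one docstring/code pair with the `tokenizer` and `model` loaded above follows; note that which logit index corresponds to the "relevant" class is an assumption here and should be checked against the model's `config.id2label`.

```python
# Hedged similarity-scoring sketch using the tokenizer/model loaded above.
import torch

docstring = "Sort the input list in ascending order."
code = "def sort_list(xs):\n    return sorted(xs)"

inputs = tokenizer(docstring, code, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)
print("P(relevant), assuming the last class is the positive one:", probs[0, -1].item())
```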
AkshatSurolia/DeiT-FaceMask-Finetuned
[ "pytorch", "deit", "image-classification", "dataset:Face-Mask18K", "transformers", "license:apache-2.0", "autotrain_compatible" ]
image-classification
{ "architectures": [ "DeiTForImageClassification" ], "model_type": "deit", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
46
null
--- license: mit widget: language: - en datasets: - pytorrent --- # 🔥 RoBERTa-MLM-based PyTorrent 1M 🔥 Pretrained weights based on the [PyTorrent Dataset](https://github.com/fla-sil/PyTorrent), a curated dataset built from a large collection of official Python packages. We use the PyTorrent dataset to train a preliminary DistilBERT Masked Language Modeling (MLM) model from scratch. The trained model, along with the dataset, aims to help researchers easily and efficiently work on a large dataset of Python packages, using only 5 lines of code to load the transformer-based model. We use 1M raw Python scripts of PyTorrent, comprising 12,350,000 LOC, to train the model. We also train a byte-level Byte-Pair Encoding (BPE) tokenizer with a 56,000-token vocabulary, with each LOC truncated to a length of 50 to save computation resources. ### Training Objective This model is trained with a Masked Language Model (MLM) objective. ## How to use the model? ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Fujitsu/pytorrent") model = AutoModel.from_pretrained("Fujitsu/pytorrent") ``` ## Citation Preprint: [https://arxiv.org/pdf/2110.01710.pdf](https://arxiv.org/pdf/2110.01710.pdf) ```bibtex @misc{bahrami2021pytorrent, title={PyTorrent: A Python Library Corpus for Large-scale Language Models}, author={Mehdi Bahrami and N. C. Shrikanth and Shade Ruangwan and Lei Liu and Yuji Mizobuchi and Masahiro Fukuyori and Wei-Peng Chen and Kazuki Munakata and Tim Menzies}, year={2021}, eprint={2110.01710}, archivePrefix={arXiv}, primaryClass={cs.SE}, howpublished={https://arxiv.org/pdf/2110.01710}, } ```
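Since the checkpoint is trained with an MLM objective, a quick way to probe it is the fill-mask pipeline; the masked line of code below is an arbitrary example.

```python
# Hedged fill-mask sketch; the masked statement is an arbitrary example.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Fujitsu/pytorrent")
masked_code = f"import numpy as {fill_mask.tokenizer.mask_token}"
for prediction in fill_mask(masked_code):
    print(prediction["token_str"], round(prediction["score"], 3))
```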
AkshatSurolia/ICD-10-Code-Prediction
[ "pytorch", "bert", "transformers", "text-classification", "license:apache-2.0", "has_space" ]
text-classification
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
994
null
# MarkupLM Large fine-tuned on WebSRC to allow Question Answering. This model is adapted from Microsoft's MarkupLM. This fine-tuned model is the result of partially following instructions in the MarkupLM git repo (with adjustments described farther below under the Fine-tuning args section.) This version not endorsed by Microsoft. Test the question answering out in the [Markup QA space here](https://huggingface.co/spaces/FuriouslyAsleep/markupQAdemo) \--------------------------------------------------------------------------------- **Fine-tuned Multimodal (text +markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)** ## Introduction (From Microsoft MarkupLM Large Model Card) MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM archives the SOTA results on multiple datasets. For more details, please refer to our paper: [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei \--------------------------------------------------------------------------------- Fine-tuning args: --per_gpu_train_batch_size 4 --warmup_ratio 0.1 --num_train_epochs 4 ## Training was performed on only a small subset of the WebSRC: \ The number of total websites is 60 The train websites list is ['ga09'] The test websites list is [] The dev websites list is ['ga12', 'ph04', 'au08', 'ga10', 'au01', 'bo17', 'mo02', 'jo11', 'sp09', 'sp10', 'ph03', 'ph01', 'un09', 'sp14', 'jo03', 'sp07', 'un07', 'bo07', 'mo04', 'bo09', 'jo10', 'un12', 're02', 'bo01', 'ca01', 'sp15', 'au12', 'un03', 're03', 'jo13', 'ph02', 'un10', 'au09', 'au10', 'un02', 'mo07', 'sp13', 'bo08', 'sp03', 're05', 'sp06', 'ca02', 'sp02', 'sp01', 'au03', 'sp11', 'mo06', 'bo10', 'un11', 'un06', 'ga01', 'un04', 'ph05', 'au11', 'sp12', 'jo05', 'sp04', 'jo12', 'sp08'] The number of processed websites is 60 \--------------------------------------------------------------------------------- Inference test here may not work. 
Use the transformers markuplm branch from [NielsRogge transformers markuplm branch](https://github.com/NielsRogge/transformers/tree/modeling_markuplm) After installing from there, try the following model and tokenizer assignemnts (consider using a file for the tags dict) model = MarkupLMForQuestionAnswering.from_pretrained("FuriouslyAsleep/markuplm-large-finetuned-qa") tokenizer = MarkupLMTokenizer( vocab_file="vocab.json", merges_file="merges.txt", tags_dict= {"a": 0, "abbr": 1, "acronym": 2, "address": 3, "altGlyph": 4, "altGlyphDef": 5, "altGlyphItem": 6, "animate": 7, "animateColor": 8, "animateMotion": 9, "animateTransform": 10, "applet": 11, "area": 12, "article": 13, "aside": 14, "audio": 15, "b": 16, "base": 17, "basefont": 18, "bdi": 19, "bdo": 20, "bgsound": 21, "big": 22, "blink": 23, "blockquote": 24, "body": 25, "br": 26, "button": 27, "canvas": 28, "caption": 29, "center": 30, "circle": 31, "cite": 32, "clipPath": 33, "code": 34, "col": 35, "colgroup": 36, "color-profile": 37, "content": 38, "cursor": 39, "data": 40, "datalist": 41, "dd": 42, "defs": 43, "del": 44, "desc": 45, "details": 46, "dfn": 47, "dialog": 48, "dir": 49, "div": 50, "dl": 51, "dt": 52, "ellipse": 53, "em": 54, "embed": 55, "feBlend": 56, "feColorMatrix": 57, "feComponentTransfer": 58, "feComposite": 59, "feConvolveMatrix": 60, "feDiffuseLighting": 61, "feDisplacementMap": 62, "feDistantLight": 63, "feFlood": 64, "feFuncA": 65, "feFuncB": 66, "feFuncG": 67, "feFuncR": 68, "feGaussianBlur": 69, "feImage": 70, "feMerge": 71, "feMergeNode": 72, "feMorphology": 73, "feOffset": 74, "fePointLight": 75, "feSpecularLighting": 76, "feSpotLight": 77, "feTile": 78, "feTurbulence": 79, "fieldset": 80, "figcaption": 81, "figure": 82, "filter": 83, "font-face-format": 84, "font-face-name": 85, "font-face-src": 86, "font-face-uri": 87, "font-face": 88, "font": 89, "footer": 90, "foreignObject": 91, "form": 92, "frame": 93, "frameset": 94, "g": 95, "glyph": 96, "glyphRef": 97, "h1": 98, "h2": 99, "h3": 100, "h4": 101, "h5": 102, "h6": 103, "head": 104, "header": 105, "hgroup": 106, "hkern": 107, "hr": 108, "html": 109, "i": 110, "iframe": 111, "image": 112, "img": 113, "input": 114, "ins": 115, "kbd": 116, "keygen": 117, "label": 118, "legend": 119, "li": 120, "line": 121, "linearGradient": 122, "link": 123, "main": 124, "map": 125, "mark": 126, "marker": 127, "marquee": 128, "mask": 129, "math": 130, "menu": 131, "menuitem": 132, "meta": 133, "metadata": 134, "meter": 135, "missing-glyph": 136, "mpath": 137, "nav": 138, "nobr": 139, "noembed": 140, "noframes": 141, "noscript": 142, "object": 143, "ol": 144, "optgroup": 145, "option": 146, "output": 147, "p": 148, "param": 149, "path": 150, "pattern": 151, "picture": 152, "plaintext": 153, "polygon": 154, "polyline": 155, "portal": 156, "pre": 157, "progress": 158, "q": 159, "radialGradient": 160, "rb": 161, "rect": 162, "rp": 163, "rt": 164, "rtc": 165, "ruby": 166, "s": 167, "samp": 168, "script": 169, "section": 170, "select": 171, "set": 172, "shadow": 173, "slot": 174, "small": 175, "source": 176, "spacer": 177, "span": 178, "stop": 179, "strike": 180, "strong": 181, "style": 182, "sub": 183, "summary": 184, "sup": 185, "svg": 186, "switch": 187, "symbol": 188, "table": 189, "tbody": 190, "td": 191, "template": 192, "text": 193, "textPath": 194, "textarea": 195, "tfoot": 196, "th": 197, "thead": 198, "time": 199, "title": 200, "tr": 201, "track": 202, "tref": 203, "tspan": 204, "tt": 205, "u": 206, "ul": 207, "use": 208, "var": 209, "video": 210, "view": 211, 
"vkern": 212, "wbr": 213, "xmp": 214}, add_prefix_space=True,) Go to [https://github.com/uwts/ProjectRisk](https://github.com/uwts/ProjectRisk) for sample script.
Aleksandar1932/distilgpt2-rock
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
https://github.com/GKLMIP/Pretrained-Models-For-Tagalog If you use our model, please consider citing our paper: ``` @InProceedings{, author="Jiang, Shengyi and Fu, Yingwen and Lin, Xiaotian and Lin, Nankai", title="Pre-trained Language models for Tagalog with Multi-source data", booktitle="Natural Language Processing and Chinese Computing", year="2021", publisher="Springer International Publishing", address="Cham", } ```
Amirosein/distilbert_v1
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
"2021-01-21T10:42:13Z"
--- language: - multilingual - en - fr - es - de - zh - ar - ru - pt - it - ur datasets: wikipedia license: apache-2.0 widget: - text: "Google generated 46 billion [MASK] in revenue." - text: "Paris is the capital of [MASK]." - text: "Algiers is the largest city in [MASK]." - text: "Paris est la [MASK] de la France." - text: "Paris est la capitale de la [MASK]." - text: "L'élection américaine a eu [MASK] en novembre 2020." - text: "تقع سويسرا في [MASK] أوروبا" - text: "إسمي محمد وأسكن في [MASK]." --- # bert-base-10lang-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy. This model handles the following languages: english, french, spanish, german, chinese, arabic, russian, portuguese, italian, and urdu. It produces the same representations as [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) while being 22.5% smaller in size. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-10lang-cased") model = AutoModel.from_pretrained("Geotrend/bert-base-10lang-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
Amit29/t5-small-finetuned-xsum
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: multilingual datasets: wikipedia license: apache-2.0 widget: - text: "Google generated 46 billion [MASK] in revenue." - text: "Paris is the capital of [MASK]." - text: "Algiers is the largest city in [MASK]." - text: "Paris est la [MASK] de la France." - text: "Paris est la capitale de la [MASK]." - text: "L'élection américaine a eu [MASK] en novembre 2020." - text: "تقع سويسرا في [MASK] أوروبا" - text: "إسمي محمد وأسكن في [MASK]." --- # bert-base-25lang-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy. Handled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur, sw, nl, uk, ro, pt, it, lt, no, pl, da and ja. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-25lang-cased") model = AutoModel.from_pretrained("Geotrend/bert-base-25lang-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
Andrija/SRoBERTa-base-NER
[ "pytorch", "roberta", "token-classification", "hr", "sr", "multilingual", "dataset:hr500k", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- language: - multilingual - en - zh datasets: wikipedia license: apache-2.0 widget: - text: "Google generated 46 billion [MASK] in revenue." - text: "Paris is the capital of [MASK]." - text: "Algiers is the largest city in [MASK]." --- # bert-base-en-zh-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model, which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-zh-cased") model = AutoModel.from_pretrained("Geotrend/bert-base-en-zh-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
Andrija/SRoBERTa-base
[ "pytorch", "roberta", "fill-mask", "hr", "sr", "multilingual", "dataset:oscar", "dataset:leipzig", "transformers", "masked-lm", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
80
null
--- language: multilingual datasets: wikipedia license: apache-2.0 --- # bert-base-en-zh-hi-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model, which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-zh-hi-cased") model = AutoModel.from_pretrained("Geotrend/bert-base-en-zh-hi-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
AnonymousSub/AR_EManuals-RoBERTa
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: multilingual datasets: wikipedia license: apache-2.0 --- # distilbert-base-en-el-ru-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations produced by the original model, which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-el-ru-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-el-ru-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermdistilbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
AnonymousSub/AR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: multilingual datasets: wikipedia license: apache-2.0 --- # distilbert-base-en-no-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations produced by the original model, which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-no-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-no-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermdistilbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
AnonymousSub/SR_rule_based_bert_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- language: th datasets: wikipedia license: apache-2.0 --- # distilbert-base-th-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations produced by the original model, which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-th-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-th-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermdistilbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
AnonymousSub/SR_rule_based_bert_triplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: tr datasets: wikipedia license: apache-2.0 --- # distilbert-base-tr-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations produced by the original model, which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-tr-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-tr-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermdistilbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: - nl tags: - bert - passive - active license: apache-2.0 --- ## Dutch Fine-Tuned BERT For Passive/Active Voice Classification. ### Passive and active voice classification for Dutch sentences (lijdende en bedrijvende vorm) #### Examples Try the following examples in the Hosted inference API: 1. Jan werd opgehaald door zijn moeder. 2. Wie niet weg is, is gezien 3. Ik ben van plan om morgen te gaan werken 4. De makelaar heeft het nieuwe huis verkocht aan de bewoners die iets verderop wonen. 5. De koekjes die mama had gemaakt waren door de jongens allemaal opgegeten. LABEL_0 = Active / Bedrijvend. LABEL_1 = Passive / Lijdend Answers (what they should be): 1. 1 2. 1 3. 0 4. 0 5. 1 #### Basic Information This model is fine-tuned on [BERTje](https://huggingface.co/GroNLP/bert-base-dutch-cased) for recognizing passive and active voice in Dutch sentences. Contact me at gerwindekruijf@gmail.com for further questions. Gerwin
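A minimal sketch of querying the classifier through the text-classification pipeline is shown below; the repository id is a placeholder (the card does not mention it), and the LABEL_0/LABEL_1 interpretation follows the mapping given above.

```python
# Hedged usage sketch; replace the placeholder repo id with the actual checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="<user>/bert-dutch-passive-active")  # placeholder
result = classifier("Jan werd opgehaald door zijn moeder.")[0]
voice = "passive (lijdend)" if result["label"] == "LABEL_1" else "active (bedrijvend)"
print(voice, round(result["score"], 3))
```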
AnonymousSub/SR_rule_based_twostagetriplet_hier_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- tags: - spacy - text-classification language: - it model-index: - name: it_textcat_emotion_umberto results: [] ---
AnonymousSub/T5_pubmedqa_question_generation
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
6
null
# Speech-To-Text Chinese Model Reference: - Model: https://huggingface.co/espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char - Code: https://huggingface.co/spaces/akhaliq/espnet2_asr/blob/main/app.py
AnonymousSub/bert-base-uncased_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
# FongBERT FongBERT is a BERT model trained on 68,363 sentences in [Fon](https://en.wikipedia.org/wiki/Fon_language). The data are compiled from [JW300](https://opus.nlpl.eu/JW300.php) and additional data I scraped from the [JW](https://www.jw.org/en/) website. It is the first pretrained model to leverage transfer learning for downstream tasks for Fon. Below are some examples of missing word prediction. ```python from transformers import AutoTokenizer, AutoModelForMaskedLM from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("Gilles/FongBERT") model = AutoModelForMaskedLM.from_pretrained("Gilles/FongBERT") fill = pipeline('fill-mask', model=model, tokenizer=tokenizer) ``` #### Example 1 **Sentence 1**: un tuùn ɖɔ un jló na wazɔ̌ nú we . **Translation**: I know I have to work for you. **Masked Sentence**: un tuùn ɖɔ un jló na wazɔ̌ <"mask"> we . **Translation**: I know I have to work <"mask"> you. fill(f'un tuùn ɖɔ un jló na wazɔ̌ {fill.tokenizer.mask_token} we') [{'score': 0.994536280632019, 'sequence': 'un tuùn ɖɔ un jló na wazɔ̌ nú we', 'token': 312, 'token_str': ' nú'}, {'score': 0.0015309195732697845, 'sequence': 'un tuùn ɖɔ un jló na wazɔ̌nu we', ...........] #### Example 2 **Sentence 2**: un yi wan nu we ɖesu . **Translation**: I love you so much. **Masked Sentence**: un yi <"mask"> nu we ɖesu . **Translation**: I <"mask"> you so much. [{'score': 0.31483960151672363, 'sequence': 'un yi wan nu we ɖesu', 'token': 639, 'token_str': ' wan'}, {'score': 0.20940221846103668, 'sequence': 'un yi ba nu we ɖesu', ...........] #### Example 3 **Sentence 3**: un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú táan ɖé . **Translation**: I went to my boyfriend for a while. **Masked Sentence**: un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú <"mask"> ɖé . **Translation**: I went to my boyfriend for a <"mask">. [{'score': 0.934298574924469, 'sequence': 'un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú táan ɖé', 'token': 1102, 'token_str': ' táan'}, {'score': 0.03750855475664139, 'sequence': 'un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú ganxixo ɖé', ...........]
AnonymousSub/bert_mean_diff_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: places results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 1.0 --- # places Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Beach ![Beach](images/Beach.jpg) #### City ![City](images/City.jpg) #### Forest ![Forest](images/Forest.jpg)
AnonymousSub/bert_triplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
"2022-02-06T20:14:11Z"
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: Mandarin
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Mandarin

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
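The card gives no usage example, so the following is a rough inference sketch for a wav2vec2 CTC checkpoint fine-tuned in this way. The repository id and audio file are placeholders, and the transcription quality of this particular fine-tune is not documented.

```
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical Hub id — substitute the actual fine-tuned checkpoint.
processor = Wav2Vec2Processor.from_pretrained("<username>/Mandarin")
model = Wav2Vec2ForCTC.from_pretrained("<username>/Mandarin")

# Load a mono waveform resampled to 16 kHz (illustrative path).
speech, _ = librosa.load("example.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```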
AnonymousSub/consert-s10-SR
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
---
license: apache-2.0
---

# Graphcore/bart-base-ipu

Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).

Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.

## Model description

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.

BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).

## Intended uses & limitations

This model contains just the `IPUConfig` files for running the BART base model (e.g. [facebook/bart-base](https://huggingface.co/facebook/bart-base)) on Graphcore IPUs.

**This model contains no model weights, only an IPUConfig.**

## Usage

```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/bart-base-ipu")
```
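In practice the `IPUConfig` is passed to the `optimum-graphcore` trainer alongside an ordinary Hub checkpoint. The sketch below pairs it with `facebook/bart-base` on a toy document/summary pair; it assumes IPU hardware and the `optimum-graphcore` package are available, the dataset and training arguments are purely illustrative, and exact argument names can vary between releases.

```
from datasets import Dataset
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments
from transformers import AutoTokenizer, BartForConditionalGeneration

ipu_config = IPUConfig.from_pretrained("Graphcore/bart-base-ipu")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# A toy document/summary pair, purely for illustration.
raw = Dataset.from_dict({
    "document": ["The IPU is a massively parallel processor designed for machine intelligence."],
    "summary": ["The IPU accelerates machine intelligence."],
})

def preprocess(batch):
    model_inputs = tokenizer(batch["document"], max_length=128, padding="max_length", truncation=True)
    # In real training, padded label ids are usually replaced by -100 so they are ignored by the loss.
    labels = tokenizer(batch["summary"], max_length=32, padding="max_length", truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_ds = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

args = IPUTrainingArguments(
    output_dir="./bart-base-ipu-finetuned",
    per_device_train_batch_size=1,
    num_train_epochs=1,
)
trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=args,
    train_dataset=train_ds,
    tokenizer=tokenizer,
)
trainer.train()
```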
AnonymousSub/consert-techqa
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
# Graphcore/bert-base-ipu

Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).

Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.

## Model description

BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabelled text. It enables easy and fast fine-tuning for different downstream tasks such as sequence classification, named entity recognition, question answering, multiple choice and masked language modelling.

It was trained with two objectives in pretraining: masked language modelling (MLM) and next sentence prediction (NSP). First, unlike a traditional language model that sees the words one after another, MLM lets the model learn a bidirectional representation. In addition to MLM, NSP is used for jointly pretraining text-pair representations.

Through its pre-trained representations, BERT reduces the need for heavily engineered task-specific architectures and achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.

## Intended uses & limitations

This model contains just the `IPUConfig` files for running the BERT base model (e.g. [bert-base-uncased](https://huggingface.co/bert-base-uncased) or [bert-base-cased](https://huggingface.co/bert-base-cased)) on Graphcore IPUs.

**This model contains no model weights, only an IPUConfig.**

## Usage

```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")
```
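As with the other IPU configs, this file is meant to be combined with a standard checkpoint and the `optimum-graphcore` trainer. Below is a rough sequence-classification sketch using `bert-base-uncased` on a tiny toy dataset; it assumes access to IPU hardware, and the argument names are indicative rather than definitive.

```
from datasets import Dataset
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Tiny toy dataset, purely illustrative.
raw = Dataset.from_dict({
    "text": ["IPUs are fast.", "This sentence is neutral filler."],
    "label": [1, 0],
})
train_ds = raw.map(
    lambda batch: tokenizer(batch["text"], padding="max_length", truncation=True, max_length=128),
    batched=True,
)

args = IPUTrainingArguments(
    output_dir="./bert-base-ipu-out",
    per_device_train_batch_size=2,
    num_train_epochs=1,
)
trainer = IPUTrainer(model=model, ipu_config=ipu_config, args=args, train_dataset=train_ds, tokenizer=tokenizer)
trainer.train()
```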
AnonymousSub/declutr-biomed-roberta-papers
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
# Graphcore/bert-large-ipu

Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).

Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.

## Model description

BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabelled text. It enables easy and fast fine-tuning for different downstream tasks such as sequence classification, named entity recognition, question answering, multiple choice and masked language modelling.

It was trained with two objectives in pretraining: masked language modelling (MLM) and next sentence prediction (NSP). First, unlike a traditional language model that sees the words one after another, MLM lets the model learn a bidirectional representation. In addition to MLM, NSP is used for jointly pretraining text-pair representations.

Through its pre-trained representations, BERT reduces the need for heavily engineered task-specific architectures and achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.

## Intended uses & limitations

This model contains just the `IPUConfig` files for running the BERT large model (e.g. [bert-large-uncased](https://huggingface.co/bert-large-uncased) or [bert-large-cased](https://huggingface.co/bert-large-cased)) on Graphcore IPUs.

**This model contains no model weights, only an IPUConfig.**

## Usage

```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-large-ipu")
```
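Beyond simply loading it, the `IPUConfig` can be inspected, adjusted and re-saved like any other Hugging Face configuration object. The sketch below is illustrative: the `matmul_proportion` field shown here appears in Graphcore's own pretraining commands, but the exact set of available fields should be treated as release-dependent.

```
from optimum.graphcore import IPUConfig

ipu_config = IPUConfig.from_pretrained("Graphcore/bert-large-ipu")

# Inspect the full configuration as a plain dictionary.
print(ipu_config.to_dict())

# Adjust a field (matmul_proportion is referenced in Graphcore's pretraining commands)
# and save a local copy for reuse.
ipu_config.matmul_proportion = [0.14, 0.19, 0.19, 0.19]
ipu_config.save_pretrained("./my-bert-large-ipu-config")
```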
AnonymousSub/declutr-emanuals-s10-AR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: Graphcore/bert-large-uncased-squad
  results: []
---

# Graphcore/bert-large-uncased-squad

Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).

Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.

## Model description

BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabelled text. It enables easy and fast fine-tuning for different downstream tasks such as sequence classification, named entity recognition, question answering, multiple choice and masked language modelling.

It was trained with two objectives in pretraining: masked language modelling (MLM) and next sentence prediction (NSP). First, unlike a traditional language model that sees the words one after another, MLM lets the model learn a bidirectional representation. In addition to MLM, NSP is used for jointly pretraining text-pair representations.

Through its pre-trained representations, BERT reduces the need for heavily engineered task-specific architectures and achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.

## Intended uses & limitations

This model is a fine-tuned version of [Graphcore/bert-large-uncased](https://huggingface.co/Graphcore/bert-large-uncased) on the SQuAD dataset.

## Training and evaluation data

Trained on the SQuAD dataset:
- [HuggingFace/squad](https://huggingface.co/datasets/squad)

## Training procedure

The model was trained on 16 Graphcore Mk2 IPUs using the [optimum-graphcore](https://github.com/huggingface/optimum-graphcore) library.
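Unlike the IPUConfig-only repositories above, this card describes a fine-tuned checkpoint, so it can be queried with the standard question-answering pipeline. The snippet below is a sketch that assumes the repository ships the fine-tuned weights alongside this card; the question and context are illustrative.

```
from transformers import pipeline

# Assumes the repository contains the fine-tuned model weights.
qa = pipeline("question-answering", model="Graphcore/bert-large-uncased-squad")

result = qa(
    question="What kind of processor is the IPU?",
    context="The IPU is a completely new kind of massively parallel processor to accelerate machine intelligence.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```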
AnonymousSub/declutr-emanuals-s10-SR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- Graphcore/wikipedia-bert-128
- Graphcore/wikipedia-bert-512
model-index:
- name: Graphcore/bert-large-uncased
  results: []
---

# Graphcore/bert-large-uncased

Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).

Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.

## Model description

BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabelled text. It enables easy and fast fine-tuning for different downstream tasks such as sequence classification, named entity recognition, question answering, multiple choice and masked language modelling.

It was trained with two objectives in pretraining: masked language modelling (MLM) and next sentence prediction (NSP). First, unlike a traditional language model that sees the words one after another, MLM lets the model learn a bidirectional representation. In addition to MLM, NSP is used for jointly pretraining text-pair representations.

Through its pre-trained representations, BERT reduces the need for heavily engineered task-specific architectures and achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.

## Intended uses & limitations

This model is a pre-trained BERT-Large trained in two phases on the [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128) and [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512) datasets.

## Training and evaluation data

Trained on wikipedia datasets:
- [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128)
- [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512)

## Training procedure

Trained with the MLM and NSP pre-training scheme from [Large Batch Optimization for Deep Learning: Training BERT in 76 minutes](https://arxiv.org/abs/1904.00962).
Trained on 64 Graphcore Mk2 IPUs using [`optimum-graphcore`](https://github.com/huggingface/optimum-graphcore).

Command lines:

Phase 1:
```
python examples/language-modeling/run_pretraining.py \
--config_name bert-large-uncased \
--tokenizer_name bert-large-uncased \
--ipu_config_name Graphcore/bert-large-ipu \
--dataset_name Graphcore/wikipedia-bert-128 \
--do_train \
--logging_steps 5 \
--max_seq_length 128 \
--max_steps 10550 \
--is_already_preprocessed \
--dataloader_num_workers 64 \
--dataloader_mode async_rebatched \
--lamb \
--lamb_no_bias_correction \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 512 \
--pod_type pod64 \
--learning_rate 0.006 \
--lr_scheduler_type linear \
--loss_scaling 32768 \
--weight_decay 0.01 \
--warmup_ratio 0.28 \
--config_overrides "layer_norm_eps=0.001" \
--ipu_config_overrides "matmul_proportion=[0.14 0.19 0.19 0.19]" \
--output_dir output-pretrain-bert-large-phase1
```

Phase 2:
```
python examples/language-modeling/run_pretraining.py \
--config_name bert-large-uncased \
--tokenizer_name bert-large-uncased \
--model_name_or_path ./output-pretrain-bert-large-phase1 \
--ipu_config_name Graphcore/bert-large-ipu \
--dataset_name Graphcore/wikipedia-bert-512 \
--do_train \
--logging_steps 5 \
--max_seq_length 512 \
--max_steps 2038 \
--is_already_preprocessed \
--dataloader_num_workers 96 \
--dataloader_mode async_rebatched \
--lamb \
--lamb_no_bias_correction \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 512 \
--pod_type pod64 \
--learning_rate 0.002828 \
--lr_scheduler_type linear \
--loss_scaling 16384 \
--weight_decay 0.01 \
--warmup_ratio 0.128 \
--config_overrides "layer_norm_eps=0.001" \
--ipu_config_overrides "matmul_proportion=[0.14 0.19 0.19 0.19]" \
--output_dir output-pretrain-bert-large-phase2
```

### Training hyperparameters

The following hyperparameters were used during phase 1 training:
- learning_rate: 0.006
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 512
- total_train_batch_size: 65536
- total_eval_batch_size: 512
- optimizer: LAMB
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.28
- training_steps: 10550
- training precision: Mixed Precision

The following hyperparameters were used during phase 2 training:
- learning_rate: 0.002828
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 512
- total_train_batch_size: 16384
- total_eval_batch_size: 512
- optimizer: LAMB
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.128
- training_steps: 2038
- training precision: Mixed Precision

### Training results

```
train/epoch: 2.04
train/global_step: 2038
train/loss: 1.2002
train/train_runtime: 12022.3897
train/train_steps_per_second: 0.17
train/train_samples_per_second: 2777.367
```

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
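Because this card describes a pre-trained MLM/NSP checkpoint rather than a bare IPUConfig, it can be sanity-checked with the ordinary fill-mask pipeline. This is a minimal sketch, assuming the uploaded weights include the masked-LM head; the example sentence is illustrative.

```
from transformers import pipeline

# Assumes the repository contains the pre-trained weights with the MLM head.
fill = pipeline("fill-mask", model="Graphcore/bert-large-uncased")

print(fill("Graphcore IPUs are designed to accelerate machine [MASK]."))
```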
AnonymousSub/declutr-emanuals-techqa
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
# Graphcore/deberta-base-ipu

Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).

Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.

## Model description

DeBERTa ([Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654)) improves on the BERT and RoBERTa models using a disentangled attention mechanism and an enhanced mask decoder, which replaces the output softmax layer to predict the masked tokens during pretraining. Together, these two techniques significantly improve the efficiency of model pre-training and the performance of downstream tasks.

## Intended uses & limitations

This model contains just the `IPUConfig` files for running the DeBERTa-base model (e.g. [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base)) on Graphcore IPUs.

**This model contains no model weights, only an IPUConfig.**

## Usage

```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/deberta-base-ipu")
```
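As with the BERT and BART configs above, this `IPUConfig` is intended to be paired with the public `microsoft/deberta-base` weights when fine-tuning through `optimum-graphcore`. A minimal pairing sketch follows; the training arguments are illustrative, IPU hardware is assumed, and the dataset preparation would mirror the BERT sequence-classification example above.

```
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ipu_config = IPUConfig.from_pretrained("Graphcore/deberta-base-ipu")
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-base", num_labels=2)

args = IPUTrainingArguments(
    output_dir="./deberta-base-ipu-out",
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

# train_ds would be a tokenised datasets.Dataset, prepared as in the BERT example above.
# trainer = IPUTrainer(model=model, ipu_config=ipu_config, args=args,
#                      train_dataset=train_ds, tokenizer=tokenizer)
# trainer.train()
```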