pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1-900k) | metadata (stringlengths, 2-438k) | id (stringlengths, 5-122) | last_modified (null) | tags (sequencelengths, 1-1.84k) | sha (null) | created_at (stringlengths, 25-25) | arxiv (sequencelengths, 0-201) | languages (sequencelengths, 0-1.83k) | tags_str (stringlengths, 17-9.34k) | text_str (stringlengths, 0-389k) | text_lists (sequencelengths, 0-722) | processed_texts (sequencelengths, 1-723) | tokens_length (sequencelengths, 1-723) | input_texts (sequencelengths, 1-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
token-classification | transformers | # CAMeLBERT-MSA POS-MSA Model
## Model description
**CAMeLBERT-MSA POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA POS-MSA model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa')
>>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
>>> pos(text)
[{'entity': 'noun', 'score': 0.9999764, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.99991846, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.9998356, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.99368894, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.9999426, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.9999339, 'index': 6, 'word': '##رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.99996775, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.99996895, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.99990183, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.9999347, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.99931145, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
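For a manual download, a minimal sketch using the `huggingface_hub` library is shown below (this is an assumption on tooling, not part of the original instructions; cloning the model repository with git works as well):
```python
# A minimal sketch of downloading the model files manually, assuming the
# huggingface_hub package is installed (pip install huggingface_hub).
from huggingface_hub import snapshot_download

# Downloads all files of the model repository into a local cache directory
# and returns the local path, which can then be passed to from_pretrained().
local_path = snapshot_download(repo_id='CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa')
print(local_path)
```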
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0625\u0645\u0627\u0631\u0629 \u0623\u0628\u0648\u0638\u0628\u064a \u0647\u064a \u0625\u062d\u062f\u0649 \u0625\u0645\u0627\u0631\u0627\u062a \u062f\u0648\u0644\u0629 \u0627\u0644\u0625\u0645\u0627\u0631\u0627\u062a \u0627\u0644\u0639\u0631\u0628\u064a\u0629 \u0627\u0644\u0645\u062a\u062d\u062f\u0629 \u0627\u0644\u0633\u0628\u0639"}]} | CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-MSA POS-MSA Model
## Model description
CAMeLBERT-MSA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.
For the fine-tuning, we used the PATB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-MSA POS-MSA model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-MSA POS-MSA Model",
"## Model description\nCAMeLBERT-MSA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.\nFor the fine-tuning, we used the PATB dataset .\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-MSA POS-MSA model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-MSA POS-MSA Model",
"## Model description\nCAMeLBERT-MSA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.\nFor the fine-tuning, we used the PATB dataset .\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-MSA POS-MSA model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
53,
12,
110,
37,
48
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# CAMeLBERT-MSA POS-MSA Model## Model description\nCAMeLBERT-MSA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.\nFor the fine-tuning, we used the PATB dataset .\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.## Intended uses\nYou can use the CAMeLBERT-MSA POS-MSA model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
fill-mask | transformers |
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-MSA-quarter** (`bert-base-arabic-camelbert-msa-quarter`), a model pre-trained on a quarter of the full MSA dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
|✔|`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.17437894642353058,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
'score': 0.042852893471717834,
'token': 6232,
'token_str': 'النجاح'},
{'sequence': '[CLS] الهدف من الحياة هو البقاء. [SEP]',
'score': 0.030925093218684196,
'token': 9331,
'token_str': 'البقاء'},
{'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
'score': 0.02964409440755844,
'token': 3088,
'token_str': 'الحب'},
{'sequence': '[CLS] الهدف من الحياة هو الكمال. [SEP]',
'score': 0.028030086308717728,
'token': 17188,
'token_str': 'الكمال'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
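To make the diacritic and kashida removal step above concrete, here is a minimal sketch, assuming the `camel_tools` package is installed and exposes `dediac_ar` as in current releases; the exact preprocessing used for pre-training may differ:
```python
# A minimal sketch of the "remove diacritics and kashida" step, assuming the
# camel_tools package is installed. Illustration only; the exact pre-training
# preprocessing may differ.
from camel_tools.utils.dediac import dediac_ar

text = 'ذَهَبَ الـــوَلَدُ إِلَى المَدْرَسَةِ'
text = dediac_ar(text)              # strip Arabic diacritical marks
text = text.replace('\u0640', '')   # strip kashida (tatweel) characters
print(text)  # ذهب الولد إلى المدرسة
```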
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
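As a rough illustration of the fine-tuning setup in the list above (a fully connected linear layer on top of the encoder's last hidden state), here is a minimal PyTorch sketch; the label count is a placeholder and this is not the exact training code, which lives in the linked repository:
```python
# A rough sketch of the fine-tuning head described above: a fully connected
# linear layer applied to the last hidden state of the pre-trained encoder.
# num_labels is a placeholder; the actual fine-tuning code is in the CAMeLBERT repo.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TokenClassifier(nn.Module):
    def __init__(self, model_name, num_labels):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids,
                               attention_mask=attention_mask,
                               return_dict=True)
        # (batch, seq_len, hidden_size) -> per-token logits over the label set
        return self.classifier(outputs.last_hidden_state)

tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter')
model = TokenClassifier('CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter', num_labels=5)
batch = tokenizer("مرحبا يا عالم.", return_tensors='pt')
logits = model(batch['input_ids'], batch['attention_mask'])
```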
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
| {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0627\u0644\u0647\u062f\u0641 \u0645\u0646 \u0627\u0644\u062d\u064a\u0627\u0629 \u0647\u0648 [MASK] ."}]} | CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
==================================================================
Model description
-----------------
CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."*
This model card describes CAMeLBERT-MSA-quarter ('bert-base-arabic-camelbert-msa-quarter'), a model pre-trained on a quarter of the full MSA dataset.
Intended uses
-------------
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here.
#### How to use
You can use this model directly with a pipeline for masked language modeling:
*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Training data
-------------
* MSA (Modern Standard Arabic)
+ The Arabic Gigaword Fifth Edition
+ Abu El-Khair Corpus
+ OSIAN corpus
+ Arabic Wikipedia
+ The unshuffled version of the Arabic OSCAR corpus
Training procedure
------------------
We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
* After extracting the raw text from each corpus, we apply the following pre-processing.
* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.
* We also remove lines without any Arabic characters.
* We then remove diacritics and kashida using CAMeL Tools.
* Finally, we split each line into sentences with a heuristics-based sentence segmenter.
* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.
* We do not lowercase letters nor strip accents.
### Pre-training
* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.
* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
* We use whole word masking and a duplicate factor of 10.
* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
* The optimizer used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
* We fine-tune and evaluate the models using 12 datasets.
* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.
* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
* We use \(F\_{1}\) score as a metric for all tasks.
* Code used for fine-tuning is available here.
### Results
### Results (Average)
[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.
Acknowledgements
----------------
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
| [
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] | [
55,
190,
139,
403,
4,
70
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.### Results### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] |
text-classification | transformers | # CAMeLBERT MSA SA Model
## Model description
**CAMeLBERT MSA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT MSA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('sentiment-analysis', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0623\u0646\u0627 \u0628\u062e\u064a\u0631"}]} | CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT MSA SA Model
## Model description
CAMeLBERT MSA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.
For the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT MSA SA model directly as part of our CAMeL Tools SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the CAMeL Tools SA component:
You can also use the SA model directly with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT MSA SA Model",
"## Model description\nCAMeLBERT MSA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.\nFor the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT MSA SA model directly as part of our CAMeL Tools SA component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools SA component:\n\nYou can also use the SA model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT MSA SA Model",
"## Model description\nCAMeLBERT MSA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.\nFor the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT MSA SA model directly as part of our CAMeL Tools SA component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools SA component:\n\nYou can also use the SA model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
53,
7,
113,
36,
63
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# CAMeLBERT MSA SA Model## Model description\nCAMeLBERT MSA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.\nFor the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.## Intended uses\nYou can use the CAMeLBERT MSA SA model directly as part of our CAMeL Tools SA component (*recommended*) or as part of the transformers pipeline.#### How to use\nTo use the model with the CAMeL Tools SA component:\n\nYou can also use the SA model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
fill-mask | transformers |
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-MSA-sixteenth** (`bert-base-arabic-camelbert-msa-sixteenth`), a model pre-trained on a sixteenth of the full MSA dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
|✔|`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو التغيير. [SEP]',
'score': 0.08320745080709457,
'token': 7946,
'token_str': 'التغيير'},
{'sequence': '[CLS] الهدف من الحياة هو التعلم. [SEP]',
'score': 0.04305094853043556,
'token': 12554,
'token_str': 'التعلم'},
{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
'score': 0.0417640283703804,
'token': 2854,
'token_str': 'العمل'},
{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.041371218860149384,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو المعرفة. [SEP]',
'score': 0.039794355630874634,
'token': 7344,
'token_str': 'المعرفة'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
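As a small illustration of the learning-rate schedule in the last bullet (warmup for 10,000 steps followed by linear decay over the one million total steps), here is a sketch that simply restates those hyperparameters; the exact decay endpoint is an assumption:
```python
# A small sketch of the learning-rate schedule described above: linear warmup
# for 10,000 steps, then (assumed) linear decay to zero over the remaining steps.
def learning_rate(step, base_lr=1e-4, warmup_steps=10_000, total_steps=1_000_000):
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(learning_rate(5_000))    # mid-warmup: 5e-05
print(learning_rate(10_000))   # peak: 1e-04
print(learning_rate(505_000))  # halfway through decay: 5e-05
```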
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
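Since all tasks are scored with an F1 metric, here is a minimal sketch of how such a score can be computed, assuming scikit-learn is available; the labels and predictions are placeholders:
```python
# A minimal sketch of computing a macro-averaged F1 score for a classification
# task, assuming scikit-learn is installed. Labels and predictions are placeholders.
from sklearn.metrics import f1_score

y_true = ['positive', 'negative', 'neutral', 'positive']
y_pred = ['positive', 'negative', 'positive', 'positive']
print(f1_score(y_true, y_pred, average='macro'))  # 0.6
```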
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
| {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0627\u0644\u0647\u062f\u0641 \u0645\u0646 \u0627\u0644\u062d\u064a\u0627\u0629 \u0647\u0648 [MASK] ."}]} | CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
==================================================================
Model description
-----------------
CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."*
This model card describes CAMeLBERT-MSA-sixteenth ('bert-base-arabic-camelbert-msa-sixteenth'), a model pre-trained on a sixteenth of the full MSA dataset.
Intended uses
-------------
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here.
#### How to use
You can use this model directly with a pipeline for masked language modeling:
*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Training data
-------------
* MSA (Modern Standard Arabic)
+ The Arabic Gigaword Fifth Edition
+ Abu El-Khair Corpus
+ OSIAN corpus
+ Arabic Wikipedia
+ The unshuffled version of the Arabic OSCAR corpus
Training procedure
------------------
We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
* After extracting the raw text from each corpus, we apply the following pre-processing.
* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.
* We also remove lines without any Arabic characters.
* We then remove diacritics and kashida using CAMeL Tools.
* Finally, we split each line into sentences with a heuristics-based sentence segmenter.
* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.
* We do not lowercase letters nor strip accents.
### Pre-training
* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.
* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
* We use whole word masking and a duplicate factor of 10.
* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
* The optimizer used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
* We fine-tune and evaluate the models using 12 datasets.
* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.
* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
* We use \(F\_{1}\) score as a metric for all tasks.
* Code used for fine-tuning is available here.
### Results
### Results (Average)
[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.
Acknowledgements
----------------
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
| [
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] | [
59,
190,
139,
403,
4,
70
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.### Results### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] |
fill-mask | transformers |
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-MSA** (`bert-base-arabic-camelbert-msa`), a model pre-trained on the entire MSA dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
|✔|`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
'score': 0.08507660031318665,
'token': 2854,
'token_str': 'العمل'},
{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.058905381709337234,
'token': 3696, 'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
'score': 0.04660581797361374, 'token': 6232,
'token_str': 'النجاح'},
{'sequence': '[CLS] الهدف من الحياة هو الربح. [SEP]',
'score': 0.04156001657247543,
'token': 12413, 'token_str': 'الربح'},
{'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
'score': 0.03534102067351341,
'token': 3088,
'token_str': 'الحب'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) (see the sketch after this list).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
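A minimal sketch of the diacritic and kashida removal step above, using CAMeL Tools. The kashida (tatweel) stripping shown here is a plain string replacement added for illustration; the exact pre-processing scripts used for the released models may differ.

```python
# Illustrative sketch of the diacritic/kashida removal step only (not the full pipeline).
from camel_tools.utils.dediac import dediac_ar

KASHIDA = '\u0640'  # Arabic tatweel character

def clean_line(line: str) -> str:
    # Strip Arabic diacritics with CAMeL Tools, then remove kashida/tatweel.
    return dediac_ar(line).replace(KASHIDA, '')

print(clean_line('مرحباً يا عالم.'))
```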
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after (this schedule is sketched below).
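The learning-rate schedule in the last bullet can be written out as a small function. This is one illustrative reading of the description (linear warmup to the peak rate, then linear decay towards zero over the remaining steps), not the exact code of the original pre-training implementation.

```python
PEAK_LR = 1e-4
WARMUP_STEPS = 10_000
TOTAL_STEPS = 1_000_000

def learning_rate(step: int) -> float:
    # Linear warmup to the peak learning rate, then linear decay to zero.
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return PEAK_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)

print(learning_rate(5_000))      # halfway through warmup: 5e-05
print(learning_rate(1_000_000))  # end of training: 0.0
```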
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state (see the sketch after this list).
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
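As a rough sketch of the fine-tuning setup above (a fully connected linear layer on top of the last hidden state), the snippet below attaches a classification head to the pre-trained encoder. The label count and the use of the [CLS] position are illustrative assumptions, not the exact task heads used in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
encoder = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')

num_labels = 3  # placeholder label count for a sentence-level task
head = torch.nn.Linear(encoder.config.hidden_size, num_labels)

inputs = tokenizer("مرحبا يا عالم.", return_tensors='pt')
last_hidden = encoder(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)
logits = head(last_hidden[:, 0, :])                # [CLS] position for sentence-level tasks
```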
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
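As a worked example from the table above, the Mix column's MSA variant-wise average is the mean of its six MSA-variant scores: (80.8% + 98.1% + 76.3% + 92.7% + 69.0% + 75.7%) / 6 = 82.1%.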
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
| {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0627\u0644\u0647\u062f\u0641 \u0645\u0646 \u0627\u0644\u062d\u064a\u0627\u0629 \u0647\u0648 [MASK] ."}]} | CAMeL-Lab/bert-base-arabic-camelbert-msa | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
==================================================================
Model description
-----------------
CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."*
This model card describes CAMeLBERT-MSA ('bert-base-arabic-camelbert-msa'), a model pre-trained on the entire MSA dataset.
Intended uses
-------------
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here.
#### How to use
You can use this model directly with a pipeline for masked language modeling:
*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Training data
-------------
* MSA (Modern Standard Arabic)
+ The Arabic Gigaword Fifth Edition
+ Abu El-Khair Corpus
+ OSIAN corpus
+ Arabic Wikipedia
+ The unshuffled version of the Arabic OSCAR corpus
Training procedure
------------------
We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
* After extracting the raw text from each corpus, we apply the following pre-processing.
* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.
* We also remove lines without any Arabic characters.
* We then remove diacritics and kashida using CAMeL Tools.
* Finally, we split each line into sentences with a heuristics-based sentence segmenter.
* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.
* We do not lowercase letters nor strip accents.
### Pre-training
* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.
* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
* We use whole word masking and a duplicate factor of 10.
* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
* The optimizer used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
* We fine-tune and evaluate the models using 12 datasets.
* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.
* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
* We use \(F\_{1}\) score as a metric for all tasks.
* Code used for fine-tuning is available here.
### Results
### Results (Average)
[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.
Acknowledgements
----------------
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
| [
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] | [
55,
190,
139,
403,
4,
70
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.### Results### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] |
fill-mask | transformers | ## JavaBERT
A BERT-like model pretrained on Java software code.
### Training Data
The model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A ```bert-base-uncased``` tokenizer is used by this model.
### Training Objective
A MLM (Masked Language Model) objective was used to train this model.
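The masking used by this objective can be illustrated with the Hugging Face data collator. The 15% masking probability below is the common BERT default and is assumed here for illustration; it is not taken from this model's documented training configuration.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')  # uncased tokenizer, as used by this model
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

code = 'public boolean isOdd(Integer num) { return num % 2 != 0; }'  # illustrative Java snippet
batch = collator([tokenizer(code)])
# batch['input_ids'] now contain randomly placed [MASK] tokens;
# batch['labels'] keep the original ids at masked positions (-100 elsewhere).
```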
### Usage
```python
from transformers import pipeline
pipe = pipeline('fill-mask', model='CAUKiel/JavaBERT-uncased')
# Example Java snippet from this model card's widget; use '[MASK]' to mask tokens/words in the code.
code = 'public [MASK] isOdd(Integer num){if (num % 2 == 0) {return "even";} else {return "odd";}}'
output = pipe(code)
``` | {"language": ["java", "code"], "license": "apache-2.0", "widget": [{"text": "public [MASK] isOdd(Integer num){if (num % 2 == 0) {return \"even\";} else {return \"odd\";}}"}]} | CAUKiel/JavaBERT-uncased | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"java",
"code"
] | TAGS
#transformers #pytorch #safetensors #bert #fill-mask #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ## JavaBERT
A BERT-like model pretrained on Java software code.
### Training Data
The model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A 'bert-base-uncased' tokenizer is used by this model.
### Training Objective
A MLM (Masked Language Model) objective was used to train this model.
### Usage
| [
"## JavaBERT\nA BERT-like model pretrained on Java software code.",
"### Training Data\nThe model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A tokenizer is used by this model.",
"### Training Objective\nA MLM (Masked Language Model) objective was used to train this model.",
"### Usage"
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## JavaBERT\nA BERT-like model pretrained on Java software code.",
"### Training Data\nThe model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A tokenizer is used by this model.",
"### Training Objective\nA MLM (Masked Language Model) objective was used to train this model.",
"### Usage"
] | [
40,
17,
37,
21,
4
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n## JavaBERT\nA BERT-like model pretrained on Java software code.### Training Data\nThe model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A tokenizer is used by this model.### Training Objective\nA MLM (Masked Language Model) objective was used to train this model.### Usage"
] |
fill-mask | transformers |
# Model Card for JavaBERT
A BERT-like model pretrained on Java software code.
# Model Details
## Model Description
A BERT-like model pretrained on Java software code.
- **Developed by:** Christian-Albrechts-University of Kiel (CAUKiel)
- **Shared by [Optional]:** Hugging Face
- **Model type:** Fill-Mask
- **Language(s) (NLP):** en
- **License:** Apache-2.0
- **Related Models:** A version of this model using an uncased tokenizer is available at [CAUKiel/JavaBERT-uncased](https://huggingface.co/CAUKiel/JavaBERT-uncased).
- **Parent Model:** BERT
- **Resources for more information:**
- [Associated Paper](https://arxiv.org/pdf/2110.10404.pdf)
# Uses
## Direct Use
Fill-Mask
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A ```bert-base-cased``` tokenizer is used by this model.
## Training Procedure
### Training Objective
A MLM (Masked Language Model) objective was used to train this model.
### Preprocessing
More information needed.
### Speeds, Sizes, Times
More information needed.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed.
### Factors
### Metrics
More information needed.
## Results
More information needed.
# Model Examination
More information needed.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed.
- **Hours used:** More information needed.
- **Cloud Provider:** More information needed.
- **Compute Region:** More information needed.
- **Carbon Emitted:** More information needed.
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed.
## Compute Infrastructure
More information needed.
### Hardware
More information needed.
### Software
More information needed.
# Citation
**BibTeX:**
More information needed.
**APA:**
More information needed.
# Glossary [optional]
More information needed.
# More Information [optional]
More information needed.
# Model Card Authors [optional]
Christian-Albrechts-University of Kiel (CAUKiel) in collaboration with Ezi Ozoani and the team at Hugging Face
# Model Card Contact
More information needed.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import pipeline
pipe = pipeline('fill-mask', model='CAUKiel/JavaBERT')
# Example Java snippet from this model card's widget; use '[MASK]' to mask tokens/words in the code.
code = 'public [MASK] isOdd(Integer num) {if (num % 2 == 0) {return "even";} else {return "odd";}}'
output = pipe(code)
```
</details>
| {"language": ["code"], "license": "apache-2.0", "widget": [{"text": "public [MASK] isOdd(Integer num) {if (num % 2 == 0) {return \"even\";} else {return \"odd\";}}"}]} | CAUKiel/JavaBERT | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"code",
"arxiv:2110.10404",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2110.10404",
"1910.09700"
] | [
"code"
] | TAGS
#transformers #pytorch #safetensors #bert #fill-mask #code #arxiv-2110.10404 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for JavaBERT
A BERT-like model pretrained on Java software code.
# Model Details
## Model Description
A BERT-like model pretrained on Java software code.
- Developed by: Christian-Albrechts-University of Kiel (CAUKiel)
- Shared by [Optional]: Hugging Face
- Model type: Fill-Mask
- Language(s) (NLP): en
- License: Apache-2.0
- Related Models: A version of this model using an uncased tokenizer is available at CAUKiel/JavaBERT-uncased.
- Parent Model: BERT
- Resources for more information:
- Associated Paper
# Uses
## Direct Use
Fill-Mask
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A 'bert-base-cased' tokenizer is used by this model.
## Training Procedure
### Training Objective
A MLM (Masked Language Model) objective was used to train this model.
### Preprocessing
More information needed.
### Speeds, Sizes, Times
More information needed.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed.
### Factors
### Metrics
More information needed.
## Results
More information needed.
# Model Examination
More information needed.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: More information needed.
- Hours used: More information needed.
- Cloud Provider: More information needed.
- Compute Region: More information needed.
- Carbon Emitted: More information needed.
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed.
## Compute Infrastructure
More information needed.
### Hardware
More information needed.
### Software
More information needed.
BibTeX:
More information needed.
APA:
More information needed.
# Glossary [optional]
More information needed.
# More Information [optional]
More information needed.
# Model Card Authors [optional]
Christian-Albrechts-University of Kiel (CAUKiel) in collaboration with Ezi Ozoani and the team at Hugging Face
# Model Card Contact
More information needed.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
</details>
| [
"# Model Card for JavaBERT\n \nA BERT-like model pretrained on Java software code.",
"# Model Details",
"## Model Description\n \nA BERT-like model pretrained on Java software code.\n \n- Developed by: Christian-Albrechts-University of Kiel (CAUKiel)\n- Shared by [Optional]: Hugging Face\n- Model type: Fill-Mask\n- Language(s) (NLP): en\n- License: Apache-2.0\n- Related Models: A version of this model using an uncased tokenizer is available at CAUKiel/JavaBERT-uncased.\n - Parent Model: BERT\n- Resources for more information: \n - Associated Paper",
"# Uses",
"## Direct Use\n \nFill-Mask",
"## Downstream Use [Optional]\n \nMore information needed.",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n{ see paper= word something)",
"# Training Details",
"## Training Data\nThe model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A tokenizer is used by this model.",
"## Training Procedure",
"### Training Objective\nA MLM (Masked Language Model) objective was used to train this model.",
"### Preprocessing\n \nMore information needed.",
"### Speeds, Sizes, Times\n \nMore information needed.",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\nMore information needed.",
"### Factors",
"### Metrics\n \nMore information needed.",
"## Results \nMore information needed.",
"# Model Examination\n \nMore information needed.",
"# Environmental Impact\n \n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed.\n- Hours used: More information needed.\n- Cloud Provider: More information needed.\n- Compute Region: More information needed.\n- Carbon Emitted: More information needed.",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n \nMore information needed.",
"## Compute Infrastructure\n \nMore information needed.",
"### Hardware\n \nMore information needed.",
"### Software\n \nMore information needed.\n \nBibTeX:\n \nMore information needed.\n \nAPA:\n \nMore information needed.",
"# Glossary [optional]\nMore information needed.",
"# More Information [optional]\n \nMore information needed.",
"# Model Card Authors [optional]\n \nChristian-Albrechts-University of Kiel (CAUKiel) in collaboration with Ezi Ozoani and the team at Hugging Face",
"# Model Card Contact\n \nMore information needed.",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n \n \n</details>"
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #code #arxiv-2110.10404 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for JavaBERT\n \nA BERT-like model pretrained on Java software code.",
"# Model Details",
"## Model Description\n \nA BERT-like model pretrained on Java software code.\n \n- Developed by: Christian-Albrechts-University of Kiel (CAUKiel)\n- Shared by [Optional]: Hugging Face\n- Model type: Fill-Mask\n- Language(s) (NLP): en\n- License: Apache-2.0\n- Related Models: A version of this model using an uncased tokenizer is available at CAUKiel/JavaBERT-uncased.\n - Parent Model: BERT\n- Resources for more information: \n - Associated Paper",
"# Uses",
"## Direct Use\n \nFill-Mask",
"## Downstream Use [Optional]\n \nMore information needed.",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n{ see paper= word something)",
"# Training Details",
"## Training Data\nThe model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A tokenizer is used by this model.",
"## Training Procedure",
"### Training Objective\nA MLM (Masked Language Model) objective was used to train this model.",
"### Preprocessing\n \nMore information needed.",
"### Speeds, Sizes, Times\n \nMore information needed.",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\nMore information needed.",
"### Factors",
"### Metrics\n \nMore information needed.",
"## Results \nMore information needed.",
"# Model Examination\n \nMore information needed.",
"# Environmental Impact\n \n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed.\n- Hours used: More information needed.\n- Cloud Provider: More information needed.\n- Compute Region: More information needed.\n- Carbon Emitted: More information needed.",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n \nMore information needed.",
"## Compute Infrastructure\n \nMore information needed.",
"### Hardware\n \nMore information needed.",
"### Software\n \nMore information needed.\n \nBibTeX:\n \nMore information needed.\n \nAPA:\n \nMore information needed.",
"# Glossary [optional]\nMore information needed.",
"# More Information [optional]\n \nMore information needed.",
"# Model Card Authors [optional]\n \nChristian-Albrechts-University of Kiel (CAUKiel) in collaboration with Ezi Ozoani and the team at Hugging Face",
"# Model Card Contact\n \nMore information needed.",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n \n \n</details>"
] | [
63,
19,
3,
111,
2,
7,
11,
25,
70,
40,
3,
36,
4,
21,
11,
12,
2,
9,
9,
4,
9,
7,
7,
68,
6,
10,
8,
8,
23,
10,
10,
34,
8,
36
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #code #arxiv-2110.10404 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for JavaBERT\n \nA BERT-like model pretrained on Java software code.# Model Details## Model Description\n \nA BERT-like model pretrained on Java software code.\n \n- Developed by: Christian-Albrechts-University of Kiel (CAUKiel)\n- Shared by [Optional]: Hugging Face\n- Model type: Fill-Mask\n- Language(s) (NLP): en\n- License: Apache-2.0\n- Related Models: A version of this model using an uncased tokenizer is available at CAUKiel/JavaBERT-uncased.\n - Parent Model: BERT\n- Resources for more information: \n - Associated Paper# Uses## Direct Use\n \nFill-Mask## Downstream Use [Optional]\n \nMore information needed.## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.# Bias, Risks, and Limitations\n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n{ see paper= word something)# Training Details## Training Data\nThe model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A tokenizer is used by this model.## Training Procedure### Training Objective\nA MLM (Masked Language Model) objective was used to train this model.### Preprocessing\n \nMore information needed.### Speeds, Sizes, Times\n \nMore information needed.# Evaluation## Testing Data, Factors & Metrics### Testing Data\nMore information needed.### Factors### Metrics\n \nMore information needed.## Results \nMore information needed.# Model Examination\n \nMore information needed.# Environmental Impact\n \n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed.\n- Hours used: More information needed.\n- Cloud Provider: More information needed.\n- Compute Region: More information needed.\n- Carbon Emitted: More information needed.# Technical Specifications [optional]## Model Architecture and Objective\n \nMore information needed.## Compute Infrastructure\n \nMore information needed.### Hardware\n \nMore information needed.### Software\n \nMore information needed.\n \nBibTeX:\n \nMore information needed.\n \nAPA:\n \nMore information needed.# Glossary [optional]\nMore information needed.# More Information [optional]\n \nMore information needed.# Model Card Authors [optional]\n \nChristian-Albrechts-University of Kiel (CAUKiel) in collaboration with Ezi Ozoani and the team at Hugging Face# Model Card Contact\n \nMore information needed.# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n \n \n</details>"
] |
translation | transformers | This model translates from English to Khmer.
It is the pure fine-tuned version of the MarianMT en-zh model.
This is the result after 30 epochs of pure fine-tuning on the Khmer language.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained English-Khmer model available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/en-km")
tokenizer = AutoTokenizer.from_pretrained("CLAck/en-km")
# Download a tokenizer that can tokenize English, since the model's tokenizer no longer knows how to do it
# We used the one coming from the initial model
# This tokenizer is used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2khm>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2khm> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
``` | {"tags": ["translation"]} | CLAck/en-km | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #autotrain_compatible #endpoints_compatible #region-us
| This model translates from English to Khmer.
It is the pure fine-tuned version of the MarianMT en-zh model.
This is the result after 30 epochs of pure fine-tuning on the Khmer language.
### Example
| [
"### Example"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #autotrain_compatible #endpoints_compatible #region-us \n",
"### Example"
] | [
32,
4
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #autotrain_compatible #endpoints_compatible #region-us \n### Example"
] |
translation | transformers |
This is a fine-tuned version of a MarianMT model pretrained on English-Chinese. The target language pair is English-Vietnamese.
The first phase of training (mixed) is performed on a dataset containing both English-Chinese and English-Vietnamese sentences.
The second phase of training (pure) is performed on a dataset containing only English-Vietnamese sentences.
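A rough sketch of how the two-phase schedule described above could be set up with the Seq2SeqTrainer API. The dataset objects, batch size, epoch count, and output directories below are placeholders, the special-token handling (e.g. `<2vi>`) from the example further down is omitted, and the actual training script used for this model may differ.

```
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

# Start from the English-Chinese MarianMT checkpoint mentioned above
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-zh")
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-zh")
collator = DataCollatorForSeq2Seq(tokenizer, model=model)

def finetune(train_dataset, output_dir, epochs=5):
    # One fine-tuning phase; `train_dataset` is assumed to be already tokenized
    # into input_ids/labels for the sentence pairs of that phase.
    args = Seq2SeqTrainingArguments(output_dir=output_dir, num_train_epochs=epochs,
                                    per_device_train_batch_size=16, predict_with_generate=True)
    trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=train_dataset,
                             tokenizer=tokenizer, data_collator=collator)
    trainer.train()

# Phase 1 (mixed): finetune(mixed_en_zh_vi_dataset, "mixed-phase")
# Phase 2 (pure):  finetune(pure_en_vi_dataset, "pure-phase")
```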
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for English-Vietnamese available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/en-vi")
tokenizer = AutoTokenizer.from_pretrained("CLAck/en-vi")
# Download a tokenizer that can tokenize English, since the model's tokenizer no longer knows how to do it
# We used the one coming from the initial model
# This tokenizer is used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2vi>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2vi> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
MIXED
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 26.2407 |
| 2.0 | 32.6016 |
| 3.0 | 35.4060 |
| 4.0 | 36.6737 |
| 5.0 | 37.3774 |
PURE
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 37.3169 |
| 2.0 | 37.4407 |
| 3.0 | 37.6696 |
| 4.0 | 37.8765 |
| 5.0 | 38.0105 |
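The Bleu scores above are computed with sacreBLEU (listed under `metrics` in this card's metadata). Below is a minimal, self-contained sketch of the metric call; the hypothesis and reference strings are placeholders and not part of the original evaluation setup.
```
import sacrebleu
# Placeholder model outputs and one reference stream parallel to the hypotheses
hypotheses = ["translated sentence one", "translated sentence two"]
references = [["reference translation one", "reference translation two"]]
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)
```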
| {"language": ["en", "vi"], "license": "apache-2.0", "tags": ["translation"], "datasets": ["ALT"], "metrics": ["sacrebleu"]} | CLAck/en-vi | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"vi",
"dataset:ALT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"vi"
] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #en #vi #dataset-ALT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| This is a fine-tuned version of a MarianMT model pretrained on English-Chinese. The target language pair is English-Vietnamese.
The first phase of training (mixed) is performed on a dataset containing both English-Chinese and English-Vietnamese sentences.
The second phase of training (pure) is performed on a dataset containing only English-Vietnamese sentences.
### Example
### Training results
MIXED
PURE
| [
"### Example",
"### Training results\n\n\nMIXED\n\n\n\nPURE"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #en #vi #dataset-ALT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Example",
"### Training results\n\n\nMIXED\n\n\n\nPURE"
] | [
49,
4,
7
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #en #vi #dataset-ALT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Example### Training results\n\n\nMIXED\n\n\n\nPURE"
] |
translation | transformers |
This model is pretrained on the Chinese and Indonesian languages, and fine-tuned on the Indonesian language.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for English-Indonesian available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/indo-mixed")
tokenizer = AutoTokenizer.from_pretrained("CLAck/indo-mixed")
# Download a tokenizer that can tokenize English, since the model's tokenizer no longer knows how to do it
# We used the one coming from the initial model
# This tokenizer is used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2indo>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2indo> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
MIXED
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 24.2579 |
| 2.0 | 30.6287 |
| 3.0 | 34.4417 |
| 4.0 | 36.2577 |
| 5.0 | 37.3488 |
FINETUNING
| Epoch | Bleu |
|:-----:|:-------:|
| 6.0 | 34.1676 |
| 7.0 | 35.2320 |
| 8.0 | 36.7110 |
| 9.0 | 37.3195 |
| 10.0 | 37.9461 | | {"language": ["en", "id"], "license": "apache-2.0", "tags": ["translation"], "datasets": ["ALT"], "metrics": ["sacrebleu"]} | CLAck/indo-mixed | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"id",
"dataset:ALT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"id"
] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #en #id #dataset-ALT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| This model is pretrained on the Chinese and Indonesian languages, and fine-tuned on the Indonesian language.
### Example
### Training results
MIXED
FINETUNING
| [
"### Example",
"### Training results\n\n\nMIXED\n\n\n\nFINETUNING"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #en #id #dataset-ALT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Example",
"### Training results\n\n\nMIXED\n\n\n\nFINETUNING"
] | [
49,
4,
9
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #en #id #dataset-ALT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Example### Training results\n\n\nMIXED\n\n\n\nFINETUNING"
] |
translation | transformers | Pure fine-tuning of the MarianMT en-zh model on the Indonesian language
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for English-Indonesian available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/indo-pure")
tokenizer = AutoTokenizer.from_pretrained("CLAck/indo-pure")
# Download a tokenizer that can tokenize English, since the model's tokenizer no longer knows how to do it
# We used the one coming from the initial model
# This tokenizer is used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2indo>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2indo> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 15.9336 |
| 2.0 | 28.0175 |
| 3.0 | 31.6603 |
| 4.0 | 33.9151 |
| 5.0 | 35.0472 |
| 6.0 | 35.8469 |
| 7.0 | 36.1180 |
| 8.0 | 36.6018 |
| 9.0 | 37.1973 |
| 10.0 | 37.2738 | | {"language": ["en", "id"], "license": "apache-2.0", "tags": ["translation"], "datasets": ["ALT"], "metrics": ["sacrebleu"]} | CLAck/indo-pure | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"id",
"dataset:ALT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"id"
] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #en #id #dataset-ALT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| Pure fine-tuning of the MarianMT en-zh model on the Indonesian language
### Example
### Training results
| [
"### Example",
"### Training results"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #en #id #dataset-ALT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Example",
"### Training results"
] | [
49,
4,
5
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #en #id #dataset-ALT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Example### Training results"
] |
translation | transformers |
This is a fine-tuned version of a MarianMT model pretrained on Chinese-English. The target language pair is Vietnamese-English.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for Vietnamese-English available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/vi-en")
tokenizer = AutoTokenizer.from_pretrained("CLAck/vi-en")
sentence = "Con mèo ở trên bàn"  # example Vietnamese input ("The cat is on the table")
# This token is needed to identify the source language
input_sentence = "<2vi> " + sentence
translated = model.generate(**tokenizer(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 21.3180 |
| 2.0 | 26.8012 |
| 3.0 | 29.3578 |
| 4.0 | 31.5178 |
| 5.0 | 32.8740 |
| {"language": ["en", "vi"], "license": "apache-2.0", "tags": ["translation"], "datasets": ["ALT"], "metrics": ["sacrebleu"]} | CLAck/vi-en | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"vi",
"dataset:ALT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"vi"
] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #en #vi #dataset-ALT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| This is a fine-tuned version of a MarianMT model pretrained on Chinese-English. The target language pair is Vietnamese-English.
### Example
### Training results
| [
"### Example",
"### Training results"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #en #vi #dataset-ALT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Example",
"### Training results"
] | [
49,
4,
5
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #en #vi #dataset-ALT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Example### Training results"
] |
fill-mask | transformers |
# MedRoBERTa.nl
## Description
This model is a RoBERTa-based model pre-trained from scratch on Dutch hospital notes sourced from Electronic Health Records. The model is not fine-tuned. All code used for the creation of MedRoBERTa.nl can be found at https://github.com/cltl-students/verkijk_stella_rma_thesis_dutch_medical_language_model.
## Intended use
The model can be fine-tuned on any type of task. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch.
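As an illustration, the pretrained model can also be queried directly through the `fill-mask` pipeline of the transformers library. This is only a minimal sketch: the example sentence is made up, and `<mask>` is assumed to be the standard RoBERTa mask token.
```
from transformers import pipeline
# Load the (not fine-tuned) model for masked-token prediction
fill_mask = pipeline('fill-mask', model='CLTL/MedRoBERTa.nl')
# Hypothetical Dutch clinical-style sentence with one masked token
print(fill_mask('De patiënt werd opgenomen op de <mask>.'))
```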
## Data
The model was trained on nearly 10 million hospital notes from the Amsterdam University Medical Centres. The training data was anonymized before starting the pre-training procedure.
## Privacy
By anonymizing the training data we made sure the model did not learn any representative associations linked to names. Apart from the training data, the model's vocabulary was also anonymized. This ensures that the model can not predict any names in the generative fill-mask task.
## Authors
Stella Verkijk, Piek Vossen
## Reference
Paper: Verkijk, S. & Vossen, P. (2022) MedRoBERTa.nl: A Language Model for Dutch Electronic Health Records. Computational Linguistics in the Netherlands Journal, 11. | {"language": "nl", "license": "mit"} | CLTL/MedRoBERTa.nl | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"nl",
"doi:10.57967/hf/0960",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #roberta #fill-mask #nl #doi-10.57967/hf/0960 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# URL
## Description
This model is a RoBERTa-based model pre-trained from scratch on Dutch hospital notes sourced from Electronic Health Records. The model is not fine-tuned. All code used for the creation of URL can be found at URL
## Intended use
The model can be fine-tuned on any type of task. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch.
## Data
The model was trained on nearly 10 million hospital notes from the Amsterdam University Medical Centres. The training data was anonymized before starting the pre-training procedure.
## Privacy
By anonymizing the training data we made sure the model did not learn any representative associations linked to names. Apart from the training data, the model's vocabulary was also anonymized. This ensures that the model can not predict any names in the generative fill-mask task.
## Authors
Stella Verkijk, Piek Vossen
## Reference
Paper: Verkijk, S. & Vossen, P. (2022) URL: A Language Model for Dutch Electronic Health Records. Computational Linguistics in the Netherlands Journal, 11. | [
"# URL",
"## Description\nThis model is a RoBERTa-based model pre-trained from scratch on Dutch hospital notes sourced from Electronic Health Records. The model is not fine-tuned. All code used for the creation of URL can be found at URL",
"## Intended use\nThe model can be fine-tuned on any type of task. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch.",
"## Data\nThe model was trained on nearly 10 million hospital notes from the Amsterdam University Medical Centres. The training data was anonymized before starting the pre-training procedure.",
"## Privacy\nBy anonymizing the training data we made sure the model did not learn any representative associations linked to names. Apart from the training data, the model's vocabulary was also anonymized. This ensures that the model can not predict any names in the generative fill-mask task.",
"## Authors\nStella Verkijk, Piek Vossen",
"## Reference\nPaper: Verkijk, S. & Vossen, P. (2022) URL: A Language Model for Dutch Electronic Health Records. Computational Linguistics in the Netherlands Journal, 11."
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #nl #doi-10.57967/hf/0960 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# URL",
"## Description\nThis model is a RoBERTa-based model pre-trained from scratch on Dutch hospital notes sourced from Electronic Health Records. The model is not fine-tuned. All code used for the creation of URL can be found at URL",
"## Intended use\nThe model can be fine-tuned on any type of task. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch.",
"## Data\nThe model was trained on nearly 10 million hospital notes from the Amsterdam University Medical Centres. The training data was anonymized before starting the pre-training procedure.",
"## Privacy\nBy anonymizing the training data we made sure the model did not learn any representative associations linked to names. Apart from the training data, the model's vocabulary was also anonymized. This ensures that the model can not predict any names in the generative fill-mask task.",
"## Authors\nStella Verkijk, Piek Vossen",
"## Reference\nPaper: Verkijk, S. & Vossen, P. (2022) URL: A Language Model for Dutch Electronic Health Records. Computational Linguistics in the Netherlands Journal, 11."
] | [
49,
3,
49,
44,
36,
63,
13,
43
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #nl #doi-10.57967/hf/0960 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# URL## Description\nThis model is a RoBERTa-based model pre-trained from scratch on Dutch hospital notes sourced from Electronic Health Records. The model is not fine-tuned. All code used for the creation of URL can be found at URL## Intended use\nThe model can be fine-tuned on any type of task. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch.## Data\nThe model was trained on nearly 10 million hospital notes from the Amsterdam University Medical Centres. The training data was anonymized before starting the pre-training procedure.## Privacy\nBy anonymizing the training data we made sure the model did not learn any representative associations linked to names. Apart from the training data, the model's vocabulary was also anonymized. This ensures that the model can not predict any names in the generative fill-mask task.## Authors\nStella Verkijk, Piek Vossen## Reference\nPaper: Verkijk, S. & Vossen, P. (2022) URL: A Language Model for Dutch Electronic Health Records. Computational Linguistics in the Netherlands Journal, 11."
] |
token-classification | transformers |
# Early-modern Dutch NER (General Letters)
## Description
This is a fine-tuned NER model for early-modern Dutch United East India Company (VOC) letters based on XLM-R_base [(Conneau et al., 2020)](https://aclanthology.org/2020.acl-main.747/). The model identifies *locations*, *persons*, *organisations*, but also *ships* as well as derived forms of locations and religions.
## Intended uses and limitations
This model was fine-tuned (trained, validated and tested) on a single source of data, the General Letters (Generale Missiven). These letters span a large variety of Dutch, as they cover the largest part of the 17th and 18th centuries, and have been extended with editorial notes between 1960 and 2017. As the model was only fine-tuned on this data however, it may perform less well on other texts from the same period.
## How to use
The model can run on raw text through the *token-classification* pipeline:
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("CLTL/gm-ner-xlmrbase")
model = AutoModelForTokenClassification.from_pretrained("CLTL/gm-ner-xlmrbase")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Batavia heeft om advies gevraagd."
ner_results = nlp(example)
print(ner_results)
```
This outputs a list of entities with their character offsets in the input text:
```
[{'entity': 'B-LOC', 'score': 0.99739265, 'index': 1, 'word': '▁Bata', 'start': 0, 'end': 4}, {'entity': 'I-LOC', 'score': 0.5373179, 'index': 2, 'word': 'via', 'start': 4, 'end': 7}]
```
## Training data and tagset
The model was fine-tuned on the General Letters [GM-NER](https://github.com/cltl/voc-missives/tree/master/data/ner/datasplit_all_standard) dataset, with the following tagset:
| tag | description | notes |
| --- | ----------- | ----- |
| LOC | locations | |
| LOCderiv | derived forms of locations | by derivation, e.g. *Bandanezen*, or composition, e.g. *Javakoffie* |
| ORG | organisations | includes forms derived by composition, e.g. *Compagnieszaken* |
| PER | persons | |
| RELderiv | forms related to religion | merges religion names (*Christendom*), derived forms (*christenen*) and composed forms (*Christen-orangkay*) |
| SHP | ships | |
The base text for this dataset is OCR text that has been partially corrected. The text is clean overall but errors remain.
## Training procedure
The model was fine-tuned with [xlm-roberta-base](https://huggingface.co/xlm-roberta-base), using [this script](https://github.com/huggingface/transformers/blob/master/examples/legacy/token-classification/run_ner.py).
Non-default training parameters are:
* training batch size: 16
* max sequence length: 256
* number of epochs: 4 -- loading the best checkpoint model by loss at the end, with checkpoints every 200 steps
* (seed: 1)
## Evaluation
### Metric
* entity-level F1
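Entity-level F1 scores complete entity spans rather than individual tokens. A minimal sketch with the seqeval library is given below as an illustration of the metric; it is not necessarily the evaluation script used for the reported numbers.
```
from seqeval.metrics import f1_score
# Two gold entities (one LOC, one SHP); the prediction recovers only the LOC span
y_true = [['B-LOC', 'I-LOC', 'O', 'B-SHP']]
y_pred = [['B-LOC', 'I-LOC', 'O', 'O']]
print(f1_score(y_true, y_pred))  # ~0.67 (precision 1.0, recall 0.5)
```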
### Results
| tag | F1 |
| --- | --- |
| overall | 92.7 |
| LOC | 95.8 |
| LOCderiv | 92.7 |
| ORG | 92.5 |
| PER | 86.2 |
| RELderiv | 90.7 |
| SHP | 81.6 |
## Reference
The model and fine-tuning data presented here were developed as part of:
```bibtex
@inproceedings{arnoult-etal-2021-batavia,
title = "Batavia asked for advice. Pretrained language models for Named Entity Recognition in historical texts.",
author = "Arnoult, Sophie I. and
Petram, Lodewijk and
Vossen, Piek",
booktitle = "Proceedings of the 5th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic (online)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.latechclfl-1.3",
pages = "21--30"
}
```
| {"language": "nl", "license": "apache-2.0", "tags": ["dighum"], "pipeline_tag": "token-classification"} | CLTL/gm-ner-xlmrbase | null | [
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"dighum",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #tf #xlm-roberta #token-classification #dighum #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| Early-modern Dutch NER (General Letters)
========================================
Description
-----------
This is a fine-tuned NER model for early-modern Dutch United East India Company (VOC) letters based on XLM-R\_base (Conneau et al., 2020). The model identifies *locations*, *persons*, *organisations*, but also *ships* as well as derived forms of locations and religions.
Intended uses and limitations
-----------------------------
This model was fine-tuned (trained, validated and tested) on a single source of data, the General Letters (Generale Missiven). These letters span a large variety of Dutch, as they cover the largest part of the 17th and 18th centuries, and have been extended with editorial notes between 1960 and 2017. As the model was only fine-tuned on this data however, it may perform less well on other texts from the same period.
How to use
----------
The model can run on raw text through the *token-classification* pipeline:
This outputs a list of entities with their character offsets in the input text:
Training data and tagset
------------------------
The model was fine-tuned on the General Letters GM-NER dataset, with the following tagset:
tag: LOC, description: locations, notes:
tag: LOCderiv, description: derived forms of locations, notes: by derivation, e.g. *Bandanezen*, or composition, e.g. *Javakoffie*
tag: ORG, description: organisations, notes: includes forms derived by composition, e.g. *Compagnieszaken*
tag: PER, description: persons, notes:
tag: RELderiv, description: forms related to religion, notes: merges religion names (*Christendom*), derived forms (*christenen*) and composed forms (*Christen-orangkay*)
tag: SHP, description: ships, notes:
The base text for this dataset is OCR text that has been partially corrected. The text is clean overall but errors remain.
Training procedure
------------------
The model was fine-tuned with xlm-roberta-base, using this script.
Non-default training parameters are:
* training batch size: 16
* max sequence length: 256
* number of epochs: 4 -- loading the best checkpoint model by loss at the end, with checkpoints every 200 steps
* (seed: 1)
Evaluation
----------
### Metric
* entity-level F1
### Results
Reference
---------
The model and fine-tuning data presented here were developed as part of:
| [
"### Metric\n\n\n* entity-level F1",
"### Results\n\n\n\nReference\n---------\n\n\nThe model and fine-tuning data presented here were developed as part of:"
] | [
"TAGS\n#transformers #pytorch #tf #xlm-roberta #token-classification #dighum #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Metric\n\n\n* entity-level F1",
"### Results\n\n\n\nReference\n---------\n\n\nThe model and fine-tuning data presented here were developed as part of:"
] | [
47,
9,
29
] | [
"TAGS\n#transformers #pytorch #tf #xlm-roberta #token-classification #dighum #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Metric\n\n\n* entity-level F1### Results\n\n\n\nReference\n---------\n\n\nThe model and fine-tuning data presented here were developed as part of:"
] |
text-classification | transformers |
# A-PROOF ICF-domains Classification
## Description
A fine-tuned multi-label classification model that detects 9 [WHO-ICF](https://www.who.int/standards/classifications/international-classification-of-functioning-disability-and-health) domains in clinical text in Dutch. The model is based on a pre-trained Dutch medical language model ([link to be added]()), a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC.
## ICF domains
The model can detect 9 domains, which were chosen due to their relevance to recovery from COVID-19:
ICF code | Domain | name in repo
---|---|---
b440 | Respiration functions | ADM
b140 | Attention functions | ATT
d840-d859 | Work and employment | BER
b1300 | Energy level | ENR
d550 | Eating | ETN
d450 | Walking | FAC
b455 | Exercise tolerance functions | INS
b530 | Weight maintenance functions | MBW
b152 | Emotional functions | STM
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
from simpletransformers.classification import MultiLabelClassificationModel
model = MultiLabelClassificationModel(
'roberta',
'CLTL/icf-domains',
use_cuda=False,
)
example = 'Nu sinds 5-6 dagen progressieve benauwdheidsklachten (bij korte stukken lopen al kortademig), terwijl dit eerder niet zo was.'
predictions, raw_outputs = model.predict([example])
```
The predictions look like this:
```
[[1, 0, 0, 0, 0, 1, 1, 0, 0]]
```
The indices of the multi-label stand for:
```
[ADM, ATT, BER, ENR, ETN, FAC, INS, MBW, STM]
```
In other words, the above prediction corresponds to assigning the labels ADM, FAC and INS to the example sentence.
The raw outputs look like this:
```
[[0.51907885 0.00268032 0.0030862 0.03066113 0.00616694 0.64720929
0.67348498 0.0118863 0.0046311 ]]
```
For this model, the threshold at which the prediction for a label flips from 0 to 1 is **0.5**.
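As an illustration, the raw scores can be mapped back to domain labels with this threshold. The sketch below simply reuses the label order and the (rounded) raw outputs shown above.
```
import numpy as np
labels = ['ADM', 'ATT', 'BER', 'ENR', 'ETN', 'FAC', 'INS', 'MBW', 'STM']
raw_outputs = np.array([[0.519, 0.003, 0.003, 0.031, 0.006, 0.647, 0.673, 0.012, 0.005]])
# A label is assigned when its raw score reaches the 0.5 threshold
assigned = [label for label, score in zip(labels, raw_outputs[0]) if score >= 0.5]
print(assigned)  # ['ADM', 'FAC', 'INS']
```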
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
- Threshold: 0.5
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
### Sentence-level
| | ADM | ATT | BER | ENR | ETN | FAC | INS | MBW | STM
|---|---|---|---|---|---|---|---|---|---
precision | 0.98 | 0.98 | 0.56 | 0.96 | 0.92 | 0.84 | 0.89 | 0.79 | 0.70
recall | 0.49 | 0.41 | 0.29 | 0.57 | 0.49 | 0.71 | 0.26 | 0.62 | 0.75
F1-score | 0.66 | 0.58 | 0.35 | 0.72 | 0.63 | 0.76 | 0.41 | 0.70 | 0.72
support | 775 | 39 | 54 | 160 | 382 | 253 | 287 | 125 | 181
### Note-level
| | ADM | ATT | BER | ENR | ETN | FAC | INS | MBW | STM
|---|---|---|---|---|---|---|---|---|---
precision | 1.0 | 1.0 | 0.66 | 0.96 | 0.95 | 0.84 | 0.95 | 0.87 | 0.80
recall | 0.89 | 0.56 | 0.44 | 0.70 | 0.72 | 0.89 | 0.46 | 0.87 | 0.87
F1-score | 0.94 | 0.71 | 0.50 | 0.81 | 0.82 | 0.86 | 0.61 | 0.87 | 0.84
support | 231 | 27 | 34 | 92 | 165 | 95 | 116 | 64 | 94
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD | {"language": "nl", "license": "mit", "pipeline_tag": "text-classification", "inference": false} | CLTL/icf-domains | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #roberta #text-classification #nl #license-mit #region-us
| A-PROOF ICF-domains Classification
==================================
Description
-----------
A fine-tuned multi-label classification model that detects 9 WHO-ICF domains in clinical text in Dutch. The model is based on a pre-trained Dutch medical language model (link to be added), a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC.
ICF domains
-----------
The model can detect 9 domains, which were chosen due to their relevance to recovery from COVID-19:
ICF code: b440, Domain: Respiration functions, name in repo: ADM
ICF code: b140, Domain: Attention functions, name in repo: ATT
ICF code: d840-d859, Domain: Work and employment, name in repo: BER
ICF code: b1300, Domain: Energy level, name in repo: ENR
ICF code: d550, Domain: Eating, name in repo: ETN
ICF code: d450, Domain: Walking, name in repo: FAC
ICF code: b455, Domain: Exercise tolerance functions, name in repo: INS
ICF code: b530, Domain: Weight maintenance functions, name in repo: MBW
ICF code: b152, Domain: Emotional functions, name in repo: STM
Intended uses and limitations
-----------------------------
* The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
* The model was fine-tuned with the Simple Transformers library. This library is based on Transformers but the model cannot be used directly with Transformers 'pipeline' and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
How to use
----------
To generate predictions with the model, use the Simple Transformers library:
The predictions look like this:
The indices of the multi-label stand for:
In other words, the above prediction corresponds to assigning the labels ADM, FAC and INS to the example sentence.
The raw outputs look like this:
For this model, the threshold at which the prediction for a label flips from 0 to 1 is 0.5.
Training data
-------------
* The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
* The annotation guidelines used for the project can be found here.
Training procedure
------------------
The default training parameters of Simple Transformers were used, including:
* Optimizer: AdamW
* Learning rate: 4e-5
* Num train epochs: 1
* Train batch size: 8
* Threshold: 0.5
Evaluation results
------------------
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
### Sentence-level
### Note-level
Authors and references
----------------------
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| [
"### Sentence-level",
"### Note-level\n\n\n\nAuthors and references\n----------------------",
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #region-us \n",
"### Sentence-level",
"### Note-level\n\n\n\nAuthors and references\n----------------------",
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
24,
6,
31,
12,
6
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #region-us \n### Sentence-level### Note-level\n\n\n\nAuthors and references\n----------------------### Authors\n\n\nJenia Kim, Piek Vossen### References\n\n\nTBD"
] |
text-classification | transformers |
# Regression Model for Respiration Functioning Levels (ICF b440)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing respiration functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about respiration functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with respiration, and/or respiratory rate is normal (EWS: 9-20).
3 | Shortness of breath in exercise (saturation ≥90), and/or respiratory rate is slightly increased (EWS: 21-30).
2 | Shortness of breath in rest (saturation ≥90), and/or respiratory rate is fairly increased (EWS: 31-35).
1 | Needs oxygen at rest or during exercise (saturation <90), and/or respiratory rate >35.
0 | Mechanical ventilation is needed.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-adm',
use_cuda=False,
)
example = 'Nu sinds 5-6 dagen progressieve benauwdheidsklachten (bij korte stukken lopen al kortademig), terwijl dit eerder niet zo was.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
2.26
```
The raw outputs look like this:
```
[[2.26074648]]
```
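Since sentences about respiration functions are first detected with the [icf-domains](https://huggingface.co/CLTL/icf-domains) classifier (see the description above), a combined two-step sketch could look as follows. This only reuses the APIs shown in both model cards and is meant as an illustration; ADM is the first label in the domains output.
```
import numpy as np
from simpletransformers.classification import ClassificationModel, MultiLabelClassificationModel
domains_model = MultiLabelClassificationModel('roberta', 'CLTL/icf-domains', use_cuda=False)
levels_model = ClassificationModel('roberta', 'CLTL/icf-levels-adm', use_cuda=False)
sentences = ['Nu sinds 5-6 dagen progressieve benauwdheidsklachten (bij korte stukken lopen al kortademig), terwijl dit eerder niet zo was.']
domain_preds, _ = domains_model.predict(sentences)
# Keep only the sentences flagged with the ADM domain (index 0) and assign them a level
adm_sentences = [s for s, pred in zip(sentences, domain_preds) if pred[0] == 1]
if adm_sentences:
    _, raw_outputs = levels_model.predict(adm_sentences)
    print(np.squeeze(raw_outputs))
```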
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.48 | 0.37
mean squared error | 0.55 | 0.34
root mean squared error | 0.74 | 0.58
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| {"language": "nl", "license": "mit", "pipeline_tag": "text-classification", "inference": false} | CLTL/icf-levels-adm | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:mit",
"autotrain_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us
| Regression Model for Respiration Functioning Levels (ICF b440)
==============================================================
Description
-----------
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing respiration functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about respiration functions in clinical text in Dutch, use the icf-domains classification model.
Functioning levels
------------------
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
Intended uses and limitations
-----------------------------
* The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
* The model was fine-tuned with the Simple Transformers library. This library is based on Transformers but the model cannot be used directly with Transformers 'pipeline' and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
How to use
----------
To generate predictions with the model, use the Simple Transformers library:
The prediction on the example is:
The raw outputs look like this:
Training data
-------------
* The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
* The annotation guidelines used for the project can be found here.
Training procedure
------------------
The default training parameters of Simple Transformers were used, including:
* Optimizer: AdamW
* Learning rate: 4e-5
* Num train epochs: 1
* Train batch size: 8
Evaluation results
------------------
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
mean absolute error: 0.48 (sentence-level), 0.37 (note-level)
mean squared error: 0.55 (sentence-level), 0.34 (note-level)
root mean squared error: 0.74 (sentence-level), 0.58 (note-level)
Authors and references
----------------------
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| [
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n",
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
29,
12,
6
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n### Authors\n\n\nJenia Kim, Piek Vossen### References\n\n\nTBD"
] |
text-classification | transformers |
# Regression Model for Attention Functioning Levels (ICF b140)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing attention functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about attention functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with concentrating / directing / holding / dividing attention.
3 | Slight problem with concentrating / directing / holding / dividing attention for a longer period of time or for complex tasks.
2 | Can concentrate / direct / hold / divide attention only for a short time.
1 | Can barely concentrate / direct / hold / divide attention.
0 | Unable to concentrate / direct / hold / divide attention.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
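If a value restricted to the 0-4 scale is needed downstream, an out-of-scale prediction can simply be clipped as a post-processing step; this is a choice made in the sketch below, not something the model does itself.
```
import numpy as np
raw_prediction = 4.2  # example raw regression output
level = float(np.clip(raw_prediction, 0, 4))
print(level)  # 4.0
```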
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-att',
use_cuda=False,
)
example = 'Snel afgeleid, moeite aandacht te behouden.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
2.89
```
The raw outputs look like this:
```
[[2.89226103]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.99 | 1.03
mean squared error | 1.35 | 1.47
root mean squared error | 1.16 | 1.21
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| {"language": "nl", "license": "mit", "pipeline_tag": "text-classification", "inference": false} | CLTL/icf-levels-att | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:mit",
"autotrain_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us
| Regression Model for Attention Functioning Levels (ICF b140)
============================================================
Description
-----------
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing attention functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about attention functions in clinical text in Dutch, use the icf-domains classification model.
Functioning levels
------------------
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
Intended uses and limitations
-----------------------------
* The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
* The model was fine-tuned with the Simple Transformers library. This library is based on Transformers but the model cannot be used directly with Transformers 'pipeline' and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
How to use
----------
To generate predictions with the model, use the Simple Transformers library:
The prediction on the example is:
The raw outputs look like this:
Training data
-------------
* The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
* The annotation guidelines used for the project can be found here.
Training procedure
------------------
The default training parameters of Simple Transformers were used, including:
* Optimizer: AdamW
* Learning rate: 4e-5
* Num train epochs: 1
* Train batch size: 8
Evaluation results
------------------
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
mean absolute error: 0.99 (sentence-level), 1.03 (note-level)
mean squared error: 1.35 (sentence-level), 1.47 (note-level)
root mean squared error: 1.16 (sentence-level), 1.21 (note-level)
Authors and references
----------------------
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| [
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n",
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
29,
12,
6
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n### Authors\n\n\nJenia Kim, Piek Vossen### References\n\n\nTBD"
] |
text-classification | transformers |
# Regression Model for Work and Employment Functioning Levels (ICF d840-d859)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing work and employment functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about work and employment functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | Can work/study fully (like when healthy).
3 | Can work/study almost fully.
2 | Can work/study only for about 50%, or can only work at home and cannot go to school / office.
1 | Work/study is severely limited.
0 | Cannot work/study.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-ber',
use_cuda=False,
)
example = 'Fysiek zwaar werk is niet mogelijk, maar administrative taken zou zij wel aan moeten kunnen.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
2.41
```
The raw outputs look like this:
```
[[2.40793037]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 1.56 | 1.49
mean squared error | 3.06 | 2.85
root mean squared error | 1.75 | 1.69
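The note-level figures are computed on predictions aggregated per clinical note. One simple way to aggregate sentence-level predictions is to average them per note, as in the sketch below; the exact aggregation behind the reported numbers is not specified in this card, so treat this as an illustration only.
```
import numpy as np
# Hypothetical sentence-level predictions grouped per note
note_predictions = {
    'note_001': [2.0, 3.0, 4.0],
    'note_002': [4.0, 3.5],
}
note_levels = {note: float(np.mean(scores)) for note, scores in note_predictions.items()}
print(note_levels)  # {'note_001': 3.0, 'note_002': 3.75}
```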
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| {"language": "nl", "license": "mit", "pipeline_tag": "text-classification", "inference": false} | CLTL/icf-levels-ber | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:mit",
"autotrain_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us
| Regression Model for Work and Employment Functioning Levels (ICF d840-d859)
===========================================================================
Description
-----------
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing work and employment functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about work and employment functions in clinical text in Dutch, use the icf-domains classification model.
Functioning levels
------------------
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
Intended uses and limitations
-----------------------------
* The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
* The model was fine-tuned with the Simple Transformers library. This library is based on Transformers but the model cannot be used directly with Transformers 'pipeline' and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
How to use
----------
To generate predictions with the model, use the Simple Transformers library:
The prediction on the example is:
The raw outputs look like this:
Training data
-------------
* The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
* The annotation guidelines used for the project can be found here.
Training procedure
------------------
The default training parameters of Simple Transformers were used, including:
* Optimizer: AdamW
* Learning rate: 4e-5
* Num train epochs: 1
* Train batch size: 8
Evaluation results
------------------
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
mean absolute error: 1.56 (sentence-level), 1.49 (note-level)
mean squared error: 3.06 (sentence-level), 2.85 (note-level)
root mean squared error: 1.75 (sentence-level), 1.69 (note-level)
Authors and references
----------------------
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| [
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n",
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
29,
12,
6
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n### Authors\n\n\nJenia Kim, Piek Vossen### References\n\n\nTBD"
] |
text-classification | transformers |
# Regression Model for Energy Levels (ICF b1300)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing energy level. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about energy level in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with the energy level.
3 | Slight fatigue that causes mild limitations.
2 | Moderate fatigue; the patient gets easily tired from light activities or needs a long time to recover after an activity.
1 | Severe fatigue; the patient is capable of very little.
0 | Very severe fatigue; unable to do anything and mostly lays in bed.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-enr',
use_cuda=False,
)
example = 'Al jaren extreme vermoeidheid overdag, valt overdag in slaap tijdens school- en werkactiviteiten en soms zelfs tijdens een gesprek.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.98
```
The raw outputs look like this:
```
[[1.97520316]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.48 | 0.43
mean squared error | 0.49 | 0.42
root mean squared error | 0.70 | 0.65
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| {"language": "nl", "license": "mit", "pipeline_tag": "text-classification", "inference": false} | CLTL/icf-levels-enr | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:mit",
"autotrain_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us
| Regression Model for Energy Levels (ICF b1300)
==============================================
Description
-----------
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing energy level. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about energy level in clinical text in Dutch, use the icf-domains classification model.
Functioning levels
------------------
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
Intended uses and limitations
-----------------------------
* The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
* The model was fine-tuned with the Simple Transformers library. This library is based on Transformers but the model cannot be used directly with Transformers 'pipeline' and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
How to use
----------
To generate predictions with the model, use the Simple Transformers library:
The prediction on the example is:
The raw outputs look like this:
Training data
-------------
* The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
* The annotation guidelines used for the project can be found here.
Training procedure
------------------
The default training parameters of Simple Transformers were used, including:
* Optimizer: AdamW
* Learning rate: 4e-5
* Num train epochs: 1
* Train batch size: 8
Evaluation results
------------------
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
mean absolute error: 0.48 (sentence-level), 0.43 (note-level)
mean squared error: 0.49 (sentence-level), 0.42 (note-level)
root mean squared error: 0.70 (sentence-level), 0.65 (note-level)
Authors and references
----------------------
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| [
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n",
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
29,
12,
6
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n### Authors\n\n\nJenia Kim, Piek Vossen### References\n\n\nTBD"
] |
text-classification | transformers |
# Regression Model for Eating Functioning Levels (ICF d550)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing eating functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about eating functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | Can eat independently (in culturally acceptable ways), good intake, eats according to her/his needs.
3 | Can eat independently but with adjustments, and/or somewhat reduced intake (>75% of her/his needs), and/or good intake can be achieved with proper advice.
2 | Reduced intake, and/or stimulus / feeding modules / nutrition drinks are needed (but not tube feeding / TPN).
1 | Intake is severely reduced (<50% of her/his needs), and/or tube feeding / TPN is needed.
0 | Cannot eat, and/or fully dependent on tube feeding / TPN.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-etn',
use_cuda=False,
)
example = 'Sondevoeding is geïndiceerd'  # "Tube feeding is indicated"
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
0.89
```
The raw outputs look like this:
```
[[0.8872931]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.59 | 0.50
mean squared error | 0.65 | 0.47
root mean squared error | 0.81 | 0.68
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| {"language": "nl", "license": "mit", "pipeline_tag": "text-classification", "inference": false} | CLTL/icf-levels-etn | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:mit",
"autotrain_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us
| Regression Model for Eating Functioning Levels (ICF d550)
=========================================================
Description
-----------
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing eating functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about eating functions in clinical text in Dutch, use the icf-domains classification model.
Functioning levels
------------------
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
Intended uses and limitations
-----------------------------
* The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
* The model was fine-tuned with the Simple Transformers library. This library is based on Transformers but the model cannot be used directly with Transformers 'pipeline' and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
How to use
----------
To generate predictions with the model, use the Simple Transformers library:
The prediction on the example is:
The raw outputs look like this:
Training data
-------------
* The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
* The annotation guidelines used for the project can be found here.
Training procedure
------------------
The default training parameters of Simple Transformers were used, including:
* Optimizer: AdamW
* Learning rate: 4e-5
* Num train epochs: 1
* Train batch size: 8
Evaluation results
------------------
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
mean absolute error: Sentence-level 0.59, Note-level 0.50
mean squared error: Sentence-level 0.65, Note-level 0.47
root mean squared error: Sentence-level 0.81, Note-level 0.68
Authors and references
----------------------
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| [
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n",
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
29,
12,
6
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n### Authors\n\n\nJenia Kim, Piek Vossen### References\n\n\nTBD"
] |
text-classification | transformers |
# Regression Model for Walking Functioning Levels (ICF d450)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing walking functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about walking functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
5 | Patient can walk independently anywhere: level surface, uneven surface, slopes, stairs.
4 | Patient can walk independently on level surface but requires help on stairs, inclines, uneven surface; or, patient can walk independently, but the walking is not fully normal.
3 | Patient requires verbal supervision for walking, without physical contact.
2 | Patient needs continuous or intermittent support of one person to help with balance and coordination.
1 | Patient needs firm continuous support from one person who helps carrying weight and with balance.
0 | Patient cannot walk or needs help from two or more people; or, patient walks on a treadmill.
The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-fac',
use_cuda=False,
)
example = 'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona'  # "can still climb stairs well, but fitness has declined considerably after corona"
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
4.2
```
The raw outputs look like this:
```
[[4.20903111]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.70 | 0.66
mean squared error | 0.91 | 0.93
root mean squared error | 0.95 | 0.96
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| {"language": "nl", "license": "mit", "pipeline_tag": "text-classification", "inference": false} | CLTL/icf-levels-fac | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:mit",
"autotrain_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us
| Regression Model for Walking Functioning Levels (ICF d450)
==========================================================
Description
-----------
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing walking functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about walking functions in clinical text in Dutch, use the icf-domains classification model.
Functioning levels
------------------
The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model.
Intended uses and limitations
-----------------------------
* The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
* The model was fine-tuned with the Simple Transformers library. This library is based on Transformers but the model cannot be used directly with Transformers 'pipeline' and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
How to use
----------
To generate predictions with the model, use the Simple Transformers library:
The prediction on the example is:
The raw outputs look like this:
Training data
-------------
* The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
* The annotation guidelines used for the project can be found here.
Training procedure
------------------
The default training parameters of Simple Transformers were used, including:
* Optimizer: AdamW
* Learning rate: 4e-5
* Num train epochs: 1
* Train batch size: 8
Evaluation results
------------------
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
mean absolute error: Sentence-level 0.70, Note-level 0.66
mean squared error: Sentence-level 0.91, Note-level 0.93
root mean squared error: Sentence-level 0.95, Note-level 0.96
Authors and references
----------------------
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| [
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n",
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
29,
12,
6
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n### Authors\n\n\nJenia Kim, Piek Vossen### References\n\n\nTBD"
] |
text-classification | transformers |
# Regression Model for Exercise Tolerance Functioning Levels (ICF b455)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing exercise tolerance functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about exercise tolerance functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
5 | MET>6. Can tolerate jogging, hard exercises, running, climbing stairs fast, sports.
4 | 4≤MET≤6. Can tolerate walking / cycling at a brisk pace, considerable effort (e.g. cycling from 16 km/h), heavy housework.
3 | 3≤MET<4. Can tolerate walking / cycling at a normal pace, gardening, exercises without equipment.
2 | 2≤MET<3. Can tolerate walking at a slow to moderate pace, grocery shopping, light housework.
1 | 1≤MET<2. Can tolerate sitting activities.
0 | 0≤MET<1. Can physically tolerate only recumbent activities.
The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-ins',
use_cuda=False,
)
example = 'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona'  # "can still climb stairs well, but fitness has declined considerably after corona"
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
3.13
```
The raw outputs look like this:
```
[[3.1300993]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.69 | 0.61
mean squared error | 0.80 | 0.64
root mean squared error | 0.89 | 0.80
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| {"language": "nl", "license": "mit", "pipeline_tag": "text-classification", "inference": false} | CLTL/icf-levels-ins | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:mit",
"autotrain_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us
| Regression Model for Exercise Tolerance Functioning Levels (ICF b455)
=====================================================================
Description
-----------
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing exercise tolerance functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about exercise tolerance functions in clinical text in Dutch, use the icf-domains classification model.
Functioning levels
------------------
The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model.
Intended uses and limitations
-----------------------------
* The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
* The model was fine-tuned with the Simple Transformers library. This library is based on Transformers but the model cannot be used directly with Transformers 'pipeline' and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
How to use
----------
To generate predictions with the model, use the Simple Transformers library:
The prediction on the example is:
The raw outputs look like this:
Training data
-------------
* The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
* The annotation guidelines used for the project can be found here.
Training procedure
------------------
The default training parameters of Simple Transformers were used, including:
* Optimizer: AdamW
* Learning rate: 4e-5
* Num train epochs: 1
* Train batch size: 8
Evaluation results
------------------
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
mean absolute error: Sentence-level 0.69, Note-level 0.61
mean squared error: Sentence-level 0.80, Note-level 0.64
root mean squared error: Sentence-level 0.89, Note-level 0.80
Authors and references
----------------------
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| [
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n",
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
29,
12,
6
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n### Authors\n\n\nJenia Kim, Piek Vossen### References\n\n\nTBD"
] |
text-classification | transformers |
# Regression Model for Weight Maintenance Functioning Levels (ICF b530)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing weight maintenance functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about weight maintenance functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | Healthy weight, no unintentional weight loss or gain, SNAQ 0 or 1.
3 | Some unintentional weight loss or gain, or lost a lot of weight but gained some of it back afterwards.
2 | Moderate unintentional weight loss or gain (more than 3 kg in the last month), SNAQ 2.
1 | Severe unintentional weight loss or gain (more than 6 kg in the last 6 months), SNAQ ≥ 3.
0 | Severe unintentional weight loss or gain (more than 6 kg in the last 6 months) and admitted to ICU.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-mbw',
use_cuda=False,
)
example = 'Tijdens opname >10 kg afgevallen.'  # "Lost >10 kg during admission."
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.95
```
The raw outputs look like this:
```
[[1.95429301]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.81 | 0.60
mean squared error | 0.83 | 0.56
root mean squared error | 0.91 | 0.75
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| {"language": "nl", "license": "mit", "pipeline_tag": "text-classification", "inference": false} | CLTL/icf-levels-mbw | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:mit",
"autotrain_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us
| Regression Model for Weight Maintenance Functioning Levels (ICF b530)
=====================================================================
Description
-----------
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing weight maintenance functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about weight maintenance functions in clinical text in Dutch, use the icf-domains classification model.
Functioning levels
------------------
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
Intended uses and limitations
-----------------------------
* The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
* The model was fine-tuned with the Simple Transformers library. This library is based on Transformers but the model cannot be used directly with Transformers 'pipeline' and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
How to use
----------
To generate predictions with the model, use the Simple Transformers library:
The prediction on the example is:
The raw outputs look like this:
Training data
-------------
* The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
* The annotation guidelines used for the project can be found here.
Training procedure
------------------
The default training parameters of Simple Transformers were used, including:
* Optimizer: AdamW
* Learning rate: 4e-5
* Num train epochs: 1
* Train batch size: 8
Evaluation results
------------------
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
mean absolute error: Sentence-level 0.81, Note-level 0.60
mean squared error: Sentence-level 0.83, Note-level 0.56
root mean squared error: Sentence-level 0.91, Note-level 0.75
Authors and references
----------------------
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| [
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n",
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
29,
12,
6
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n### Authors\n\n\nJenia Kim, Piek Vossen### References\n\n\nTBD"
] |
text-classification | transformers |
# Regression Model for Emotional Functioning Levels (ICF b152)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing emotional functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about emotional functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with emotional functioning: emotions are appropriate, well regulated, etc.
3 | Slight problem with emotional functioning: irritable, gloomy, etc.
2 | Moderate problem with emotional functioning: negative emotions, such as fear, anger, sadness, etc.
1 | Severe problem with emotional functioning: intense negative emotions, such as fear, anger, sadness, etc.
0 | Flat affect, apathy, unstable, inappropriate emotions.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-stm',
use_cuda=False,
)
example = 'Naarmate het somatische beeld een herstellende trend laat zien, valt op dat patient zich depressief en suicidaal uit.'  # "As the somatic picture shows a recovering trend, it is notable that the patient expresses depressive and suicidal feelings."
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.60
```
The raw outputs look like this:
```
[[1.60418844]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.76 | 0.68
mean squared error | 1.03 | 0.87
root mean squared error | 1.01 | 0.93
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| {"language": "nl", "license": "mit", "pipeline_tag": "text-classification", "inference": false} | CLTL/icf-levels-stm | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:mit",
"autotrain_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us
| Regression Model for Emotional Functioning Levels (ICF b152)
============================================================
Description
-----------
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing emotional functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about emotional functions in clinical text in Dutch, use the icf-domains classification model.
Functioning levels
------------------
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
Intended uses and limitations
-----------------------------
* The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
* The model was fine-tuned with the Simple Transformers library. This library is based on Transformers but the model cannot be used directly with Transformers 'pipeline' and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
How to use
----------
To generate predictions with the model, use the Simple Transformers library:
The prediction on the example is:
The raw outputs look like this:
Training data
-------------
* The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
* The annotation guidelines used for the project can be found here.
Training procedure
------------------
The default training parameters of Simple Transformers were used, including:
* Optimizer: AdamW
* Learning rate: 4e-5
* Num train epochs: 1
* Train batch size: 8
Evaluation results
------------------
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
mean absolute error: Sentence-level 0.76, Note-level 0.68
mean squared error: Sentence-level 1.03, Note-level 0.87
root mean squared error: Sentence-level 1.01, Note-level 0.93
Authors and references
----------------------
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| [
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n",
"### Authors\n\n\nJenia Kim, Piek Vossen",
"### References\n\n\nTBD"
] | [
29,
12,
6
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #nl #license-mit #autotrain_compatible #region-us \n### Authors\n\n\nJenia Kim, Piek Vossen### References\n\n\nTBD"
] |
text-classification | transformers | emilyalsentzer/Bio_ClinicalBERT with additional training through the finetuning pipeline described in "Extracting Seizure Frequency From Epilepsy Clinic Notes: A Machine Reading Approach To Natural Language Processing."
Citation: Kevin Xie, Ryan S Gallagher, Erin C Conrad, Chadric O Garrick, Steven N Baldassano, John M Bernabei, Peter D Galer, Nina J Ghosn, Adam S Greenblatt, Tara Jennings, Alana Kornspun, Catherine V Kulick-Soper, Jal M Panchal, Akash R Pattnaik, Brittany H Scheid, Danmeng Wei, Micah Weitzman, Ramya Muthukrishnan, Joongwon Kim, Brian Litt, Colin A Ellis, Dan Roth, Extracting seizure frequency from epilepsy clinic notes: a machine reading approach to natural language processing, Journal of the American Medical Informatics Association, 2022;, ocac018, https://doi.org/10.1093/jamia/ocac018
Bio_ClinicalBERT_for_seizureFreedom_classification classifies patients as having seizures or being seizure free using the HPI and/or Interval History paragraphs from a medical note. | {} | CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #has_space #region-us
| emilyalsentzer/Bio_ClinicalBERT with additional training through the finetuning pipeline described in "Extracting Seizure Frequency From Epilepsy Clinic Notes: A Machine Reading Approach To Natural Language Processing."
Citation: Kevin Xie, Ryan S Gallagher, Erin C Conrad, Chadric O Garrick, Steven N Baldassano, John M Bernabei, Peter D Galer, Nina J Ghosn, Adam S Greenblatt, Tara Jennings, Alana Kornspun, Catherine V Kulick-Soper, Jal M Panchal, Akash R Pattnaik, Brittany H Scheid, Danmeng Wei, Micah Weitzman, Ramya Muthukrishnan, Joongwon Kim, Brian Litt, Colin A Ellis, Dan Roth, Extracting seizure frequency from epilepsy clinic notes: a machine reading approach to natural language processing, Journal of the American Medical Informatics Association, 2022;, ocac018, URL
Bio_ClinicalBERT_for_seizureFreedom_classification classifies patients as having seizures or being seizure free using the HPI and/or Interval History paragraphs from a medical note. | [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] | [
32
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
question-answering | transformers | RoBERTa-base with additional training through the finetuning pipeline described in "Extracting Seizure Frequency From Epilepsy Clinic Notes: A Machine Reading Approach To Natural Language Processing."
Citation: Kevin Xie, Ryan S Gallagher, Erin C Conrad, Chadric O Garrick, Steven N Baldassano, John M Bernabei, Peter D Galer, Nina J Ghosn, Adam S Greenblatt, Tara Jennings, Alana Kornspun, Catherine V Kulick-Soper, Jal M Panchal, Akash R Pattnaik, Brittany H Scheid, Danmeng Wei, Micah Weitzman, Ramya Muthukrishnan, Joongwon Kim, Brian Litt, Colin A Ellis, Dan Roth, Extracting seizure frequency from epilepsy clinic notes: a machine reading approach to natural language processing, Journal of the American Medical Informatics Association, 2022;, ocac018, https://doi.org/10.1093/jamia/ocac018
RoBERTa_for_seizureFrequency_QA performs extractive question answering to identify a patient's seizure freedom and/or date of last seizure using the HPI and/or Interval History paragraphs from a medical note. | {} | CNT-UPenn/RoBERTa_for_seizureFrequency_QA | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #question-answering #endpoints_compatible #region-us
| RoBERTa-base with additional training through the finetuning pipeline described in "Extracting Seizure Frequency From Epilepsy Clinic Notes: A Machine Reading Approach To Natural Language Processing."
Citation: Kevin Xie, Ryan S Gallagher, Erin C Conrad, Chadric O Garrick, Steven N Baldassano, John M Bernabei, Peter D Galer, Nina J Ghosn, Adam S Greenblatt, Tara Jennings, Alana Kornspun, Catherine V Kulick-Soper, Jal M Panchal, Akash R Pattnaik, Brittany H Scheid, Danmeng Wei, Micah Weitzman, Ramya Muthukrishnan, Joongwon Kim, Brian Litt, Colin A Ellis, Dan Roth, Extracting seizure frequency from epilepsy clinic notes: a machine reading approach to natural language processing, Journal of the American Medical Informatics Association, 2022;, ocac018, URL
RoBERTa_for_seizureFrequency_QA performs extractive question answering to identify a patient's seizure freedom and/or date of last seizure using the HPI and/or Interval History paragraphs from a medical note. | [] | [
"TAGS\n#transformers #pytorch #roberta #question-answering #endpoints_compatible #region-us \n"
] | [
23
] | [
"TAGS\n#transformers #pytorch #roberta #question-answering #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | # XLM-Align
**Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment** (ACL-2021, [paper](https://arxiv.org/pdf/2106.06381.pdf), [github](https://github.com/CZWin32768/XLM-Align))
XLM-Align is a pretrained cross-lingual language model that supports 94 languages. See details in our [paper](https://arxiv.org/pdf/2106.06381.pdf).
## Example
```
from transformers import AutoModel
model = AutoModel.from_pretrained("CZWin32768/xlm-align")
```
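A slightly fuller usage sketch (assuming the standard `AutoTokenizer`/`AutoModel` interface of the Transformers library; this is illustrative and not part of the official release):
```
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CZWin32768/xlm-align")
model = AutoModel.from_pretrained("CZWin32768/xlm-align")

# Encode a sentence and take the final-layer hidden states as contextual representations
inputs = tokenizer("XLM-Align supports 94 languages.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```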
## Evaluation Results
XTREME cross-lingual understanding tasks:
| Model | POS | NER | XQuAD | MLQA | TyDiQA | XNLI | PAWS-X | Avg |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| XLM-R_base | 75.6 | 61.8 | 71.9 / 56.4 | 65.1 / 47.2 | 55.4 / 38.3 | 75.0 | 84.9 | 66.4 |
| XLM-Align | **76.0** | **63.7** | **74.7 / 59.0** | **68.1 / 49.8** | **62.1 / 44.8** | **76.2** | **86.8** | **68.9** |
## MD5
```
b9d214025837250ede2f69c9385f812c config.json
6005db708eb4bab5b85fa3976b9db85b pytorch_model.bin
bf25eb5120ad92ef5c7d8596b5dc4046 sentencepiece.bpe.model
eedbd60a7268b9fc45981b849664f747 tokenizer.json
```
## About
Contact: chizewen\@outlook.com
BibTeX:
```
@article{xlmalign,
title={Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment},
author={Zewen Chi and Li Dong and Bo Zheng and Shaohan Huang and Xian-Ling Mao and Heyan Huang and Furu Wei},
journal={arXiv preprint arXiv:2106.06381},
year={2021}
}
``` | {} | CZWin32768/xlm-align | null | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2106.06381",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.06381"
] | [] | TAGS
#transformers #pytorch #xlm-roberta #fill-mask #arxiv-2106.06381 #autotrain_compatible #endpoints_compatible #region-us
| XLM-Align
=========
Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment (ACL-2021, paper, github)
XLM-Align is a pretrained cross-lingual language model that supports 94 languages. See details in our paper.
Example
-------
Evaluation Results
------------------
XTREME cross-lingual understanding tasks:
MD5
---
About
-----
Contact: chizewen@URL
BibTeX:
| [] | [
"TAGS\n#transformers #pytorch #xlm-roberta #fill-mask #arxiv-2106.06381 #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
42
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #fill-mask #arxiv-2106.06381 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
summarization | transformers |
# Paper Title Generator
Generates titles for computer science papers given an abstract.
The model is a BERT2BERT Encoder-Decoder using the official `bert-base-uncased` checkpoint as initialization for the encoder and decoder.
It was fine-tuned on 318,500 computer science papers posted on arXiv.org between 2007 and 2022 and achieved a 26.3% Rouge2 F1-Score on held-out validation data.
**Live Demo:** [https://paper-titles.ey.r.appspot.com/](https://paper-titles.ey.r.appspot.com/) | {"language": ["en"], "license": "apache-2.0", "tags": ["summarization"], "datasets": ["arxiv_dataset"], "metrics": ["rouge"], "widget": [{"text": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}]} | Callidior/bert2bert-base-arxiv-titlegen | null | [
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"en",
"dataset:arxiv_dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #encoder-decoder #text2text-generation #summarization #en #dataset-arxiv_dataset #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Paper Title Generator
Generates titles for computer science papers given an abstract.
The model is a BERT2BERT Encoder-Decoder using the official 'bert-base-uncased' checkpoint as initialization for the encoder and decoder.
It was fine-tuned on 318,500 computer science papers posted on URL between 2007 and 2022 and achieved a 26.3% Rouge2 F1-Score on held-out validation data.
Live Demo: URL.r.URL | [
"# Paper Title Generator\n\nGenerates titles for computer science papers given an abstract.\n\nThe model is a BERT2BERT Encoder-Decoder using the official 'bert-base-uncased' checkpoint as initialization for the encoder and decoder.\nIt was fine-tuned on 318,500 computer science papers posted on URL between 2007 and 2022 and achieved a 26.3% Rouge2 F1-Score on held-out validation data.\n\nLive Demo: URL.r.URL"
] | [
"TAGS\n#transformers #pytorch #safetensors #encoder-decoder #text2text-generation #summarization #en #dataset-arxiv_dataset #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Paper Title Generator\n\nGenerates titles for computer science papers given an abstract.\n\nThe model is a BERT2BERT Encoder-Decoder using the official 'bert-base-uncased' checkpoint as initialization for the encoder and decoder.\nIt was fine-tuned on 318,500 computer science papers posted on URL between 2007 and 2022 and achieved a 26.3% Rouge2 F1-Score on held-out validation data.\n\nLive Demo: URL.r.URL"
] | [
67,
101
] | [
"TAGS\n#transformers #pytorch #safetensors #encoder-decoder #text2text-generation #summarization #en #dataset-arxiv_dataset #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# Paper Title Generator\n\nGenerates titles for computer science papers given an abstract.\n\nThe model is a BERT2BERT Encoder-Decoder using the official 'bert-base-uncased' checkpoint as initialization for the encoder and decoder.\nIt was fine-tuned on 318,500 computer science papers posted on URL between 2007 and 2022 and achieved a 26.3% Rouge2 F1-Score on held-out validation data.\n\nLive Demo: URL.r.URL"
] |
text-generation | transformers | A PyTorch GPT-2 model trained on hansard from 2019-01-01 to 2020-06-01
For more information see: https://github.com/CallumRai/Hansard/ | {} | CallumRai/HansardGPT2 | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| A PyTorch GPT-2 model trained on hansard from 2019-01-01 to 2020-06-01
For more information see: URL | [] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
38
] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
summarization | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0393
- Rouge1: 17.2936
- Rouge2: 8.0678
- Rougel: 16.8129
- Rougelsum: 16.9991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 6.6665 | 1.0 | 1209 | 3.2917 | 13.912 | 5.595 | 13.2984 | 13.4171 |
| 3.8961 | 2.0 | 2418 | 3.1711 | 16.2845 | 8.6033 | 15.5509 | 15.7383 |
| 3.5801 | 3.0 | 3627 | 3.0917 | 17.316 | 8.122 | 16.697 | 16.773 |
| 3.4258 | 4.0 | 4836 | 3.0583 | 16.1347 | 7.7829 | 15.6475 | 15.7804 |
| 3.3154 | 5.0 | 6045 | 3.0573 | 17.5918 | 8.7349 | 17.0537 | 17.2216 |
| 3.2438 | 6.0 | 7254 | 3.0479 | 17.2294 | 8.0383 | 16.8141 | 16.9858 |
| 3.2024 | 7.0 | 8463 | 3.0377 | 17.2918 | 8.139 | 16.8178 | 16.9671 |
| 3.1745 | 8.0 | 9672 | 3.0393 | 17.2936 | 8.0678 | 16.8129 | 16.9991 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-small-finetuned-amazon-en-es", "results": []}]} | CalvinHuang/mt5-small-finetuned-amazon-en-es | null | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| mt5-small-finetuned-amazon-en-es
================================
This model is a fine-tuned version of google/mt5-small on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 3.0393
* Rouge1: 17.2936
* Rouge2: 8.0678
* Rougel: 16.8129
* Rougelsum: 16.9991
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] | [
58,
103,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
text-generation | transformers |
# MaamiBot | {"tags": ["conversational"]} | Camzure/MaamiBot-test | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# MaamiBot | [
"# MaamiBot"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# MaamiBot"
] | [
39,
4
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# MaamiBot"
] |
text-generation | transformers |
# Jesse (Breaking Bad) DialoGPT Model | {"tags": ["conversational"]} | Canadiancaleb/DialoGPT-small-jesse | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Jesse (Breaking Bad) DialoGPT Model | [
"# Jesse (Breaking Bad) DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Jesse (Breaking Bad) DialoGPT Model"
] | [
39,
10
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Jesse (Breaking Bad) DialoGPT Model"
] |
text-generation | transformers |
# Walter (Breaking Bad) DialoGPT Model | {"tags": ["conversational"]} | Canadiancaleb/DialoGPT-small-walter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Walter (Breaking Bad) DialoGPT Model | [
"# Walter (Breaking Bad) DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Walter (Breaking Bad) DialoGPT Model"
] | [
39,
10
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Walter (Breaking Bad) DialoGPT Model"
] |
text-classification | transformers | # capreolus/bert-base-msmarco
## Model description
BERT-Base model (`google/bert_uncased_L-12_H-768_A-12`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a `ForSequenceClassification` model; see the [Capreolus BERT-MaxP implementation](https://github.com/capreolus-ir/capreolus/blob/master/capreolus/reranker/TFBERTMaxP.py) for a usage example.
This corresponds to the BERT-Base model used to initialize BERT-MaxP and PARADE variants in [PARADE: Passage Representation Aggregation for Document Reranking](https://arxiv.org/abs/2008.09093) by Li et al. It was converted from the released [TFv1 checkpoint](https://zenodo.org/record/3974431/files/vanilla_bert_base_on_MSMARCO.tar.gz). Please cite the PARADE paper if you use these weights.
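Outside of Capreolus, a minimal query-passage scoring sketch might look as follows. It assumes the repository ships the standard `bert-base-uncased` tokenizer files and that index 1 of the two-class head corresponds to "relevant"; both are assumptions made for illustration, and the Capreolus BERT-MaxP implementation linked above remains the reference usage.
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Capreolus/bert-base-msmarco")
model = AutoModelForSequenceClassification.from_pretrained("Capreolus/bert-base-msmarco")

query = "what is the capital of france"
passage = "Paris is the capital and most populous city of France."

# Score the (query, passage) pair; index 1 is assumed to be the "relevant" class
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1)[0, 1].item())
```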
| {} | Capreolus/bert-base-msmarco | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"arxiv:2008.09093",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2008.09093"
] | [] | TAGS
#transformers #pytorch #tf #jax #bert #text-classification #arxiv-2008.09093 #autotrain_compatible #endpoints_compatible #region-us
| # capreolus/bert-base-msmarco
## Model description
BERT-Base model ('google/bert_uncased_L-12_H-768_A-12') fine-tuned on the MS MARCO passage classification task. It is intended to be used as a 'ForSequenceClassification' model; see the Capreolus BERT-MaxP implementation for a usage example.
This corresponds to the BERT-Base model used to initialize BERT-MaxP and PARADE variants in PARADE: Passage Representation Aggregation for Document Reranking by Li et al. It was converted from the released TFv1 checkpoint. Please cite the PARADE paper if you use these weights.
| [
"# capreolus/bert-base-msmarco",
"## Model description\nBERT-Base model ('google/bert_uncased_L-12_H-768_A-12') fine-tuned on the MS MARCO passage classification task. It is intended to be used as a 'ForSequenceClassification' model; see the Capreolus BERT-MaxP implementation for a usage example.\n\nThis corresponds to the BERT-Base model used to initialize BERT-MaxP and PARADE variants in PARADE: Passage Representation Aggregation for Document Reranking by Li et al. It was converted from the released TFv1 checkpoint. Please cite the PARADE paper if you use these weights."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #arxiv-2008.09093 #autotrain_compatible #endpoints_compatible #region-us \n",
"# capreolus/bert-base-msmarco",
"## Model description\nBERT-Base model ('google/bert_uncased_L-12_H-768_A-12') fine-tuned on the MS MARCO passage classification task. It is intended to be used as a 'ForSequenceClassification' model; see the Capreolus BERT-MaxP implementation for a usage example.\n\nThis corresponds to the BERT-Base model used to initialize BERT-MaxP and PARADE variants in PARADE: Passage Representation Aggregation for Document Reranking by Li et al. It was converted from the released TFv1 checkpoint. Please cite the PARADE paper if you use these weights."
] | [
44,
13,
134
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #arxiv-2008.09093 #autotrain_compatible #endpoints_compatible #region-us \n# capreolus/bert-base-msmarco## Model description\nBERT-Base model ('google/bert_uncased_L-12_H-768_A-12') fine-tuned on the MS MARCO passage classification task. It is intended to be used as a 'ForSequenceClassification' model; see the Capreolus BERT-MaxP implementation for a usage example.\n\nThis corresponds to the BERT-Base model used to initialize BERT-MaxP and PARADE variants in PARADE: Passage Representation Aggregation for Document Reranking by Li et al. It was converted from the released TFv1 checkpoint. Please cite the PARADE paper if you use these weights."
] |
text-classification | transformers | # capreolus/electra-base-msmarco
## Model description
ELECTRA-Base model (`google/electra-base-discriminator`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a `ForSequenceClassification` model, but requires some modification since it contains a BERT classification head rather than the standard ELECTRA classification head. See the [TFElectraRelevanceHead](https://github.com/capreolus-ir/capreolus/blob/master/capreolus/reranker/TFBERTMaxP.py) in the Capreolus BERT-MaxP implementation for a usage example.
This corresponds to the ELECTRA-Base model used to initialize PARADE (ELECTRA) in [PARADE: Passage Representation Aggregation for Document Reranking](https://arxiv.org/abs/2008.09093) by Li et al. It was converted from the released [TFv1 checkpoint](https://zenodo.org/record/3974431/files/vanilla_electra_base_on_MSMARCO.tar.gz). Please cite the PARADE paper if you use these weights.
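Because the checkpoint carries a BERT-style relevance head rather than the stock ELECTRA classification head, loading it directly as `ElectraForSequenceClassification` will not line up with the saved head weights. A hedged sketch that pulls only the encoder is shown below; the relevance head itself would need to be re-implemented as in the linked TFElectraRelevanceHead.
```python
# Hedged sketch: load the tokenizer and the ELECTRA encoder weights only.
# The BERT-style relevance head stored in the checkpoint is not attached here;
# re-implement it following Capreolus's TFElectraRelevanceHead before scoring.
from transformers import AutoTokenizer, ElectraModel

tokenizer = AutoTokenizer.from_pretrained("Capreolus/electra-base-msmarco")
encoder = ElectraModel.from_pretrained("Capreolus/electra-base-msmarco")  # unused head weights trigger a warning

inputs = tokenizer("example query", "example passage text", return_tensors="pt")
cls_hidden = encoder(**inputs).last_hidden_state[:, 0]  # [CLS] vector that the relevance head consumes
print(cls_hidden.shape)
```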
| {} | Capreolus/electra-base-msmarco | null | [
"transformers",
"pytorch",
"tf",
"electra",
"text-classification",
"arxiv:2008.09093",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2008.09093"
] | [] | TAGS
#transformers #pytorch #tf #electra #text-classification #arxiv-2008.09093 #autotrain_compatible #endpoints_compatible #region-us
| # capreolus/electra-base-msmarco
## Model description
ELECTRA-Base model ('google/electra-base-discriminator') fine-tuned on the MS MARCO passage classification task. It is intended to be used as a 'ForSequenceClassification' model, but requires some modification since it contains a BERT classification head rather than the standard ELECTRA classification head. See the TFElectraRelevanceHead in the Capreolus BERT-MaxP implementation for a usage example.
This corresponds to the ELECTRA-Base model used to initialize PARADE (ELECTRA) in PARADE: Passage Representation Aggregation for Document Reranking by Li et al. It was converted from the released TFv1 checkpoint. Please cite the PARADE paper if you use these weights.
| [
"# capreolus/electra-base-msmarco",
"## Model description\nELECTRA-Base model ('google/electra-base-discriminator') fine-tuned on the MS MARCO passage classification task. It is intended to be used as a 'ForSequenceClassification' model, but requires some modification since it contains a BERT classification head rather than the standard ELECTRA classification head. See the TFElectraRelevanceHead in the Capreolus BERT-MaxP implementation for a usage example.\n\nThis corresponds to the ELECTRA-Base model used to initialize PARADE (ELECTRA) in PARADE: Passage Representation Aggregation for Document Reranking by Li et al. It was converted from the released TFv1 checkpoint. Please cite the PARADE paper if you use these weights."
] | [
"TAGS\n#transformers #pytorch #tf #electra #text-classification #arxiv-2008.09093 #autotrain_compatible #endpoints_compatible #region-us \n",
"# capreolus/electra-base-msmarco",
"## Model description\nELECTRA-Base model ('google/electra-base-discriminator') fine-tuned on the MS MARCO passage classification task. It is intended to be used as a 'ForSequenceClassification' model, but requires some modification since it contains a BERT classification head rather than the standard ELECTRA classification head. See the TFElectraRelevanceHead in the Capreolus BERT-MaxP implementation for a usage example.\n\nThis corresponds to the ELECTRA-Base model used to initialize PARADE (ELECTRA) in PARADE: Passage Representation Aggregation for Document Reranking by Li et al. It was converted from the released TFv1 checkpoint. Please cite the PARADE paper if you use these weights."
] | [
43,
14,
155
] | [
"TAGS\n#transformers #pytorch #tf #electra #text-classification #arxiv-2008.09093 #autotrain_compatible #endpoints_compatible #region-us \n# capreolus/electra-base-msmarco## Model description\nELECTRA-Base model ('google/electra-base-discriminator') fine-tuned on the MS MARCO passage classification task. It is intended to be used as a 'ForSequenceClassification' model, but requires some modification since it contains a BERT classification head rather than the standard ELECTRA classification head. See the TFElectraRelevanceHead in the Capreolus BERT-MaxP implementation for a usage example.\n\nThis corresponds to the ELECTRA-Base model used to initialize PARADE (ELECTRA) in PARADE: Passage Representation Aggregation for Document Reranking by Li et al. It was converted from the released TFv1 checkpoint. Please cite the PARADE paper if you use these weights."
] |
text-classification | transformers | # Master Thesis
## Predictive Value of Sentiment Analysis from Headlines for Crude Oil Prices
### Understanding and Exploiting Deep Learning-based Sentiment Analysis from News Headlines for Predicting Price Movements of WTI Crude Oil
The focus of this thesis deals with the task of research and development of state-of-the-art sentiment analysis methods, which can potentially provide helpful quantification of news that can be used to assess the future price movements of crude oil.
CrudeBERT is a pre-trained NLP model to analyze sentiment of news headlines relevant to crude oil.
It was developed by fine-tuning [FinBERT: Financial Sentiment Analysis with Pre-trained Language Models](https://arxiv.org/pdf/1908.10063.pdf).
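As a hedged usage sketch (assuming the hosted checkpoint exposes a standard sequence-classification head; the label names it returns are not documented here):
```python
# Hedged sketch: the returned label names depend on the checkpoint's config
# and are not documented in this card; the headline is illustrative.
from transformers import pipeline

crude_sentiment = pipeline("text-classification", model="Captain-1337/CrudeBERT")
print(crude_sentiment("Oil prices rally as OPEC announces deeper supply cuts"))
```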
![CrudeBERT comparison_white_2](https://user-images.githubusercontent.com/42164041/135273552-4a9c4457-70e4-48d0-ac97-169daefab79e.png)
Performing sentiment analysis on the news regarding a specific asset requires domain adaptation.
Domain adaptation requires training data made up of examples with text and its associated polarity of sentiment.
The experiments show that pre-trained deep learning-based sentiment analysis can be further fine-tuned, and the conclusions of these experiments are as follows:
* Deep learning-based sentiment analysis models from the general financial world, such as FinBERT, are of little or no significance concerning the price development of crude oil. The reason behind this is a lack of domain adaptation of the sentiment. Moreover, the polarity of sentiment cannot be generalized and is highly dependent on the properties of its target.
* The properties of crude oil prices are, according to the literature, determined by changes in supply and demand.
News can convey information about these direction changes, can broadly be identified through query searches, and can serve as a foundation for creating a training dataset to perform domain adaptation. For this purpose, news headlines tend to be rich enough in content to provide insights into supply and demand changes,
even when the number of headlines is significantly reduced to the more reputable sources.
* Domain adaptation can be achieved to some extent by analyzing the properties of the target through a literature review and creating a corresponding training dataset to fine-tune the model. For example, considering supply and demand changes regarding crude oil seems to be a suitable component for domain adaptation.
In order to advance sentiment analysis applications in the domain of crude oil, this paper presents CrudeBERT.
In general, sentiment analysis of headlines from crude oil through CrudeBERT could be a viable source of insight for the price behaviour of WTI crude oil.
However, further research is required to see whether CrudeBERT can be beneficial for predicting oil prices.
To this end, the code and the thesis are made publicly available on [GitHub](https://github.com/Captain-1337/Master-Thesis). | {} | Captain-1337/CrudeBERT | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:1908.10063",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1908.10063"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-1908.10063 #autotrain_compatible #endpoints_compatible #region-us
| # Master Thesis
## Predictive Value of Sentiment Analysis from Headlines for Crude Oil Prices
### Understanding and Exploiting Deep Learning-based Sentiment Analysis from News Headlines for Predicting Price Movements of WTI Crude Oil
The focus of this thesis deals with the task of research and development of state-of-the-art sentiment analysis methods, which can potentially provide helpful quantification of news that can be used to assess the future price movements of crude oil.
CrudeBERT is a pre-trained NLP model to analyze sentiment of news headlines relevant to crude oil.
It was developed by fine tuning FinBERT: Financial Sentiment Analysis with Pre-trained Language Models.
!CrudeBERT comparison_white_2
Performing sentiment analysis on the news regarding a specific asset requires domain adaptation.
Domain adaptation requires training data made up of examples with text and its associated polarity of sentiment.
The experiments show that pre-trained deep learning-based sentiment analysis can be further fine-tuned, and the conclusions of these experiments are as follows:
* Deep learning-based sentiment analysis models from the general financial world such as FinBERT are of little or hardly any significance concerning the price development of crude oil. The reason behind this is a lack of domain adaptation of the sentiment. Moreover, the polarity of sentiment cannot be generalized and is highly dependent on the properties of its target.
* The properties of crude oil prices are, according to the literature, determined by changes in supply and demand.
News can convey information about these direction changes and can broadly be identified through query searches and serve as a foundation for creating a training dataset to perform domain adaptation. For this purpose, news headlines tend to be rich enough in content to provide insights into supply and demand changes.
Even when significantly reducing the number of headlines to more reputable sources.
* Domain adaptation can be achieved to some extend by analyzing the properties of the target through literature review and creating a corresponding training dataset to fine-tune the model. For example, considering supply and demand changes regarding crude oil seems to be a suitable component for a domain adaptation.
In order to advance sentiment analysis applications in the domain of crude oil, this paper presents CrudeBERT.
In general, sentiment analysis of headlines from crude oil through CrudeBERT could be a viable source of insight for the price behaviour of WTI crude oil.
However, further research is required to see if CrudeBERT can serve as beneficial for predicting oil prices.
For this matter, the codes and the thesis is made publicly available on [GitHub] (URL | [
"# Master Thesis",
"## Predictive Value of Sentiment Analysis from Headlines for Crude Oil Prices",
"### Understanding and Exploiting Deep Learning-based Sentiment Analysis from News Headlines for Predicting Price Movements of WTI Crude Oil\n\nThe focus of this thesis deals with the task of research and development of state-of-the-art sentiment analysis methods, which can potentially provide helpful quantification of news that can be used to assess the future price movements of crude oil. \n\nCrudeBERT is a pre-trained NLP model to analyze sentiment of news headlines relevant to crude oil. \nIt was developed by fine tuning FinBERT: Financial Sentiment Analysis with Pre-trained Language Models.\n\n!CrudeBERT comparison_white_2\n\nPerforming sentiment analysis on the news regarding a specific asset requires domain adaptation. \nDomain adaptation requires training data made up of examples with text and its associated polarity of sentiment. \nThe experiments show that pre-trained deep learning-based sentiment analysis can be further fine-tuned, and the conclusions of these experiments are as follows: \n\n* Deep learning-based sentiment analysis models from the general financial world such as FinBERT are of little or hardly any significance concerning the price development of crude oil. The reason behind this is a lack of domain adaptation of the sentiment. Moreover, the polarity of sentiment cannot be generalized and is highly dependent on the properties of its target. \n\n* The properties of crude oil prices are, according to the literature, determined by changes in supply and demand. \nNews can convey information about these direction changes and can broadly be identified through query searches and serve as a foundation for creating a training dataset to perform domain adaptation. For this purpose, news headlines tend to be rich enough in content to provide insights into supply and demand changes. \nEven when significantly reducing the number of headlines to more reputable sources. \n\n* Domain adaptation can be achieved to some extend by analyzing the properties of the target through literature review and creating a corresponding training dataset to fine-tune the model. For example, considering supply and demand changes regarding crude oil seems to be a suitable component for a domain adaptation. \n\nIn order to advance sentiment analysis applications in the domain of crude oil, this paper presents CrudeBERT. \nIn general, sentiment analysis of headlines from crude oil through CrudeBERT could be a viable source of insight for the price behaviour of WTI crude oil. \nHowever, further research is required to see if CrudeBERT can serve as beneficial for predicting oil prices. \nFor this matter, the codes and the thesis is made publicly available on [GitHub] (URL"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-1908.10063 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Master Thesis",
"## Predictive Value of Sentiment Analysis from Headlines for Crude Oil Prices",
"### Understanding and Exploiting Deep Learning-based Sentiment Analysis from News Headlines for Predicting Price Movements of WTI Crude Oil\n\nThe focus of this thesis deals with the task of research and development of state-of-the-art sentiment analysis methods, which can potentially provide helpful quantification of news that can be used to assess the future price movements of crude oil. \n\nCrudeBERT is a pre-trained NLP model to analyze sentiment of news headlines relevant to crude oil. \nIt was developed by fine tuning FinBERT: Financial Sentiment Analysis with Pre-trained Language Models.\n\n!CrudeBERT comparison_white_2\n\nPerforming sentiment analysis on the news regarding a specific asset requires domain adaptation. \nDomain adaptation requires training data made up of examples with text and its associated polarity of sentiment. \nThe experiments show that pre-trained deep learning-based sentiment analysis can be further fine-tuned, and the conclusions of these experiments are as follows: \n\n* Deep learning-based sentiment analysis models from the general financial world such as FinBERT are of little or hardly any significance concerning the price development of crude oil. The reason behind this is a lack of domain adaptation of the sentiment. Moreover, the polarity of sentiment cannot be generalized and is highly dependent on the properties of its target. \n\n* The properties of crude oil prices are, according to the literature, determined by changes in supply and demand. \nNews can convey information about these direction changes and can broadly be identified through query searches and serve as a foundation for creating a training dataset to perform domain adaptation. For this purpose, news headlines tend to be rich enough in content to provide insights into supply and demand changes. \nEven when significantly reducing the number of headlines to more reputable sources. \n\n* Domain adaptation can be achieved to some extend by analyzing the properties of the target through literature review and creating a corresponding training dataset to fine-tune the model. For example, considering supply and demand changes regarding crude oil seems to be a suitable component for a domain adaptation. \n\nIn order to advance sentiment analysis applications in the domain of crude oil, this paper presents CrudeBERT. \nIn general, sentiment analysis of headlines from crude oil through CrudeBERT could be a viable source of insight for the price behaviour of WTI crude oil. \nHowever, further research is required to see if CrudeBERT can serve as beneficial for predicting oil prices. \nFor this matter, the codes and the thesis is made publicly available on [GitHub] (URL"
] | [
38,
3,
14,
489
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-1908.10063 #autotrain_compatible #endpoints_compatible #region-us \n# Master Thesis## Predictive Value of Sentiment Analysis from Headlines for Crude Oil Prices### Understanding and Exploiting Deep Learning-based Sentiment Analysis from News Headlines for Predicting Price Movements of WTI Crude Oil\n\nThe focus of this thesis deals with the task of research and development of state-of-the-art sentiment analysis methods, which can potentially provide helpful quantification of news that can be used to assess the future price movements of crude oil. \n\nCrudeBERT is a pre-trained NLP model to analyze sentiment of news headlines relevant to crude oil. \nIt was developed by fine tuning FinBERT: Financial Sentiment Analysis with Pre-trained Language Models.\n\n!CrudeBERT comparison_white_2\n\nPerforming sentiment analysis on the news regarding a specific asset requires domain adaptation. \nDomain adaptation requires training data made up of examples with text and its associated polarity of sentiment. \nThe experiments show that pre-trained deep learning-based sentiment analysis can be further fine-tuned, and the conclusions of these experiments are as follows: \n\n* Deep learning-based sentiment analysis models from the general financial world such as FinBERT are of little or hardly any significance concerning the price development of crude oil. The reason behind this is a lack of domain adaptation of the sentiment. Moreover, the polarity of sentiment cannot be generalized and is highly dependent on the properties of its target. \n\n* The properties of crude oil prices are, according to the literature, determined by changes in supply and demand. \nNews can convey information about these direction changes and can broadly be identified through query searches and serve as a foundation for creating a training dataset to perform domain adaptation. For this purpose, news headlines tend to be rich enough in content to provide insights into supply and demand changes. \nEven when significantly reducing the number of headlines to more reputable sources. \n\n* Domain adaptation can be achieved to some extend by analyzing the properties of the target through literature review and creating a corresponding training dataset to fine-tune the model. For example, considering supply and demand changes regarding crude oil seems to be a suitable component for a domain adaptation. \n\nIn order to advance sentiment analysis applications in the domain of crude oil, this paper presents CrudeBERT. \nIn general, sentiment analysis of headlines from crude oil through CrudeBERT could be a viable source of insight for the price behaviour of WTI crude oil. \nHowever, further research is required to see if CrudeBERT can serve as beneficial for predicting oil prices. \nFor this matter, the codes and the thesis is made publicly available on [GitHub] (URL"
] |
text2text-generation | transformers | **mt5-spanish-memmories-analysis**
**// ES**
Este es un trabajo en proceso.
Este modelo aún es solo un punto de control inicial que mejoraré en los próximos meses.
El objetivo es proporcionar un modelo capaz de, utilizando una combinación de tareas del modelo mT5, comprender los recuerdos y proporcionar una interacción útil para las personas con alzeimer o personas como mi propio abuelo que escribió sus recuerdos, pero ahora es solo un libro en la estantería. por lo que este modelo puede hacer que esos recuerdos parezcan "vivos".
Pronto (si aún no está cargado) cargaré un cuaderno de **Google Colaboratory con una aplicación visual** que al usar este modelo proporcionará toda la interacción necesaria y deseada con una interfaz fácil de usar.
**LINK APLICACIÓN (sobre él se actualizará la versión):** https://drive.google.com/drive/folders/1ewGcxxCYHHwhHhWtGlLiryZfV8wEAaBa?usp=sharing
-> Debe descargarse la carpeta "memorium" del enlace y subirse a Google Drive sin incluir en ninguna otra carpeta (directamente en "Mi unidad").
-> A continuación se podrá abrir la app, encontrada dentro de dicha carpeta "memorium" con nombre "APP-Memorium" (el nombre puede incluir además un indicador de versión).
-> Si haciendo doble click en el archivo de la app no permite abrirla, debe hacerse pulsando el botón derecho sobre el archivo y seleccionar "Abrir con", "Conectar más aplicaciones", y a continuación escoger Colaboratory (se pedirá instalar). Completada la instalación (tiempo aproximado: 2 minutos) se podrá cerrar la ventana de instalación para volver a visualizar la carpeta donde se encuentra el fichero de la app, que de ahora en adelante se podrá abrir haciendo doble click.
-> Se podrán añadir memorias en la carpeta "perfiles" como se indica en la aplicación en el apartado "crear perfil".
**// EN**
This is a work in progress.
This model is only an initial checkpoint that I will be improving over the following months.
**APP LINK (it will contain the latest version):** https://drive.google.com/drive/folders/1ewGcxxCYHHwhHhWtGlLiryZfV8wEAaBa?usp=sharing
-> The folder "memorium" must be downloaded and then uploaded to Google Drive at "My Drive", NOT inside any other folder.
The aim is to provide a model able to, using a mixture of the mT5 model's tasks, understand memories and provide an interaction useful for people with Alzheimer's, or for people like my own grandfather, who wrote his memories but they are now just a book on the shelf; this model can make those memories seem 'alive'.
I will soon (if it isn't uploaded by now) upload a **Google Colaboratory notebook with a visual App** that, using this model, will provide all the needed and wanted interaction with an easy-to-use interface.
| {} | CarlosPR/mt5-spanish-memmories-analysis | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| mt5-spanish-memmories-analysis
// ES
Este es un trabajo en proceso.
Este modelo aún es solo un punto de control inicial que mejoraré en los próximos meses.
El objetivo es proporcionar un modelo capaz de, utilizando una combinación de tareas del modelo mT5, comprender los recuerdos y proporcionar una interacción útil para las personas con alzeimer o personas como mi propio abuelo que escribió sus recuerdos, pero ahora es solo un libro en la estantería. por lo que este modelo puede hacer que esos recuerdos parezcan "vivos".
Pronto (si aún no está cargado) cargaré un cuaderno de Google Colaboratory con una aplicación visual que al usar este modelo proporcionará toda la interacción necesaria y deseada con una interfaz fácil de usar.
LINK APLICACIÓN (sobre él se actualizará la versión): URL
-> Debe descargarse la carpeta "memorium" del enlace y subirse a Google Drive sin incluir en ninguna otra carpeta (directamente en "Mi unidad").
-> A continuación se podrá abrir la app, encontrada dentro de dicha carpeta "memorium" con nombre "APP-Memorium" (el nombre puede incluir además un indicador de versión).
-> Si haciendo doble click en el archivo de la app no permite abrirla, debe hacerse pulsando el botón derecho sobre el archivo y seleccionar "Abrir con", "Conectar más aplicaciones", y a continuación escoger Colaboratory (se pedirá instalar). Completada la instalación (tiempo aproximado: 2 minutos) se podrá cerrar la ventana de instalación para volver a visualizar la carpeta donde se encuentra el fichero de la app, que de ahora en adelante se podrá abrir haciendo doble click.
-> Se podrán añadir memorias en la carpeta "perfiles" como se indica en la aplicación en el apartado "crear perfil".
// EN
This is a work in process.
This model is just an initial checkpoint yet that I will be improving the following months.
APP LINK (it will contain the latest version): URL
-> The folder "memorium" must be downloaded and then uploaded to Google Drive at "My Drive", NOT inside any other folder.
The aim is to provide a model able to, using a mixture of mT5 model's tasks, understand memories and provide an interaction useful for people with alzeimer or people like my own grandfather who wrote his memories but it is now just a book in the shelf, so this model can make those memories seem 'alive'.
I will soon (if it is´t uploaded by now) upload a Google Colaboratory notebook with a visual App that using this model will provide all the needed and wanted interaction with an easy-to-use Interface.
| [] | [
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
37
] | [
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Harry potter DialoGPT Model | {"tags": ["conversational"]} | CasualHomie/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry potter DialoGPT Model | [
"# Harry potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry potter DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry potter DialoGPT Model"
] |
automatic-speech-recognition | transformers |
# Cdial/Hausa_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other, and dev datasets):
- Loss: 0.275118
- Wer: 0.329955
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common voice Hausa train.tsv, dev.tsv, invalidated.tsv, reported.tsv and other.tsv
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
## Training procedure
For creating the training dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 5.175900 | 2.750914 | 1.000000 |
| 1000 | 1.028700 | 0.338649 | 0.497999 |
| 1500 | 0.332200 | 0.246896 | 0.402241 |
| 2000 | 0.227300 | 0.239640 | 0.395839 |
| 2500 | 0.175000 | 0.239577 | 0.373966 |
| 3000 | 0.140400 | 0.243272 | 0.356095 |
| 3500 | 0.119200 | 0.263761 | 0.365164 |
| 4000 | 0.099300 | 0.265954 | 0.353428 |
| 4500 | 0.084400 | 0.276367 | 0.349693 |
| 5000 | 0.073700 | 0.282631 | 0.343825 |
| 5500 | 0.068000 | 0.282344 | 0.341158 |
| 6000 | 0.064500 | 0.281591 | 0.342491 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Hausa_xlsr --dataset mozilla-foundation/common_voice_8_0 --config ha --split test
```
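For quick local inference, a hedged sketch with the ASR pipeline (the audio path is a placeholder; 16 kHz mono speech is assumed):
```python
# Hedged sketch: "sample.wav" is a placeholder path; the pipeline resamples
# input audio to the 16 kHz rate expected by wav2vec2-style models.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Cdial/hausa-asr")
print(asr("sample.wav")["text"])
```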
| {"language": ["ha"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ha", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Cdial/Hausa_xlsr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ha"}, "metrics": [{"type": "wer", "value": 0.20614541257934219, "name": "Test WER"}, {"type": "cer", "value": 0.04358048053214061, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ha"}, "metrics": [{"type": "wer", "value": 0.20614541257934219, "name": "Test WER"}, {"type": "cer", "value": 0.04358048053214061, "name": "Test CER"}]}]}]} | Cdial/hausa-asr | null | [
"transformers",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ha",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ha"
] | TAGS
#transformers #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ha #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| Cdial/Hausa\_xlsr
=================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other, and dev datasets):
* Loss: 0.275118
* Wer: 0.329955
Model description
-----------------
"facebook/wav2vec2-xls-r-300m" was finetuned.
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
Training data -
Common voice Hausa URL, URL, URL, URL and URL
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
Training procedure
------------------
For creating the training dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000096
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 13
* gradient\_accumulation\_steps: 2
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.0+cu102
* Datasets 1.18.3
* Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
"TAGS\n#transformers #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ha #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
97,
123,
5,
47,
34
] | [
"TAGS\n#transformers #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ha #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] |
text-generation | transformers |
# Cedille AI
Cedille is a project to bring large language models to non-English languages.
## fr-boris
Boris is a 6B parameter autoregressive language model based on the GPT-J architecture and trained using the [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) codebase.
Boris was trained on around 78B tokens of French text from the [C4](https://huggingface.co/datasets/c4) dataset. We started training from GPT-J, which has been trained on [The Pile](https://pile.eleuther.ai/). As a consequence the model still has good performance in English language. Boris makes use of the unmodified GPT-2 tokenizer.
Boris is named after the great French writer [Boris Vian](https://en.wikipedia.org/wiki/Boris_Vian).
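A minimal local generation sketch (hedged: fr-boris has 6B parameters, so half-precision loading on a large-memory GPU is assumed, and the French prompt is illustrative):
```python
# Hedged sketch: loading a 6B-parameter model needs substantial memory;
# float16 on a CUDA GPU is assumed here.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Cedille/fr-boris")
model = AutoModelForCausalLM.from_pretrained("Cedille/fr-boris", torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("La gastronomie française est", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```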
# How do I test Cedille?
For the time being, the easiest way to test the model is to use our [publicly accessible playground](https://en.cedille.ai/).
Cedille is a relatively large model and running it in production can get expensive. Consider contacting us for API access at hello@cedille.ai.
## 📊 Cedille paper
Our paper is out now! https://arxiv.org/abs/2202.03371
Thanks for citing our work if you make use of Cedille
```bibtex
@misc{muller2022cedille,
title={Cedille: A large autoregressive French language model},
author={Martin M{\"{u}}ller and Florian Laurent},
year={2022},
eprint={2202.03371},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact us
For any custom development please contact us at hello@cedille.ai.
## Links
* [Official website](https://en.cedille.ai/)
* [Blog](https://en.cedille.ai/blog)
* [GitHub](https://github.com/coteries/cedille-ai)
* [Twitter](https://twitter.com/CedilleAI)
| {"language": "fr", "license": "mit", "tags": ["pytorch", "causal-lm"], "datasets": ["c4"]} | Cedille/fr-boris | null | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"causal-lm",
"fr",
"dataset:c4",
"arxiv:2202.03371",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.03371"
] | [
"fr"
] | TAGS
#transformers #pytorch #gptj #text-generation #causal-lm #fr #dataset-c4 #arxiv-2202.03371 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Cedille AI
Cedille is a project to bring large language models to non-English languages.
## fr-boris
Boris is a 6B parameter autoregressive language model based on the GPT-J architecture and trained using the mesh-transformer-jax codebase.
Boris was trained on around 78B tokens of French text from the C4 dataset. We started training from GPT-J, which has been trained on The Pile. As a consequence the model still has good performance in English language. Boris makes use of the unmodified GPT-2 tokenizer.
Boris is named after the great French writer Boris Vian.
# How do I test Cedille?
For the time being, the easiest way to test the model is to use our publicly accessible playground.
Cedille is a relatively large model and running it in production can get expensive. Consider contacting us for API access at hello@URL.
## Cedille paper
Our paper is out now! URL
Thanks for citing our work if you make use of Cedille
## Contact us
For any custom development please contact us at hello@URL.
## Links
* Official website
* Blog
* GitHub
* Twitter
| [
"# Cedille AI\nCedille is a project to bring large language models to non-English languages.",
"## fr-boris\nBoris is a 6B parameter autoregressive language model based on the GPT-J architecture and trained using the mesh-transformer-jax codebase.\n\nBoris was trained on around 78B tokens of French text from the C4 dataset. We started training from GPT-J, which has been trained on The Pile. As a consequence the model still has good performance in English language. Boris makes use of the unmodified GPT-2 tokenizer.\n\nBoris is named after the great French writer Boris Vian.",
"# How do I test Cedille?\nFor the time being, the easiest way to test the model is to use our publicly accessible playground.\n\nCedille is a relatively large model and running it in production can get expensive. Consider contacting us for API access at hello@URL.",
"## Cedille paper\nOur paper is out now! URL\n\nThanks for citing our work if you make use of Cedille",
"## Contact us\nFor any custom development please contact us at hello@URL.",
"## Links\n* Official website\n* Blog\n* GitHub\n* Twitter"
] | [
"TAGS\n#transformers #pytorch #gptj #text-generation #causal-lm #fr #dataset-c4 #arxiv-2202.03371 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Cedille AI\nCedille is a project to bring large language models to non-English languages.",
"## fr-boris\nBoris is a 6B parameter autoregressive language model based on the GPT-J architecture and trained using the mesh-transformer-jax codebase.\n\nBoris was trained on around 78B tokens of French text from the C4 dataset. We started training from GPT-J, which has been trained on The Pile. As a consequence the model still has good performance in English language. Boris makes use of the unmodified GPT-2 tokenizer.\n\nBoris is named after the great French writer Boris Vian.",
"# How do I test Cedille?\nFor the time being, the easiest way to test the model is to use our publicly accessible playground.\n\nCedille is a relatively large model and running it in production can get expensive. Consider contacting us for API access at hello@URL.",
"## Cedille paper\nOur paper is out now! URL\n\nThanks for citing our work if you make use of Cedille",
"## Contact us\nFor any custom development please contact us at hello@URL.",
"## Links\n* Official website\n* Blog\n* GitHub\n* Twitter"
] | [
62,
22,
116,
59,
27,
17,
14
] | [
"TAGS\n#transformers #pytorch #gptj #text-generation #causal-lm #fr #dataset-c4 #arxiv-2202.03371 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n# Cedille AI\nCedille is a project to bring large language models to non-English languages.## fr-boris\nBoris is a 6B parameter autoregressive language model based on the GPT-J architecture and trained using the mesh-transformer-jax codebase.\n\nBoris was trained on around 78B tokens of French text from the C4 dataset. We started training from GPT-J, which has been trained on The Pile. As a consequence the model still has good performance in English language. Boris makes use of the unmodified GPT-2 tokenizer.\n\nBoris is named after the great French writer Boris Vian.# How do I test Cedille?\nFor the time being, the easiest way to test the model is to use our publicly accessible playground.\n\nCedille is a relatively large model and running it in production can get expensive. Consider contacting us for API access at hello@URL.## Cedille paper\nOur paper is out now! URL\n\nThanks for citing our work if you make use of Cedille## Contact us\nFor any custom development please contact us at hello@URL.## Links\n* Official website\n* Blog\n* GitHub\n* Twitter"
] |
null | transformers |
# ALBERT Base Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [big spanish corpora](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0008838834765
- Batch Size: 960
- Warmup ratio: 0.00625
- Warmup steps: 53333.33333
- Goal steps: 8533333.333
- Total steps: 3650000
- Total training time (aprox): 70.4 days.
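Although this card documents pretraining only, a hedged fill-mask sketch for the released checkpoint might look like the following (the example sentence is illustrative):
```python
# Hedged sketch: use the tokenizer's own mask token instead of hard-coding "[MASK]".
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dccuchile/albert-base-spanish")
masked = f"Santiago es la capital de {fill_mask.tokenizer.mask_token}."
print(fill_mask(masked))
```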
## Training loss
![https://drive.google.com/uc?export=view&id=1IsxcgMwd7Hl-3bSnNl8W9jUrHJeHtZql](https://drive.google.com/uc?export=view&id=1IsxcgMwd7Hl-3bSnNl8W9jUrHJeHtZql) | {"language": ["es"], "tags": ["albert", "spanish", "OpenCENIA"], "datasets": ["large_spanish_corpus"]} | dccuchile/albert-base-spanish | null | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us
|
# ALBERT Base Spanish
This is an ALBERT model trained on a big spanish corpora.
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0008838834765
- Batch Size: 960
- Warmup ratio: 0.00625
- Warmup steps: 53333.33333
- Goal steps: 8533333.333
- Total steps: 3650000
- Total training time (aprox): 70.4 days.
## Training loss
!URL | [
"# ALBERT Base Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.0008838834765\n- Batch Size: 960\n- Warmup ratio: 0.00625\n- Warmup steps: 53333.33333\n- Goal steps: 8533333.333\n- Total steps: 3650000\n- Total training time (aprox): 70.4 days.",
"## Training loss\n\n!URL"
] | [
"TAGS\n#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us \n",
"# ALBERT Base Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.0008838834765\n- Batch Size: 960\n- Warmup ratio: 0.00625\n- Warmup steps: 53333.33333\n- Goal steps: 8533333.333\n- Total steps: 3650000\n- Total training time (aprox): 70.4 days.",
"## Training loss\n\n!URL"
] | [
43,
113,
7
] | [
"TAGS\n#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us \n# ALBERT Base Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.0008838834765\n- Batch Size: 960\n- Warmup ratio: 0.00625\n- Warmup steps: 53333.33333\n- Goal steps: 8533333.333\n- Total steps: 3650000\n- Total training time (aprox): 70.4 days.## Training loss\n\n!URL"
] |
null | transformers |
# ALBERT Large Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [big spanish corpora](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.000625
- Batch Size: 512
- Warmup ratio: 0.003125
- Warmup steps: 12500
- Goal steps: 4000000
- Total steps: 1450000
- Total training time (aprox): 42 days.
## Training loss
![https://drive.google.com/uc?export=view&id=10EiI0Yge3U3CnGrqoMs1yJY020pPz_Io](https://drive.google.com/uc?export=view&id=10EiI0Yge3U3CnGrqoMs1yJY020pPz_Io)
| {"language": ["es"], "tags": ["albert", "spanish", "OpenCENIA"], "datasets": ["large_spanish_corpus"]} | dccuchile/albert-large-spanish | null | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us
|
# ALBERT Large Spanish
This is an ALBERT model trained on a big spanish corpora.
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.000625
- Batch Size: 512
- Warmup ratio: 0.003125
- Warmup steps: 12500
- Goal steps: 4000000
- Total steps: 1450000
- Total training time (aprox): 42 days.
## Training loss
!URL
| [
"# ALBERT Large Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.000625\n- Batch Size: 512\n- Warmup ratio: 0.003125\n- Warmup steps: 12500\n- Goal steps: 4000000\n- Total steps: 1450000\n- Total training time (aprox): 42 days.",
"## Training loss\n\n!URL"
] | [
"TAGS\n#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us \n",
"# ALBERT Large Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.000625\n- Batch Size: 512\n- Warmup ratio: 0.003125\n- Warmup steps: 12500\n- Goal steps: 4000000\n- Total steps: 1450000\n- Total training time (aprox): 42 days.",
"## Training loss\n\n!URL"
] | [
43,
99,
7
] | [
"TAGS\n#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us \n# ALBERT Large Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.000625\n- Batch Size: 512\n- Warmup ratio: 0.003125\n- Warmup steps: 12500\n- Goal steps: 4000000\n- Total steps: 1450000\n- Total training time (aprox): 42 days.## Training loss\n\n!URL"
] |
null | transformers |
# ALBERT Tiny Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [big spanish corpora](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.00125
- Batch Size: 2048
- Warmup ratio: 0.0125
- Warmup steps: 125000
- Goal steps: 10000000
- Total steps: 8300000
- Total training time (aprox): 58.2 days
## Training loss
![https://drive.google.com/uc?export=view&id=1KQc8yWZLKvDLjBtu4IOAgpTx0iLcvX_Q](https://drive.google.com/uc?export=view&id=1KQc8yWZLKvDLjBtu4IOAgpTx0iLcvX_Q) | {"language": ["es"], "tags": ["albert", "spanish", "OpenCENIA"], "datasets": ["large_spanish_corpus"]} | dccuchile/albert-tiny-spanish | null | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us
|
# ALBERT Tiny Spanish
This is an ALBERT model trained on a big spanish corpora.
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.00125
- Batch Size: 2048
- Warmup ratio: 0.0125
- Warmup steps: 125000
- Goal steps: 10000000
- Total steps: 8300000
- Total training time (aprox): 58.2 days
## Training loss
!URL | [
"# ALBERT Tiny Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.00125\n- Batch Size: 2048\n- Warmup ratio: 0.0125\n- Warmup steps: 125000\n- Goal steps: 10000000\n- Total steps: 8300000\n- Total training time (aprox): 58.2 days",
"## Training loss\n\n!URL"
] | [
"TAGS\n#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us \n",
"# ALBERT Tiny Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.00125\n- Batch Size: 2048\n- Warmup ratio: 0.0125\n- Warmup steps: 125000\n- Goal steps: 10000000\n- Total steps: 8300000\n- Total training time (aprox): 58.2 days",
"## Training loss\n\n!URL"
] | [
43,
101,
7
] | [
"TAGS\n#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us \n# ALBERT Tiny Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.00125\n- Batch Size: 2048\n- Warmup ratio: 0.0125\n- Warmup steps: 125000\n- Goal steps: 10000000\n- Total steps: 8300000\n- Total training time (aprox): 58.2 days## Training loss\n\n!URL"
] |
null | transformers |
# ALBERT XLarge Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [big spanish corpora](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0003125
- Batch Size: 128
- Warmup ratio: 0.00078125
- Warmup steps: 6250
- Goal steps: 8000000
- Total steps: 2775000
- Total training time (aprox): 64.2 days.
## Training loss
![https://drive.google.com/uc?export=view&id=1rw0vvqZY9LZAzRUACLjmP18Fc6D1fv7x](https://drive.google.com/uc?export=view&id=1rw0vvqZY9LZAzRUACLjmP18Fc6D1fv7x) | {"language": ["es"], "tags": ["albert", "spanish", "OpenCENIA"], "datasets": ["large_spanish_corpus"]} | dccuchile/albert-xlarge-spanish | null | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us
|
# ALBERT XLarge Spanish
This is an ALBERT model trained on a big spanish corpora.
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0003125
- Batch Size: 128
- Warmup ratio: 0.00078125
- Warmup steps: 6250
- Goal steps: 8000000
- Total steps: 2775000
- Total training time (aprox): 64.2 days.
## Training loss
!URL | [
"# ALBERT XLarge Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.0003125\n- Batch Size: 128\n- Warmup ratio: 0.00078125\n- Warmup steps: 6250\n- Goal steps: 8000000\n- Total steps: 2775000\n- Total training time (aprox): 64.2 days.",
"## Training loss\n!URL"
] | [
"TAGS\n#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us \n",
"# ALBERT XLarge Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.0003125\n- Batch Size: 128\n- Warmup ratio: 0.00078125\n- Warmup steps: 6250\n- Goal steps: 8000000\n- Total steps: 2775000\n- Total training time (aprox): 64.2 days.",
"## Training loss\n!URL"
] | [
43,
105,
7
] | [
"TAGS\n#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us \n# ALBERT XLarge Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.0003125\n- Batch Size: 128\n- Warmup ratio: 0.00078125\n- Warmup steps: 6250\n- Goal steps: 8000000\n- Total steps: 2775000\n- Total training time (aprox): 64.2 days.## Training loss\n!URL"
] |
null | transformers |
# ALBERT XXLarge Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [big spanish corpora](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0003125
- Batch Size: 128
- Warmup ratio: 0.00078125
- Warmup steps: 3125
- Goal steps: 4000000
- Total steps: 1650000
- Total training time (aprox): 70.7 days.
## Training loss
![https://drive.google.com/uc?export=view&id=1a9MHsk-QwBuCMtyDyRvZ5mv9Mzl2dWCn](https://drive.google.com/uc?export=view&id=1a9MHsk-QwBuCMtyDyRvZ5mv9Mzl2dWCn) | {"language": ["es"], "tags": ["albert", "spanish", "OpenCENIA"], "datasets": ["large_spanish_corpus"]} | dccuchile/albert-xxlarge-spanish | null | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us
|
# ALBERT XXLarge Spanish
This is an ALBERT model trained on a big spanish corpora.
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0003125
- Batch Size: 128
- Warmup ratio: 0.00078125
- Warmup steps: 3125
- Goal steps: 4000000
- Total steps: 1650000
- Total training time (aprox): 70.7 days.
## Training loss
!URL | [
"# ALBERT XXLarge Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.0003125\n- Batch Size: 128\n- Warmup ratio: 0.00078125\n- Warmup steps: 3125\n- Goal steps: 4000000\n- Total steps: 1650000\n- Total training time (aprox): 70.7 days.",
"## Training loss\n\n!URL"
] | [
"TAGS\n#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us \n",
"# ALBERT XXLarge Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.0003125\n- Batch Size: 128\n- Warmup ratio: 0.00078125\n- Warmup steps: 3125\n- Goal steps: 4000000\n- Total steps: 1650000\n- Total training time (aprox): 70.7 days.",
"## Training loss\n\n!URL"
] | [
43,
105,
7
] | [
"TAGS\n#transformers #pytorch #tf #albert #pretraining #spanish #OpenCENIA #es #dataset-large_spanish_corpus #endpoints_compatible #region-us \n# ALBERT XXLarge Spanish\n\nThis is an ALBERT model trained on a big spanish corpora.\nThe model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:\n- LR: 0.0003125\n- Batch Size: 128\n- Warmup ratio: 0.00078125\n- Warmup steps: 3125\n- Goal steps: 4000000\n- Total steps: 1650000\n- Total training time (aprox): 70.7 days.## Training loss\n\n!URL"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-recipe-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 3.2689 |
| No log | 2.0 | 6 | 3.0913 |
| No log | 3.0 | 9 | 3.0641 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
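Since the usage section above is empty, a minimal fill-mask sketch is shown below; the recipe-style sentence is only an illustration.
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="CennetOguz/distilbert-base-uncased-finetuned-recipe-1",
)

# DistilBERT (uncased) uses the literal [MASK] token.
for prediction in fill_mask("Preheat the [MASK] to 350 degrees."):
    print(prediction["sequence"], round(prediction["score"], 4))
```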
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-recipe-1", "results": []}]} | CennetOguz/distilbert-base-uncased-finetuned-recipe-1 | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-recipe-1
==========================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.0641
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 256
* eval\_batch\_size: 256
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 256\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 256\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
47,
114,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 256\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-recipe
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 3.2689 |
| No log | 2.0 | 6 | 3.0913 |
| No log | 3.0 | 9 | 3.0641 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
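For readers who want to reproduce a comparable run, the hyperparameters listed above translate roughly into the following `TrainingArguments`; this is a sketch only — the output directory is illustrative, and dataset preparation, the data collator and the `Trainer` call are omitted.
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported in this card; everything else is assumed.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-recipe",  # illustrative path
    learning_rate=2e-05,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # "Native AMP" mixed precision
    # adam_beta1 / adam_beta2 / adam_epsilon defaults already match (0.9, 0.999, 1e-08)
)
```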
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-recipe", "results": []}]} | CennetOguz/distilbert-base-uncased-finetuned-recipe | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-recipe
========================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.9488
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 256
* eval\_batch\_size: 256
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 256\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 256\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
47,
114,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 256\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-generation | transformers |
# Lego Batman DialoGPT Model | {"tags": ["conversational"]} | Chae/botman | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Lego Batman DialoGPT Model | [
"# Lego Batman DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Lego Batman DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Lego Batman DialoGPT Model"
] |
text-generation | transformers |
# Model trained on F.R.I.E.N.D.S dialogue | {"tags": ["conversational"]} | Chakita/Friends | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Model trained on F.R.I.E.N.D.S dialogue | [
"# Model trained on F.R.I.E.N.D.S dialogue"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Model trained on F.R.I.E.N.D.S dialogue"
] | [
43,
18
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# Model trained on F.R.I.E.N.D.S dialogue"
] |
fill-mask | transformers | Kannada BERT model finetuned on a news corpus
---
language:
- kn
thumbnail:
tags:
- Masked Language model
- Autocomplete
license: mit
datasets:
- custom data set of Kannada news
--- | {} | Chakita/KNUBert | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| Kannada BERT model finetuned on a news corpus
---
language:
- kn
thumbnail:
tags:
- Masked Language model
- Autocomplete
license: mit
datasets:
- custom data set of Kannada news
--- | [] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
31
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | RoBERTa model trained on Kannada news corpus. | {"tags": ["masked-lm", "fill-in-the-blanks"]} | Chakita/KROBERT | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"masked-lm",
"fill-in-the-blanks",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #masked-lm #fill-in-the-blanks #autotrain_compatible #endpoints_compatible #region-us
| RoBERTa model trained on Kannada news corpus. | [] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #fill-in-the-blanks #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
42
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #fill-in-the-blanks #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Kalbert
This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on a Kannada news dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.5835 | 1.0 | 3953 | 1.7985 |
| 1.6098 | 2.0 | 7906 | 1.7434 |
| 1.5266 | 3.0 | 11859 | 1.6934 |
| 1.5179 | 4.0 | 15812 | 1.6665 |
| 1.5459 | 5.0 | 19765 | 1.6135 |
| 1.5511 | 6.0 | 23718 | 1.6002 |
| 1.5209 | 7.0 | 27671 | 1.5657 |
| 1.5413 | 8.0 | 31624 | 1.5578 |
| 1.4828 | 9.0 | 35577 | 1.5465 |
| 1.4651 | 10.0 | 39530 | 1.5451 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
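The card has no usage example, so a minimal fill-mask sketch follows; the Kannada sentence is illustrative, and the mask token is taken from the loaded tokenizer rather than hard-coded.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Chakita/Kalbert")

# Illustrative Kannada input; the mask token comes from the tokenizer itself.
text = f"ಬೆಂಗಳೂರು ಕರ್ನಾಟಕದ {fill_mask.tokenizer.mask_token} ."
for prediction in fill_mask(text):
    print(prediction["token_str"], round(prediction["score"], 4))
```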
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "Kalbert", "results": []}]} | Chakita/Kalbert | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #albert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
| Kalbert
=======
This model is a fine-tuned version of ai4bharat/indic-bert on a kannada news dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5324
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.25.1
* Pytorch 1.13.0+cu116
* Datasets 2.8.0
* Tokenizers 0.13.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.25.1\n* Pytorch 1.13.0+cu116\n* Datasets 2.8.0\n* Tokenizers 0.13.2"
] | [
"TAGS\n#transformers #pytorch #tensorboard #albert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.25.1\n* Pytorch 1.13.0+cu116\n* Datasets 2.8.0\n* Tokenizers 0.13.2"
] | [
41,
112,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #albert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.25.1\n* Pytorch 1.13.0+cu116\n* Datasets 2.8.0\n* Tokenizers 0.13.2"
] |
fill-mask | transformers | RoBERTa model trained on OSCAR Kannada corpus. | {"tags": ["masked-lm", "fill-in-the-blanks"]} | Chakita/KannadaBERT | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"masked-lm",
"fill-in-the-blanks",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #masked-lm #fill-in-the-blanks #autotrain_compatible #endpoints_compatible #region-us
| RoBERTa model trained on OSCAR Kannada corpus. | [] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #fill-in-the-blanks #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
42
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #fill-in-the-blanks #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
#help why did i feed this bot the bee movie | {"tags": ["conversational"]} | Chalponkey/DialoGPT-small-Barry | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#help why did i feed this bot the bee movie | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | ChaseBread/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model"
] |
null | null |
## Model based on
[Ko-GPT-Trinity 1.2B (v0.5)](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5)
## Example
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained(
"CheonggyeMountain-Sherpa/kogpt-trinity-punct-wrapper",
revision="punct_wrapper-related_words-overfit", # or punct_wrapper-related_words-minevalloss
bos_token="<s>",
eos_token="</s>",
unk_token="<unk>",
pad_token="<pad>",
mask_token="<mask>",
)
model = AutoModelForCausalLM.from_pretrained(
"CheonggyeMountain-Sherpa/kogpt-trinity-punct-wrapper",
revision="punct_wrapper-related_words-overfit", # or punct_wrapper-related_words-minevalloss
pad_token_id=tokenizer.eos_token_id,
).to(device="cuda")
model.eval()
prompt = "석양이 보이는 경치"
wrapped_prompt = f"@{prompt}@<usr>\n"
with torch.no_grad():
tokens = tokenizer.encode(wrapped_prompt, return_tensors="pt").to(device="cuda")
gen_tokens = model.generate(
tokens,
max_length=64,
repetition_penalty=2.0,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
bos_token_id=tokenizer.bos_token_id,
top_k=16,
top_p=0.8,
)
generated = tokenizer.decode(gen_tokens[0][len(tokens[0]):])
print(generated)
# 해가 지고 있을 무렵
# 나는 석양을 보러 간다
# 붉은 하늘과 하얀 구름이 나를 반겨줄 것 같아서리
# 하지만 내가 본 해는 저물어만 가고
# 구름마저 자취를 감춘 어둠만이 남아있을 뿐이네
# 내가 탄 배는 보이지도 않고
``` | {"language": ["ko"], "license": "cc-by-nc-sa-4.0", "tags": ["gpt2"]} | CheonggyeMountain-Sherpa/kogpt-trinity-punct-wrapper | null | [
"gpt2",
"ko",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ko"
] | TAGS
#gpt2 #ko #license-cc-by-nc-sa-4.0 #region-us
|
## Model based on
Ko-GPT-Trinity 1.2B (v0.5)
## Example
| [
"## Model based on\nKo-GPT-Trinity 1.2B (v0.5)",
"## Example"
] | [
"TAGS\n#gpt2 #ko #license-cc-by-nc-sa-4.0 #region-us \n",
"## Model based on\nKo-GPT-Trinity 1.2B (v0.5)",
"## Example"
] | [
25,
21,
3
] | [
"TAGS\n#gpt2 #ko #license-cc-by-nc-sa-4.0 #region-us \n## Model based on\nKo-GPT-Trinity 1.2B (v0.5)## Example"
] |
question-answering | transformers | This question answering model was fine-tuned to detect negation expressions.
How to use:
question: negation
context: That is not safe!
Answer: not
question: negation
context: Weren't we going to go to the moon?
Answer: Weren't
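The same examples can be run through the standard question-answering pipeline; a minimal sketch:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Ching/negation_detector")

print(qa(question="negation", context="That is not safe!"))
# expected answer span: "not"
print(qa(question="negation", context="Weren't we going to go to the moon?"))
# expected answer span: "Weren't"
```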
| {} | Ching/negation_detector | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #question-answering #endpoints_compatible #region-us
| This question answering model was fine tuned to detect negation expressions
How to use:
question: negation
context: That is not safe!
Answer: not
question: negation
context: Weren't we going to go to the moon?
Answer: Weren't
| [] | [
"TAGS\n#transformers #pytorch #roberta #question-answering #endpoints_compatible #region-us \n"
] | [
23
] | [
"TAGS\n#transformers #pytorch #roberta #question-answering #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
Donald Trump DialoGPT model built by following the tutorial by [Ruolin Zheng](https://youtu.be/Rk8eM1p_xgM).
The training data came from the 2020 presidential debates.
More work is needed to optimize it. I don't have access to larger VRAM. | {"tags": ["conversational"]} | Chiuchiyin/DialoGPT-small-Donald | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Donald Trump DialoGPT Model built by following tutorial by Ruolin Zheng.
The data used for training was 2020 presidential debate.
More work is needed to optimize it. I don't have access to larger VRAM. | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | # CMJS DialoGPT Model | {"tags": ["conversational"]} | ChrisVCB/DialoGPT-medium-cmjs | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # CMJS DialoGPT Model | [
"# CMJS DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CMJS DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# CMJS DialoGPT Model"
] |
text-generation | transformers | # Eddie Jones DialoGPT Model | {"tags": ["conversational"]} | ChrisVCB/DialoGPT-medium-ej | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Eddie Jones DialoGPT Model | [
"# Eddie Jones DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Eddie Jones DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Eddie Jones DialoGPT Model"
] |
depth-estimation | null |
# MADNet Keras
MADNet is a deep stereo depth estimation model. Its key defining features are:
1. It has a light-weight architecture which means it has low latency.
2. It supports self-supervised training, so it can be conveniently adapted in the field with no training data.
3. It's a stereo depth model, which means it's capable of high accuracy.
The MADNet weights in this repository were trained using a Tensorflow 2 / Keras implementation of the original code. The model was created using the Keras Functional API, which enables the following features:
1. Good optimization.
2. High level Keras methods (.fit, .predict and .evaluate).
3. Little boilerplate code.
4. Decent support from external packages (like Weights and Biases).
5. Callbacks.
The weights provided were trained on either the 2012 / 2015 kitti stereo datasets or the flyingthings-3d dataset. The weights of the pretrained models from the original paper (tf1_conversion_kitti.h5 and tf1_conversion_synthetic.h5) are provided in tensorflow 2 format. The TF1 weights help speed up fine-tuning, but it's recommended to use either synthetic.h5 (trained on flyingthings-3d) or kitti.h5 (trained on the 2012 and 2015 kitti stereo datasets).
**Abstract**:
Deep convolutional neural networks trained end-to-end are the undisputed state-of-the-art methods to regress dense disparity maps directly from stereo pairs. However, such methods suffer from notable accuracy drops when exposed to scenarios significantly different from those seen in the training phase (e.g.real vs synthetic images, indoor vs outdoor, etc). As it is unlikely to be able to gather enough samples to achieve effective training/ tuning in any target domain, we propose to perform unsupervised and continuous online adaptation of a deep stereo network in order to preserve its accuracy independently of the sensed environment. However, such a strategy can be extremely demanding regarding computational resources and thus not enabling real-time performance. Therefore, we address this side effect by introducing a new lightweight, yet effective, deep stereo architecture Modularly ADaptive Network (MADNet) and by developing Modular ADaptation (MAD), an algorithm to train independently only sub-portions of our model. By deploying MADNet together with MAD we propose the first ever realtime self-adaptive deep stereo system.
## Usage Instructions
See the accompanying codes readme for details on how to perform training and inferencing with the model: [madnet-deep-stereo-with-keras](https://github.com/ChristianOrr/madnet-deep-stereo-with-keras).
## Training
### TF1 Kitti and TF1 Synthetic
Training details for the TF1 weights are available in the supplementary material (at the end) of this paper: [Real-time self-adaptive deep stereo](https://arxiv.org/abs/1810.05424)
### Synthetic
The synthetic model was finetuned using the tf1 synthetic weights. It was trained on the flyingthings-3d dataset with the following parameters:
- Steps: 1.5 million
- Learning Rate: 0.0001
- Decay Rate: 0.999
- Minimum Learning Rate Cap: 0.000001
- Batch Size: 1
- Optimizer: Adam
- Image Height: 480
- Image Width: 640
### Kitti
The kitti model was finetuned using the synthetic weights. A TensorBoard events file is available in the logs directory. It was trained on the 2012 and 2015 kitti stereo datasets with the following parameters (an illustrative sketch of the learning-rate schedule follows the list):
- Steps: 0.5 million
- Learning Rate: 0.0001
- Decay Rate: 0.999
- Minimum Learning Rate Cap: 0.0000001
- Batch Size: 1
- Optimizer: Adam
- Image Height: 480
- Image Width: 640
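The decaying learning rate with a lower bound described above is not a stock Keras schedule, so the sketch below shows one way it could be expressed in TensorFlow 2. The class name and the per-step decay interval are assumptions; the actual training code lives in the linked repository.
```python
import tensorflow as tf

class ClampedExponentialDecay(tf.keras.optimizers.schedules.LearningRateSchedule):
    """Exponential decay with a lower bound, mirroring the kitti parameters above.

    The decay interval (how many steps one application of the decay rate spans)
    is an assumption, not taken from the reference implementation.
    """

    def __init__(self, initial_lr=1e-4, decay_rate=0.999, min_lr=1e-7, decay_steps=1000):
        super().__init__()
        self.initial_lr = initial_lr
        self.decay_rate = decay_rate
        self.min_lr = min_lr
        self.decay_steps = decay_steps

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        lr = self.initial_lr * tf.pow(self.decay_rate, step / self.decay_steps)
        return tf.maximum(lr, self.min_lr)  # clamp at the minimum learning rate cap

    def get_config(self):
        return {
            "initial_lr": self.initial_lr,
            "decay_rate": self.decay_rate,
            "min_lr": self.min_lr,
            "decay_steps": self.decay_steps,
        }

# Adam with the decaying, clamped learning rate listed for the kitti run.
optimizer = tf.keras.optimizers.Adam(learning_rate=ClampedExponentialDecay())
```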
## BibTeX entry and citation info
```bibtex
@InProceedings{Tonioni_2019_CVPR,
author = {Tonioni, Alessio and Tosi, Fabio and Poggi, Matteo and Mattoccia, Stefano and Di Stefano, Luigi},
title = {Real-time self-adaptive deep stereo},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}
```
```bibtex
@article{Poggi2021continual,
author={Poggi, Matteo and Tonioni, Alessio and Tosi, Fabio
and Mattoccia, Stefano and Di Stefano, Luigi},
title={Continual Adaptation for Deep Stereo},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
year={2021}
}
```
```bibtex
@InProceedings{MIFDB16,
author = "N. Mayer and E. Ilg and P. Hausser and P. Fischer and D. Cremers and A. Dosovitskiy and T. Brox",
title = "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation",
booktitle = "IEEE International Conference on Computer Vision and Pattern Recognition (CVPR)",
year = "2016",
note = "arXiv:1512.02134",
url = "http://lmb.informatik.uni-freiburg.de/Publications/2016/MIFDB16"
}
```
```bibtex
@INPROCEEDINGS{Geiger2012CVPR,
author = {Andreas Geiger and Philip Lenz and Raquel Urtasun},
title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2012}
}
```
```bibtex
@INPROCEEDINGS{Menze2015CVPR,
author = {Moritz Menze and Andreas Geiger},
title = {Object Scene Flow for Autonomous Vehicles},
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2015}
}
``` | {"license": "apache-2.0", "tags": ["vision", "deep-stereo", "depth-estimation", "Tensorflow2", "Keras"], "datasets": ["flyingthings-3d", "kitti"]} | ChristianOrr/madnet_keras | null | [
"tensorboard",
"vision",
"deep-stereo",
"depth-estimation",
"Tensorflow2",
"Keras",
"dataset:flyingthings-3d",
"dataset:kitti",
"arxiv:1810.05424",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.05424"
] | [] | TAGS
#tensorboard #vision #deep-stereo #depth-estimation #Tensorflow2 #Keras #dataset-flyingthings-3d #dataset-kitti #arxiv-1810.05424 #license-apache-2.0 #region-us
|
# MADNet Keras
MADNet is a deep stereo depth estimation model. Its key defining features are:
1. It has a light-weight architecture which means it has low latency.
2. It supports self-supervised training, so it can be conveniently adapted in the field with no training data.
3. It's a stereo depth model, which means it's capable of high accuracy.
The MADNet weights in this repository were trained using a Tensorflow 2 / Keras implementation of the original code. The model was created using the Keras Functional API, which enables the following features:
1. Good optimization.
2. High level Keras methods (.fit, .predict and .evaluate).
3. Little boilerplate code.
4. Decent support from external packages (like Weights and Biases).
5. Callbacks.
The weights provided were either trained on the 2012 / 2015 kitti stereo dataset or flyingthings-3d dataset. The weights of the pretrained models from the original paper (tf1_conversion_kitti.h5 and tf1_conversion_synthetic.h5) are provided in tensorflow 2 format. The TF1 weights help speed up fine-tuning, but its recommended to use either synthetic.h5 (trained on flyingthings-3d) or kitti.h5 (trained on 2012 and 2015 kitti stereo datasets).
Abstract:
Deep convolutional neural networks trained end-to-end are the undisputed state-of-the-art methods to regress dense disparity maps directly from stereo pairs. However, such methods suffer from notable accuracy drops when exposed to scenarios significantly different from those seen in the training phase (e.g.real vs synthetic images, indoor vs outdoor, etc). As it is unlikely to be able to gather enough samples to achieve effective training/ tuning in any target domain, we propose to perform unsupervised and continuous online adaptation of a deep stereo network in order to preserve its accuracy independently of the sensed environment. However, such a strategy can be extremely demanding regarding computational resources and thus not enabling real-time performance. Therefore, we address this side effect by introducing a new lightweight, yet effective, deep stereo architecture Modularly ADaptive Network (MADNet) and by developing Modular ADaptation (MAD), an algorithm to train independently only sub-portions of our model. By deploying MADNet together with MAD we propose the first ever realtime self-adaptive deep stereo system.
## Usage Instructions
See the accompanying codes readme for details on how to perform training and inferencing with the model: madnet-deep-stereo-with-keras.
## Training
### TF1 Kitti and TF1 Synthetic
Training details for the TF1 weights are available in the supplementary material (at the end) of this paper: Real-time self-adaptive deep stereo
### Synthetic
The synthetic model was finetuned using the tf1 synthetic weights. It was trained on the flyingthings-3d dataset with the following parameters:
- Steps: 1.5 million
- Learning Rate: 0.0001
- Decay Rate: 0.999
- Minimum Learning Rate Cap: 0.000001
- Batch Size: 1
- Optimizer: Adam
- Image Height: 480
- Image Width: 640
### Kitti
The kitti model was finetuned using the synthetic weights. Tensorboard events file is available in the logs directory. It was trained on the 2012 and 2015 kitti stereo dataset with the following parameters:
- Steps: 0.5 million
- Learning Rate: 0.0001
- Decay Rate: 0.999
- Minimum Learning Rate Cap: 0.0000001
- Batch Size: 1
- Optimizer: Adam
- Image Height: 480
- Image Width: 640
## BibTeX entry and citation info
| [
"# MADNet Keras\r\n\r\nMADNet is a deep stereo depth estimation model. Its key defining features are:\r\n 1. It has a light-weight architecture which means it has low latency.\r\n 2. It supports self-supervised training, so it can be conveniently adapted in the field with no training data. \r\n 3. It's a stereo depth model, which means it's capable of high accuracy.\r\n \r\n The MADNet weights in this repository were trained using a Tensorflow 2 / Keras implementation of the original code. The model was created using the Keras Functional API, which enables the following features:\r\n 1. Good optimization. \r\n 2. High level Keras methods (.fit, .predict and .evaluate).\r\n 3. Little boilerplate code.\r\n 4. Decent support from external packages (like Weights and Biases). \r\n 5. Callbacks.\r\n \r\n The weights provided were either trained on the 2012 / 2015 kitti stereo dataset or flyingthings-3d dataset. The weights of the pretrained models from the original paper (tf1_conversion_kitti.h5 and tf1_conversion_synthetic.h5) are provided in tensorflow 2 format. The TF1 weights help speed up fine-tuning, but its recommended to use either synthetic.h5 (trained on flyingthings-3d) or kitti.h5 (trained on 2012 and 2015 kitti stereo datasets).\r\n\r\nAbstract:\r\n\r\nDeep convolutional neural networks trained end-to-end are the undisputed state-of-the-art methods to regress dense disparity maps directly from stereo pairs. However, such methods suffer from notable accuracy drops when exposed to scenarios significantly different from those seen in the training phase (e.g.real vs synthetic images, indoor vs outdoor, etc). As it is unlikely to be able to gather enough samples to achieve effective training/ tuning in any target domain, we propose to perform unsupervised and continuous online adaptation of a deep stereo network in order to preserve its accuracy independently of the sensed environment. However, such a strategy can be extremely demanding regarding computational resources and thus not enabling real-time performance. Therefore, we address this side effect by introducing a new lightweight, yet effective, deep stereo architecture Modularly ADaptive Network (MADNet) and by developing Modular ADaptation (MAD), an algorithm to train independently only sub-portions of our model. By deploying MADNet together with MAD we propose the first ever realtime self-adaptive deep stereo system.",
"## Usage Instructions\r\nSee the accompanying codes readme for details on how to perform training and inferencing with the model: madnet-deep-stereo-with-keras.",
"## Training",
"### TF1 Kitti and TF1 Synthetic\r\nTraining details for the TF1 weights are available in the supplementary material (at the end) of this paper: Real-time self-adaptive deep stereo",
"### Synthetic\r\nThe synthetic model was finetuned using the tf1 synthetic weights. It was trained on the flyingthings-3d dataset with the following parameters:\r\n- Steps: 1.5 million\r\n- Learning Rate: 0.0001\r\n- Decay Rate: 0.999\r\n- Minimum Learning Rate Cap: 0.000001\r\n- Batch Size: 1\r\n- Optimizer: Adam\r\n- Image Height: 480\r\n- Image Width: 640",
"### Kitti\r\nThe kitti model was finetuned using the synthetic weights. Tensorboard events file is available in the logs directory. It was trained on the 2012 and 2015 kitti stereo dataset with the following parameters:\r\n- Steps: 0.5 million\r\n- Learning Rate: 0.0001\r\n- Decay Rate: 0.999\r\n- Minimum Learning Rate Cap: 0.0000001\r\n- Batch Size: 1\r\n- Optimizer: Adam\r\n- Image Height: 480\r\n- Image Width: 640",
"## BibTeX entry and citation info"
] | [
"TAGS\n#tensorboard #vision #deep-stereo #depth-estimation #Tensorflow2 #Keras #dataset-flyingthings-3d #dataset-kitti #arxiv-1810.05424 #license-apache-2.0 #region-us \n",
"# MADNet Keras\r\n\r\nMADNet is a deep stereo depth estimation model. Its key defining features are:\r\n 1. It has a light-weight architecture which means it has low latency.\r\n 2. It supports self-supervised training, so it can be conveniently adapted in the field with no training data. \r\n 3. It's a stereo depth model, which means it's capable of high accuracy.\r\n \r\n The MADNet weights in this repository were trained using a Tensorflow 2 / Keras implementation of the original code. The model was created using the Keras Functional API, which enables the following features:\r\n 1. Good optimization. \r\n 2. High level Keras methods (.fit, .predict and .evaluate).\r\n 3. Little boilerplate code.\r\n 4. Decent support from external packages (like Weights and Biases). \r\n 5. Callbacks.\r\n \r\n The weights provided were either trained on the 2012 / 2015 kitti stereo dataset or flyingthings-3d dataset. The weights of the pretrained models from the original paper (tf1_conversion_kitti.h5 and tf1_conversion_synthetic.h5) are provided in tensorflow 2 format. The TF1 weights help speed up fine-tuning, but its recommended to use either synthetic.h5 (trained on flyingthings-3d) or kitti.h5 (trained on 2012 and 2015 kitti stereo datasets).\r\n\r\nAbstract:\r\n\r\nDeep convolutional neural networks trained end-to-end are the undisputed state-of-the-art methods to regress dense disparity maps directly from stereo pairs. However, such methods suffer from notable accuracy drops when exposed to scenarios significantly different from those seen in the training phase (e.g.real vs synthetic images, indoor vs outdoor, etc). As it is unlikely to be able to gather enough samples to achieve effective training/ tuning in any target domain, we propose to perform unsupervised and continuous online adaptation of a deep stereo network in order to preserve its accuracy independently of the sensed environment. However, such a strategy can be extremely demanding regarding computational resources and thus not enabling real-time performance. Therefore, we address this side effect by introducing a new lightweight, yet effective, deep stereo architecture Modularly ADaptive Network (MADNet) and by developing Modular ADaptation (MAD), an algorithm to train independently only sub-portions of our model. By deploying MADNet together with MAD we propose the first ever realtime self-adaptive deep stereo system.",
"## Usage Instructions\r\nSee the accompanying codes readme for details on how to perform training and inferencing with the model: madnet-deep-stereo-with-keras.",
"## Training",
"### TF1 Kitti and TF1 Synthetic\r\nTraining details for the TF1 weights are available in the supplementary material (at the end) of this paper: Real-time self-adaptive deep stereo",
"### Synthetic\r\nThe synthetic model was finetuned using the tf1 synthetic weights. It was trained on the flyingthings-3d dataset with the following parameters:\r\n- Steps: 1.5 million\r\n- Learning Rate: 0.0001\r\n- Decay Rate: 0.999\r\n- Minimum Learning Rate Cap: 0.000001\r\n- Batch Size: 1\r\n- Optimizer: Adam\r\n- Image Height: 480\r\n- Image Width: 640",
"### Kitti\r\nThe kitti model was finetuned using the synthetic weights. Tensorboard events file is available in the logs directory. It was trained on the 2012 and 2015 kitti stereo dataset with the following parameters:\r\n- Steps: 0.5 million\r\n- Learning Rate: 0.0001\r\n- Decay Rate: 0.999\r\n- Minimum Learning Rate Cap: 0.0000001\r\n- Batch Size: 1\r\n- Optimizer: Adam\r\n- Image Height: 480\r\n- Image Width: 640",
"## BibTeX entry and citation info"
] | [
58,
517,
38,
3,
44,
90,
101,
9
] | [
"TAGS\n#tensorboard #vision #deep-stereo #depth-estimation #Tensorflow2 #Keras #dataset-flyingthings-3d #dataset-kitti #arxiv-1810.05424 #license-apache-2.0 #region-us \n# MADNet Keras\r\n\r\nMADNet is a deep stereo depth estimation model. Its key defining features are:\r\n 1. It has a light-weight architecture which means it has low latency.\r\n 2. It supports self-supervised training, so it can be conveniently adapted in the field with no training data. \r\n 3. It's a stereo depth model, which means it's capable of high accuracy.\r\n \r\n The MADNet weights in this repository were trained using a Tensorflow 2 / Keras implementation of the original code. The model was created using the Keras Functional API, which enables the following features:\r\n 1. Good optimization. \r\n 2. High level Keras methods (.fit, .predict and .evaluate).\r\n 3. Little boilerplate code.\r\n 4. Decent support from external packages (like Weights and Biases). \r\n 5. Callbacks.\r\n \r\n The weights provided were either trained on the 2012 / 2015 kitti stereo dataset or flyingthings-3d dataset. The weights of the pretrained models from the original paper (tf1_conversion_kitti.h5 and tf1_conversion_synthetic.h5) are provided in tensorflow 2 format. The TF1 weights help speed up fine-tuning, but its recommended to use either synthetic.h5 (trained on flyingthings-3d) or kitti.h5 (trained on 2012 and 2015 kitti stereo datasets).\r\n\r\nAbstract:\r\n\r\nDeep convolutional neural networks trained end-to-end are the undisputed state-of-the-art methods to regress dense disparity maps directly from stereo pairs. However, such methods suffer from notable accuracy drops when exposed to scenarios significantly different from those seen in the training phase (e.g.real vs synthetic images, indoor vs outdoor, etc). As it is unlikely to be able to gather enough samples to achieve effective training/ tuning in any target domain, we propose to perform unsupervised and continuous online adaptation of a deep stereo network in order to preserve its accuracy independently of the sensed environment. However, such a strategy can be extremely demanding regarding computational resources and thus not enabling real-time performance. Therefore, we address this side effect by introducing a new lightweight, yet effective, deep stereo architecture Modularly ADaptive Network (MADNet) and by developing Modular ADaptation (MAD), an algorithm to train independently only sub-portions of our model. By deploying MADNet together with MAD we propose the first ever realtime self-adaptive deep stereo system.## Usage Instructions\r\nSee the accompanying codes readme for details on how to perform training and inferencing with the model: madnet-deep-stereo-with-keras.## Training### TF1 Kitti and TF1 Synthetic\r\nTraining details for the TF1 weights are available in the supplementary material (at the end) of this paper: Real-time self-adaptive deep stereo### Synthetic\r\nThe synthetic model was finetuned using the tf1 synthetic weights. It was trained on the flyingthings-3d dataset with the following parameters:\r\n- Steps: 1.5 million\r\n- Learning Rate: 0.0001\r\n- Decay Rate: 0.999\r\n- Minimum Learning Rate Cap: 0.000001\r\n- Batch Size: 1\r\n- Optimizer: Adam\r\n- Image Height: 480\r\n- Image Width: 640### Kitti\r\nThe kitti model was finetuned using the synthetic weights. Tensorboard events file is available in the logs directory. 
It was trained on the 2012 and 2015 kitti stereo dataset with the following parameters:\r\n- Steps: 0.5 million\r\n- Learning Rate: 0.0001\r\n- Decay Rate: 0.999\r\n- Minimum Learning Rate Cap: 0.0000001\r\n- Batch Size: 1\r\n- Optimizer: Adam\r\n- Image Height: 480\r\n- Image Width: 640## BibTeX entry and citation info"
] |
null | transformers | # IndoBERT (Indonesian BERT Model)
## Model description
ELECTRA is a new method for self-supervised language representation learning. This repository contains the pre-trained ELECTRA base model (TensorFlow 1.15.0) trained on a large Indonesian corpus (~16GB of raw text, ~2B Indonesian words).
IndoELECTRA is a pre-trained language model based on the ELECTRA architecture for the Indonesian language.
This model is the base version, which uses the electra-base config.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ChristopherA08/IndoELECTRA")
model = AutoModel.from_pretrained("ChristopherA08/IndoELECTRA")
tokenizer.encode("hai aku mau makan.")
[2, 8078, 1785, 2318, 1946, 18, 4]
```
## Training procedure
Training was performed using Google's original TensorFlow code on an eight-core Google Cloud TPU v2.
We used a Google Cloud Storage bucket for persistent storage of training data and models.
| {"language": "id", "datasets": ["oscar"]} | ChristopherA08/IndoELECTRA | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"id",
"dataset:oscar",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"id"
] | TAGS
#transformers #pytorch #electra #pretraining #id #dataset-oscar #endpoints_compatible #region-us
| # IndoBERT (Indonesian BERT Model)
## Model description
ELECTRA is a new method for self-supervised language representation learning. This repository contains the pre-trained Electra Base model (tensorflow 1.15.0) trained in a Large Indonesian corpus (~16GB of raw text | ~2B indonesian words).
IndoELECTRA is a pre-trained language model based on ELECTRA architecture for the Indonesian Language.
This model is base version which use electra-base config.
## Intended uses & limitations
#### How to use
## Training procedure
The training of the model has been performed using Google's original Tensorflow code on eight core Google Cloud TPU v2.
We used a Google Cloud Storage bucket, for persistent storage of training data and models.
| [
"# IndoBERT (Indonesian BERT Model)",
"## Model description\nELECTRA is a new method for self-supervised language representation learning. This repository contains the pre-trained Electra Base model (tensorflow 1.15.0) trained in a Large Indonesian corpus (~16GB of raw text | ~2B indonesian words).\nIndoELECTRA is a pre-trained language model based on ELECTRA architecture for the Indonesian Language. \n\nThis model is base version which use electra-base config.",
"## Intended uses & limitations",
"#### How to use",
"## Training procedure\n\nThe training of the model has been performed using Google's original Tensorflow code on eight core Google Cloud TPU v2.\nWe used a Google Cloud Storage bucket, for persistent storage of training data and models."
] | [
"TAGS\n#transformers #pytorch #electra #pretraining #id #dataset-oscar #endpoints_compatible #region-us \n",
"# IndoBERT (Indonesian BERT Model)",
"## Model description\nELECTRA is a new method for self-supervised language representation learning. This repository contains the pre-trained Electra Base model (tensorflow 1.15.0) trained in a Large Indonesian corpus (~16GB of raw text | ~2B indonesian words).\nIndoELECTRA is a pre-trained language model based on ELECTRA architecture for the Indonesian Language. \n\nThis model is base version which use electra-base config.",
"## Intended uses & limitations",
"#### How to use",
"## Training procedure\n\nThe training of the model has been performed using Google's original Tensorflow code on eight core Google Cloud TPU v2.\nWe used a Google Cloud Storage bucket, for persistent storage of training data and models."
] | [
31,
8,
95,
6,
7,
47
] | [
"TAGS\n#transformers #pytorch #electra #pretraining #id #dataset-oscar #endpoints_compatible #region-us \n# IndoBERT (Indonesian BERT Model)## Model description\nELECTRA is a new method for self-supervised language representation learning. This repository contains the pre-trained Electra Base model (tensorflow 1.15.0) trained in a Large Indonesian corpus (~16GB of raw text | ~2B indonesian words).\nIndoELECTRA is a pre-trained language model based on ELECTRA architecture for the Indonesian Language. \n\nThis model is base version which use electra-base config.## Intended uses & limitations#### How to use## Training procedure\n\nThe training of the model has been performed using Google's original Tensorflow code on eight core Google Cloud TPU v2.\nWe used a Google Cloud Storage bucket, for persistent storage of training data and models."
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Chuah/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT MOdel"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT MOdel"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT MOdel"
] |
text-generation | transformers |
# Dr. Fauci DialoGPT Model | {"tags": ["conversational"]} | ChukSamuels/DialoGPT-small-Dr.FauciBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Dr. Fauci DialoGPT Model | [
"# Dr. Fauci DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Dr. Fauci DialoGPT Model"
] | [
39,
10
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Dr. Fauci DialoGPT Model"
] |
null | null | copied from boris | {} | Cilan/dalle-knockoff | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| copied from boris | [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
null | transformers |
## Japanese ELECTRA-small
We provide a Japanese **ELECTRA-Small** model, as described in [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
Our pretraining process employs subword units derived from the [Japanese Wikipedia](https://dumps.wikimedia.org/jawiki/latest), using the [Byte-Pair Encoding](https://www.aclweb.org/anthology/P16-1162.pdf) method and building on an initial tokenization with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd). For optimal performance, please take care to set your MeCab dictionary appropriately.
## How to use the discriminator in `transformers`
```
from transformers import BertJapaneseTokenizer, ElectraForPreTraining
tokenizer = BertJapaneseTokenizer.from_pretrained('Cinnamon/electra-small-japanese-discriminator', mecab_kwargs={"mecab_option": "-d /usr/lib/x86_64-linux-gnu/mecab/dic/mecab-ipadic-neologd"})
model = ElectraForPreTraining.from_pretrained('Cinnamon/electra-small-japanese-discriminator')
```
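A short continuation of the snippet above, showing how the discriminator's per-token logits can be inspected; the Japanese sentence is illustrative, and positive logits mark tokens the model considers replaced.
```python
import torch

# Assumes `tokenizer` and `model` from the snippet above are already loaded.
inputs = tokenizer("東京は日本の首都です。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
flags = (logits[0] > 0).long()  # 1 = predicted "replaced", 0 = predicted "original"
for token, flag in zip(tokens, flags.tolist()):
    print(token, flag)
```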
| {"language": "ja", "license": "apache-2.0"} | Cinnamon/electra-small-japanese-discriminator | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"ja",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ja"
] | TAGS
#transformers #pytorch #electra #pretraining #ja #license-apache-2.0 #endpoints_compatible #region-us
|
## Japanese ELECTRA-small
We provide a Japanese ELECTRA-Small model, as described in ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.
Our pretraining process employs subword units derived from the Japanese Wikipedia, using the Byte-Pair Encoding method and building on an initial tokenization with mecab-ipadic-NEologd. For optimal performance, please take care to set your MeCab dictionary appropriately.
## How to use the discriminator in 'transformers'
| [
"## Japanese ELECTRA-small\n\nWe provide a Japanese ELECTRA-Small model, as described in ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nOur pretraining process employs subword units derived from the Japanese Wikipedia, using the Byte-Pair Encoding method and building on an initial tokenization with mecab-ipadic-NEologd. For optimal performance, please take care to set your MeCab dictionary appropriately.",
"## How to use the discriminator in 'transformers'"
] | [
"TAGS\n#transformers #pytorch #electra #pretraining #ja #license-apache-2.0 #endpoints_compatible #region-us \n",
"## Japanese ELECTRA-small\n\nWe provide a Japanese ELECTRA-Small model, as described in ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nOur pretraining process employs subword units derived from the Japanese Wikipedia, using the Byte-Pair Encoding method and building on an initial tokenization with mecab-ipadic-NEologd. For optimal performance, please take care to set your MeCab dictionary appropriately.",
"## How to use the discriminator in 'transformers'"
] | [
34,
96,
13
] | [
"TAGS\n#transformers #pytorch #electra #pretraining #ja #license-apache-2.0 #endpoints_compatible #region-us \n## Japanese ELECTRA-small\n\nWe provide a Japanese ELECTRA-Small model, as described in ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nOur pretraining process employs subword units derived from the Japanese Wikipedia, using the Byte-Pair Encoding method and building on an initial tokenization with mecab-ipadic-NEologd. For optimal performance, please take care to set your MeCab dictionary appropriately.## How to use the discriminator in 'transformers'"
] |
fill-mask | transformers | ## Japanese ELECTRA-small
We provide a Japanese **ELECTRA-Small** model, as described in [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
Our pretraining process employs subword units derived from the [Japanese Wikipedia](https://dumps.wikimedia.org/jawiki/latest), using the [Byte-Pair Encoding](https://www.aclweb.org/anthology/P16-1162.pdf) method and building on an initial tokenization with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd). For optimal performance, please take care to set your MeCab dictionary appropriately.
```
# ELECTRA-small generator usage
from transformers import BertJapaneseTokenizer, ElectraForMaskedLM
tokenizer = BertJapaneseTokenizer.from_pretrained('Cinnamon/electra-small-japanese-generator', mecab_kwargs={"mecab_option": "-d /usr/lib/x86_64-linux-gnu/mecab/dic/mecab-ipadic-neologd"})
model = ElectraForMaskedLM.from_pretrained('Cinnamon/electra-small-japanese-generator')
```
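The generator can also be used to suggest tokens for a masked position. The following is a minimal sketch that is not part of the original example: the input sentence and the choice of top-5 candidates are illustrative assumptions, and the MeCab dictionary option shown above is omitted for brevity.
```
import torch
from transformers import BertJapaneseTokenizer, ElectraForMaskedLM

tokenizer = BertJapaneseTokenizer.from_pretrained('Cinnamon/electra-small-japanese-generator')
model = ElectraForMaskedLM.from_pretrained('Cinnamon/electra-small-japanese-generator')

text = f'京都大学で自然言語処理を{tokenizer.mask_token}する。'  # illustrative input, not from the original card
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Top-5 candidate tokens for the masked position
mask_positions = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_positions[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```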
| {"language": "ja"} | Cinnamon/electra-small-japanese-generator | null | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"ja",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ja"
] | TAGS
#transformers #pytorch #electra #fill-mask #ja #autotrain_compatible #endpoints_compatible #region-us
| ## Japanese ELECTRA-small
We provide a Japanese ELECTRA-Small model, as described in ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.
Our pretraining process employs subword units derived from the Japanese Wikipedia, using the Byte-Pair Encoding method and building on an initial tokenization with mecab-ipadic-NEologd. For optimal performance, please take care to set your MeCab dictionary appropriately.
| [
"## Japanese ELECTRA-small\n\nWe provide a Japanese ELECTRA-Small model, as described in ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nOur pretraining process employs subword units derived from the Japanese Wikipedia, using the Byte-Pair Encoding method and building on an initial tokenization with mecab-ipadic-NEologd. For optimal performance, please take care to set your MeCab dictionary appropriately."
] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #ja #autotrain_compatible #endpoints_compatible #region-us \n",
"## Japanese ELECTRA-small\n\nWe provide a Japanese ELECTRA-Small model, as described in ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nOur pretraining process employs subword units derived from the Japanese Wikipedia, using the Byte-Pair Encoding method and building on an initial tokenization with mecab-ipadic-NEologd. For optimal performance, please take care to set your MeCab dictionary appropriately."
] | [
31,
96
] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #ja #autotrain_compatible #endpoints_compatible #region-us \n## Japanese ELECTRA-small\n\nWe provide a Japanese ELECTRA-Small model, as described in ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nOur pretraining process employs subword units derived from the Japanese Wikipedia, using the Byte-Pair Encoding method and building on an initial tokenization with mecab-ipadic-NEologd. For optimal performance, please take care to set your MeCab dictionary appropriately."
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Ciruzzo/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model"
] |
text-generation | transformers | # RickBot built for [Chai](https://chai.ml/)
Make your own [here](https://colab.research.google.com/drive/1o5LxBspm-C28HQvXN-PRQavapDbm5WjG?usp=sharing)
| {"tags": ["conversational"]} | ClaudeCOULOMBE/RickBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| # RickBot built for Chai
Make your own here
| [
"# RickBot built for Chai\nMake your own here"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# RickBot built for Chai\nMake your own here"
] | [
43,
11
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# RickBot built for Chai\nMake your own here"
] |
zero-shot-classification | transformers | ETH Zeroshot | {"datasets": ["multi_nli"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "ETH", "candidate_labels": "Location & Address, Employment, Organizational, Name, Service, Studies, Science", "hypothesis_template": "This is {}."}]} | ClaudeYang/awesome_fb_model | null | [
"transformers",
"pytorch",
"bart",
"text-classification",
"zero-shot-classification",
"dataset:multi_nli",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bart #text-classification #zero-shot-classification #dataset-multi_nli #autotrain_compatible #endpoints_compatible #region-us
| ETH Zeroshot | [] | [
"TAGS\n#transformers #pytorch #bart #text-classification #zero-shot-classification #dataset-multi_nli #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
42
] | [
"TAGS\n#transformers #pytorch #bart #text-classification #zero-shot-classification #dataset-multi_nli #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | null |
# My Awesome Model
| {"tags": ["conversational"]} | ClydeWasTaken/DialoGPT-small-joshua | null | [
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#conversational #region-us
|
# My Awesome Model
| [
"# My Awesome Model"
] | [
"TAGS\n#conversational #region-us \n",
"# My Awesome Model"
] | [
8,
4
] | [
"TAGS\n#conversational #region-us \n# My Awesome Model"
] |
text-generation | transformers |
# Cartman DialoGPT Model | {"tags": ["conversational"]} | CodeDanCode/CartmenBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Cartman DialoGPT Model | [
"# Cartman DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Cartman DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Cartman DialoGPT Model"
] |
text-generation | transformers |
# SouthPark Kyle Bot
| {"tags": ["conversational"]} | CodeDanCode/SP-KyleBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# SouthPark Kyle Bot
| [
"# SouthPark Kyle Bot"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# SouthPark Kyle Bot"
] | [
39,
5
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# SouthPark Kyle Bot"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | CoderBoy432/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model"
] |
text-generation | transformers |
Chat with the model:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-marxbot")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-marxbot")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generate a response while limiting the total chat history to 200 tokens
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last output tokens from bot
print("MarxBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` | {"tags": ["conversational"]} | CoderEFE/DialoGPT-marxbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Chat with the model:
| [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] | [
43
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-classification | transformers |
# bart-faithful-summary-detector
## Model description
A BART (base) model trained to classify whether a summary is *faithful* to the original article. See our [paper in NAACL'21](https://www.seas.upenn.edu/~sihaoc/static/pdf/CZSR21.pdf) for details.
## Usage
Concatenate a summary and a source document as input (note that the summary needs to be the **first** sentence).
Here's an example usage (with PyTorch)
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CogComp/bart-faithful-summary-detector")
model = AutoModelForSequenceClassification.from_pretrained("CogComp/bart-faithful-summary-detector")
article = "Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
bad_summary = "Ban Ki-moon was elected for a second term in 2007."
good_summary = "Ban Ki-moon was elected for a second term in 2011."
bad_pair = tokenizer(text=bad_summary, text_pair=article, return_tensors='pt')
good_pair = tokenizer(text=good_summary, text_pair=article, return_tensors='pt')
bad_score = model(**bad_pair)
good_score = model(**good_pair)
print(good_score[0][:, 1] > bad_score[0][:, 1]) # True, label mapping: "0" -> "Hallucinated" "1" -> "Faithful"
```
### BibTeX entry and citation info
```bibtex
@inproceedings{CZSR21,
author = {Sihao Chen and Fan Zhang and Kazoo Sone and Dan Roth},
title = {{Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection}},
booktitle = {NAACL},
year = {2021}
}
``` | {"language": ["en"], "license": "cc-by-sa-4.0", "tags": ["text-classification", "bart", "xsum"], "datasets": ["xsum"], "thumbnail": "https://cogcomp.seas.upenn.edu/images/logo.png", "widget": [{"text": "<s> Ban Ki-moon was elected for a second term in 2007. </s></s> Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."}, {"text": "<s> Ban Ki-moon was elected for a second term in 2011. </s></s> Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."}]} | CogComp/bart-faithful-summary-detector | null | [
"transformers",
"pytorch",
"jax",
"bart",
"text-classification",
"xsum",
"en",
"dataset:xsum",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #jax #bart #text-classification #xsum #en #dataset-xsum #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# bart-faithful-summary-detector
## Model description
A BART (base) model trained to classify whether a summary is *faithful* to the original article. See our paper in NAACL'21 for details.
## Usage
Concatenate a summary and a source document as input (note that the summary needs to be the first sentence).
Here's an example usage (with PyTorch)
### BibTeX entry and citation info
| [
"# bart-faithful-summary-detector",
"## Model description\n\nA BART (base) model trained to classify whether a summary is *faithful* to the original article. See our paper in NAACL'21 for details.",
"## Usage\nConcatenate a summary and a source document as input (note that the summary needs to be the first sentence). \n\nHere's an example usage (with PyTorch)",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #jax #bart #text-classification #xsum #en #dataset-xsum #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# bart-faithful-summary-detector",
"## Model description\n\nA BART (base) model trained to classify whether a summary is *faithful* to the original article. See our paper in NAACL'21 for details.",
"## Usage\nConcatenate a summary and a source document as input (note that the summary needs to be the first sentence). \n\nHere's an example usage (with PyTorch)",
"### BibTeX entry and citation info"
] | [
57,
8,
37,
40,
10
] | [
"TAGS\n#transformers #pytorch #jax #bart #text-classification #xsum #en #dataset-xsum #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# bart-faithful-summary-detector## Model description\n\nA BART (base) model trained to classify whether a summary is *faithful* to the original article. See our paper in NAACL'21 for details.## Usage\nConcatenate a summary and a source document as input (note that the summary needs to be the first sentence). \n\nHere's an example usage (with PyTorch)### BibTeX entry and citation info"
] |
fill-mask | transformers | # roberta-temporal-predictor
A RoBERTa-base model that is fine-tuned on [The New York Times Annotated Corpus](https://catalog.ldc.upenn.edu/LDC2008T19)
to predict temporal precedence of two events. This is used as the "temporality prediction" component
in our ROCK framework for reasoning about commonsense causality. See our [paper](https://arxiv.org/abs/2202.00436) for more details.
# Usage
You can directly use this model for filling-mask tasks, as shown in the example widget.
However, for better temporal inference, it is recommended to symmetrize the outputs as
$$
P(E_1 \prec E_2) = \frac{1}{2} (f(E_1,E_2) + f(E_2,E_1))
$$
where ``f(E_1,E_2)`` denotes the predicted probability for ``E_1`` to occur preceding ``E_2``.
For simplicity, we implement the following TempPredictor class that incorporates this symmetrization automatically.
Below is an example usage for the ``TempPredictor`` class:
```python
from transformers import (RobertaForMaskedLM, RobertaTokenizer)
from src.temp_predictor import TempPredictor
TORCH_DEV = "cuda:0" # change as needed
tp_roberta_ft = TempPredictor(
model=RobertaForMaskedLM.from_pretrained("CogComp/roberta-temporal-predictor"),
tokenizer=RobertaTokenizer.from_pretrained("CogComp/roberta-temporal-predictor"),
device=TORCH_DEV
)
E1 = "The man turned on the faucet."
E2 = "Water flows out."
t12 = tp_roberta_ft(E1, E2, top_k=5)
print(f"P('{E1}' before '{E2}'): {t12}")
```
# BibTeX entry and citation info
```bib
@misc{zhang2022causal,
title={Causal Inference Principles for Reasoning about Commonsense Causality},
author={Jiayao Zhang and Hongming Zhang and Dan Roth and Weijie J. Su},
year={2022},
eprint={2202.00436},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"license": "mit", "widget": [{"text": "The man turned on the faucet <mask> water flows out."}, {"text": "The woman received her pension <mask> she retired."}]} | CogComp/roberta-temporal-predictor | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.00436",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.00436"
] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #arxiv-2202.00436 #license-mit #autotrain_compatible #endpoints_compatible #region-us
| # roberta-temporal-predictor
A RoBERTa-base model that is fine-tuned on The New York Times Annotated Corpus
to predict temporal precedence of two events. This is used as the ''temporality prediction'' component
in our ROCK framework for reasoning about commonsense causality. See our paper for more details.
# Usage
You can directly use this model for filling-mask tasks, as shown in the example widget.
However, for better temporal inference, it is recommended to symmetrize the outputs as
$$
P(E_1 \prec E_2) = \frac{1}{2} (f(E_1,E_2) + f(E_2,E_1))
$$
where ''f(E_1,E_2)'' denotes the predicted probability for ''E_1'' to occur preceding ''E_2''.
For simplicity, we implement the following TempPredictor class that incorporates this symmetrization automatically.
Below is an example usage for the ''TempPredictor'' class:
# BibTeX entry and citation info
| [
"# roberta-temporal-predictor\r\nA RoBERTa-base model that is fine-tuned on the The New York Times Annotated Corpus\r\nto predict temporal precedence of two events. This is used as the ''temporality prediction'' component\r\nin our ROCK framework for reasoning about commonsense causality. See our paper for more details.",
"# Usage\r\n\r\nYou can directly use this model for filling-mask tasks, as shown in the example widget.\r\nHowever, for better temporal inference, it is recommended to symmetrize the outputs as\r\n$$\r\nP(E_1 \\prec E_2) = \\frac{1}{2} (f(E_1,E_2) + f(E_2,E_1))\r\n$$\r\nwhere ''f(E_1,E_2)'' denotes the predicted probability for ''E_1'' to occur preceding ''E_2''.\r\nFor simplicity, we implement the following TempPredictor class that incorporate this symmetrization automatically.\r\nBelow is an example usage for the ''TempPredictor'' class:",
"# BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #arxiv-2202.00436 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-temporal-predictor\r\nA RoBERTa-base model that is fine-tuned on the The New York Times Annotated Corpus\r\nto predict temporal precedence of two events. This is used as the ''temporality prediction'' component\r\nin our ROCK framework for reasoning about commonsense causality. See our paper for more details.",
"# Usage\r\n\r\nYou can directly use this model for filling-mask tasks, as shown in the example widget.\r\nHowever, for better temporal inference, it is recommended to symmetrize the outputs as\r\n$$\r\nP(E_1 \\prec E_2) = \\frac{1}{2} (f(E_1,E_2) + f(E_2,E_1))\r\n$$\r\nwhere ''f(E_1,E_2)'' denotes the predicted probability for ''E_1'' to occur preceding ''E_2''.\r\nFor simplicity, we implement the following TempPredictor class that incorporate this symmetrization automatically.\r\nBelow is an example usage for the ''TempPredictor'' class:",
"# BibTeX entry and citation info"
] | [
43,
67,
167,
8
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #arxiv-2202.00436 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# roberta-temporal-predictor\r\nA RoBERTa-base model that is fine-tuned on the The New York Times Annotated Corpus\r\nto predict temporal precedence of two events. This is used as the ''temporality prediction'' component\r\nin our ROCK framework for reasoning about commonsense causality. See our paper for more details.# Usage\r\n\r\nYou can directly use this model for filling-mask tasks, as shown in the example widget.\r\nHowever, for better temporal inference, it is recommended to symmetrize the outputs as\r\n$$\r\nP(E_1 \\prec E_2) = \\frac{1}{2} (f(E_1,E_2) + f(E_2,E_1))\r\n$$\r\nwhere ''f(E_1,E_2)'' denotes the predicted probability for ''E_1'' to occur preceding ''E_2''.\r\nFor simplicity, we implement the following TempPredictor class that incorporate this symmetrization automatically.\r\nBelow is an example usage for the ''TempPredictor'' class:# BibTeX entry and citation info"
] |
feature-extraction | transformers | This model was taken from [this site](https://huggingface.co/gpt2-medium).
This model is used in the [Teachable NLP](https://ainize.ai/teachable-nlp) service.
| {} | ComCom/gpt2-large | null | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us
| This model was taken from this site.
This model is used in the Teachable NLP service.
| [] | [
"TAGS\n#transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n"
] | [
31
] | [
"TAGS\n#transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n"
] |
feature-extraction | transformers | This model was taken from [this site](https://huggingface.co/gpt2-medium).
This model is used in the [Teachable NLP](https://ainize.ai/teachable-nlp) service.
| {} | ComCom/gpt2-medium | null | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us
| This model was taken from this site.
This model is used in the Teachable NLP service.
| [] | [
"TAGS\n#transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n"
] | [
31
] | [
"TAGS\n#transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n"
] |
feature-extraction | transformers | This model was taken from [this site](https://huggingface.co/gpt2).
This model is used in the [Teachable NLP](https://ainize.ai/teachable-nlp) service. | {} | ComCom/gpt2 | null | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us
| This model was taken from this site.
This model is used in the Teachable NLP service. | [] | [
"TAGS\n#transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n"
] | [
31
] | [
"TAGS\n#transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# neurotitle-rugpt3-small
Model based on [ruGPT-3](https://huggingface.co/sberbank-ai) for generating scientific paper titles.
Trained on [All NeurIPS (NIPS) Papers](https://www.kaggle.com/rowhitswami/nips-papers-1987-2019-updated) dataset.
Use exclusively as a crazier alternative to SCIgen.
## Made with Cometrain AlphaML & AutoCode
This model was automatically fine-tuned using the Cometrain AlphaML framework and tested with CI/CD pipeline made by Cometrain AutoCode
## Cometrain AlphaML command
```shell
$ cometrain create --name neurotitle --model auto --task task_0x2231.txt --output transformers
```
## Use with Transformers
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model="CometrainResearch/neurotitle-rugpt3-small")
generator("BERT:", max_length=50)
```
| {"language": ["ru", "en"], "license": "mit", "tags": ["Cometrain AutoCode", "Cometrain AlphaML"], "datasets": ["All-NeurIPS-Papers-Scraper"], "widget": [{"text": "NIPSE:", "example_title": "NIPS"}, {"text": "Learning CNN", "example_title": "Learning CNN"}, {"text": "ONNX:", "example_title": "ONNX"}, {"text": "BERT:", "example_title": "BERT"}], "inference": {"parameters": {"temperature": 0.9}}} | cometrain/neurotitle-rugpt3-small | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"Cometrain AutoCode",
"Cometrain AlphaML",
"ru",
"en",
"dataset:All-NeurIPS-Papers-Scraper",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"en"
] | TAGS
#transformers #pytorch #gpt2 #text-generation #Cometrain AutoCode #Cometrain AlphaML #ru #en #dataset-All-NeurIPS-Papers-Scraper #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# neurotitle-rugpt3-small
Model based on ruGPT-3 for generating scientific paper titles.
Trained on All NeurIPS (NIPS) Papers dataset.
Use exclusively as a crazier alternative to SCIgen.
## Made with Cometrain AlphaML & AutoCode
This model was automatically fine-tuned using the Cometrain AlphaML framework and tested with CI/CD pipeline made by Cometrain AutoCode
## Cometrain AlphaML command
## Use with Transformers
| [
"# neurotitle-rugpt3-small\nModel based on ruGPT-3 for generating scientific paper titles.\nTrained on All NeurIPS (NIPS) Papers dataset.\nUse exclusively as a crazier alternative to SCIgen.",
"## Made with Cometrain AlphaML & AutoCode\nThis model was automatically fine-tuned using the Cometrain AlphaML framework and tested with CI/CD pipeline made by Cometrain AutoCode",
"## Cometrain AlphaML command",
"## Use with Transformers"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #Cometrain AutoCode #Cometrain AlphaML #ru #en #dataset-All-NeurIPS-Papers-Scraper #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# neurotitle-rugpt3-small\nModel based on ruGPT-3 for generating scientific paper titles.\nTrained on All NeurIPS (NIPS) Papers dataset.\nUse exclusively as a crazier alternative to SCIgen.",
"## Made with Cometrain AlphaML & AutoCode\nThis model was automatically fine-tuned using the Cometrain AlphaML framework and tested with CI/CD pipeline made by Cometrain AutoCode",
"## Cometrain AlphaML command",
"## Use with Transformers"
] | [
68,
50,
38,
7,
5
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #Cometrain AutoCode #Cometrain AlphaML #ru #en #dataset-All-NeurIPS-Papers-Scraper #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# neurotitle-rugpt3-small\nModel based on ruGPT-3 for generating scientific paper titles.\nTrained on All NeurIPS (NIPS) Papers dataset.\nUse exclusively as a crazier alternative to SCIgen.## Made with Cometrain AlphaML & AutoCode\nThis model was automatically fine-tuned using the Cometrain AlphaML framework and tested with CI/CD pipeline made by Cometrain AutoCode## Cometrain AlphaML command## Use with Transformers"
] |
text-generation | transformers |
# Rick DialoGPT Model | {"tags": ["conversational"]} | Connor/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick DialoGPT Model | [
"# Rick DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick DialoGPT Model"
] | [
39,
6
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Rick DialoGPT Model"
] |
text-generation | transformers |
#enlightened GPT model | {"tags": ["conversational"]} | Connorvr/BrightBot-small | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#enlightened GPT model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
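
For reference, the sketch below shows roughly how the hyperparameters listed above could be expressed with the `transformers` Trainer API. It is not taken from the original training script; the output directory and every setting not listed in this card are assumptions.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters above; "./model" and any
# value not listed in this card are assumptions.
training_args = TrainingArguments(
    output_dir="./model",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```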
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "model", "results": []}]} | Connorvr/TeachingGen | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# model
This model is a fine-tuned version of gpt2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
| [
"# model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.18.0.dev0\n- Pytorch 1.6.0\n- Datasets 2.0.0\n- Tokenizers 0.11.6"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.18.0.dev0\n- Pytorch 1.6.0\n- Datasets 2.0.0\n- Tokenizers 0.11.6"
] | [
46,
20,
7,
9,
9,
4,
95,
5,
43
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.18.0.dev0\n- Pytorch 1.6.0\n- Datasets 2.0.0\n- Tokenizers 0.11.6"
] |
null | null |
@inproceedings{wan2020bringing,
title={Bringing Old Photos Back to Life},
author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={2747--2757},
year={2020}
}
@article{wan2020old,
title={Old Photo Restoration via Deep Latent Space Translation},
author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
journal={arXiv preprint arXiv:2009.07047},
year={2020}
}
| {"language": ["en"], "license": "mit", "tags": ["image_restoration", "superresolution"], "thumbnail": "https://github.com/Nick-Harvey/for_my_abuela/blob/master/cuban_large.jpg"} | Coolhand/Abuela | null | [
"image_restoration",
"superresolution",
"en",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#image_restoration #superresolution #en #license-mit #region-us
|
@inproceedings{wan2020bringing,
title={Bringing Old Photos Back to Life},
author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={2747--2757},
year={2020}
}
@article{wan2020old,
title={Old Photo Restoration via Deep Latent Space Translation},
author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
journal={arXiv preprint arXiv:2009.07047},
year={2020}
}
| [] | [
"TAGS\n#image_restoration #superresolution #en #license-mit #region-us \n"
] | [
20
] | [
"TAGS\n#image_restoration #superresolution #en #license-mit #region-us \n"
] |
text-generation | transformers |
# Atakan DialoGPT Model | {"tags": ["conversational"]} | CopymySkill/DialoGPT-medium-atakan | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Atakan DialoGPT Model | [
"# Atakan DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Atakan DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Atakan DialoGPT Model"
] |
text-generation | transformers |
# DialoGPT Captain Price (Extended) | {"tags": ["conversational"]} | Corvus/DialoGPT-medium-CaptainPrice-Extended | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# DialoGPT Captain Price (Extended) | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Captain Price DialoGPT Model | {"tags": ["conversational"]} | Corvus/DialoGPT-medium-CaptainPrice | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Captain Price DialoGPT Model | [
"# Captain Price DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Captain Price DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Captain Price DialoGPT Model"
] |
text-classification | transformers |
### Description
A Multi-label text classification model trained on customer feedback data using DistilBert.
Possible labels are:
- Delivery (delivery status, time of arrival, etc.)
- Return (return confirmation, return label requests, etc.)
- Product (quality, complaint, etc.)
- Monetary (pending transactions, refund, etc.)
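
A minimal inference sketch, not part of the original card, showing how scores for these labels might be read off once the model is loaded as in the Usage section below; the sigmoid scoring is the standard multi-label setup and the 0.5 threshold is an illustrative assumption.
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_mlc_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_mlc_v7_distil")

text = "I would like to return these pants and shoes"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0]

# Multi-label: score each label independently with a sigmoid (0.5 threshold is an assumption)
scores = torch.sigmoid(logits)
predicted = [(model.config.id2label[i], float(s)) for i, s in enumerate(scores) if s > 0.5]
print(predicted)
```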
### Usage
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_mlc_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_mlc_v7_distil")
``` | {"language": "en", "license": "mit", "tags": ["multi-label"], "widget": [{"text": "I would like to return these pants and shoes"}]} | CouchCat/ma_mlc_v7_distil | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"multi-label",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #distilbert #text-classification #multi-label #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
### Description
A Multi-label text classification model trained on customer feedback data using DistilBert.
Possible labels are:
- Delivery (delivery status, time of arrival, etc.)
- Return (return confirmation, return label requests, etc.)
- Product (quality, complaint, etc.)
- Monetary (pending transactions, refund, etc.)
### Usage
| [
"### Description\nA Multi-label text classification model trained on a customer feedback data using DistilBert.\nPossible labels are:\n- Delivery (delivery status, time of arrival, etc.)\n- Return (return confirmation, return label requests, etc.)\n- Product (quality, complaint, etc.)\n- Monetary (pending transactions, refund, etc.)",
"### Usage"
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #multi-label #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Description\nA Multi-label text classification model trained on a customer feedback data using DistilBert.\nPossible labels are:\n- Delivery (delivery status, time of arrival, etc.)\n- Return (return confirmation, return label requests, etc.)\n- Product (quality, complaint, etc.)\n- Monetary (pending transactions, refund, etc.)",
"### Usage"
] | [
40,
74,
4
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #multi-label #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Description\nA Multi-label text classification model trained on a customer feedback data using DistilBert.\nPossible labels are:\n- Delivery (delivery status, time of arrival, etc.)\n- Return (return confirmation, return label requests, etc.)\n- Product (quality, complaint, etc.)\n- Monetary (pending transactions, refund, etc.)### Usage"
] |
token-classification | transformers |
### Description
A Named Entity Recognition model trained on customer feedback data using DistilBert.
Possible labels are:
- PRD: for certain products
- BRND: for brands
### Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_ner_v6_distil")
model = AutoModelForTokenClassification.from_pretrained("CouchCat/ma_ner_v6_distil")
``` | {"language": "en", "license": "mit", "tags": ["ner"], "widget": [{"text": "These shoes from Adidas fit quite well"}]} | CouchCat/ma_ner_v6_distil | null | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"ner",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #distilbert #token-classification #ner #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
### Description
A Named Entity Recognition model trained on customer feedback data using DistilBert.
Possible labels are:
- PRD: for certain products
- BRND: for brands
### Usage
| [
"### Description\nA Named Entity Recognition model trained on a customer feedback data using DistilBert.\nPossible labels are:\n- PRD: for certain products\n- BRND: for brands",
"### Usage"
] | [
"TAGS\n#transformers #pytorch #distilbert #token-classification #ner #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Description\nA Named Entity Recognition model trained on a customer feedback data using DistilBert.\nPossible labels are:\n- PRD: for certain products\n- BRND: for brands",
"### Usage"
] | [
39,
37,
4
] | [
"TAGS\n#transformers #pytorch #distilbert #token-classification #ner #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Description\nA Named Entity Recognition model trained on a customer feedback data using DistilBert.\nPossible labels are:\n- PRD: for certain products\n- BRND: for brands### Usage"
] |
token-classification | transformers |
### Description
A Named Entity Recognition model trained on customer feedback data using DistilBert.
Possible labels are in BIO-notation. Performance of the PERS tag could be better because of low data samples:
- PROD: for certain products
- BRND: for brands
- PERS: people names
The following tags are simply in place to help better categorize the previous tags
- MATR: relating to materials, e.g. cloth, leather, seam, etc.
- TIME: time related entities
- MISC: any other entity that might skew the results
### Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_ner_v7_distil")
model = AutoModelForTokenClassification.from_pretrained("CouchCat/ma_ner_v7_distil")
```
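Once the model and tokenizer are loaded as above, the token-classification pipeline can group sub-word pieces back into entity spans. The snippet below is a minimal sketch that is not part of the original card; the example sentence mirrors the widget text, the exact label strings depend on the model's configuration, and `aggregation_strategy` requires a reasonably recent `transformers` release.
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_ner_v7_distil")
model = AutoModelForTokenClassification.from_pretrained("CouchCat/ma_ner_v7_distil")

# Group sub-word tokens into complete entity spans
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("These shoes I recently bought from Tommy Hilfiger fit quite well"))
# Output format: a list of dicts with entity_group, score, word, start, end
```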
| {"language": "en", "license": "mit", "tags": ["ner"], "widget": [{"text": "These shoes I recently bought from Tommy Hilfiger fit quite well. The shirt, however, has got a hole"}]} | CouchCat/ma_ner_v7_distil | null | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"ner",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #distilbert #token-classification #ner #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
### Description
A Named Entity Recognition model trained on customer feedback data using DistilBert.
Possible labels are in BIO-notation. Performance of the PERS tag could be better because of low data samples:
- PROD: for certain products
- BRND: for brands
- PERS: people names
The following tags are simply in place to help better categorize the previous tags
- MATR: relating to materials, e.g. cloth, leather, seam, etc.
- TIME: time related entities
- MISC: any other entity that might skew the results
### Usage
| [
"### Description\n\nA Named Entity Recognition model trained on a customer feedback data using DistilBert.\nPossible labels are in BIO-notation. Performance of the PERS tag could be better because of low data samples:\n\n- PROD: for certain products\n- BRND: for brands\n- PERS: people names\n\nThe following tags are simply in place to help better categorize the previous tags\n\n- MATR: relating to materials, e.g. cloth, leather, seam, etc.\n- TIME: time related entities\n- MISC: any other entity that might skew the results",
"### Usage"
] | [
"TAGS\n#transformers #pytorch #distilbert #token-classification #ner #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Description\n\nA Named Entity Recognition model trained on a customer feedback data using DistilBert.\nPossible labels are in BIO-notation. Performance of the PERS tag could be better because of low data samples:\n\n- PROD: for certain products\n- BRND: for brands\n- PERS: people names\n\nThe following tags are simply in place to help better categorize the previous tags\n\n- MATR: relating to materials, e.g. cloth, leather, seam, etc.\n- TIME: time related entities\n- MISC: any other entity that might skew the results",
"### Usage"
] | [
39,
117,
4
] | [
"TAGS\n#transformers #pytorch #distilbert #token-classification #ner #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Description\n\nA Named Entity Recognition model trained on a customer feedback data using DistilBert.\nPossible labels are in BIO-notation. Performance of the PERS tag could be better because of low data samples:\n\n- PROD: for certain products\n- BRND: for brands\n- PERS: people names\n\nThe following tags are simply in place to help better categorize the previous tags\n\n- MATR: relating to materials, e.g. cloth, leather, seam, etc.\n- TIME: time related entities\n- MISC: any other entity that might skew the results### Usage"
] |
text-classification | transformers |
### Description
A Sentiment Analysis model trained on customer feedback data using DistilBert.
Possible sentiments are:
* negative
* neutral
* positive
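
A minimal end-to-end sketch, not part of the original card, showing how these sentiments might be predicted with the text-classification pipeline in addition to the loading snippet in the Usage section below; the example sentence mirrors the widget text and the exact label strings depend on the model's configuration.
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_sa_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_sa_v7_distil")

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("I am disappointed in the terrible quality of my dress"))
# e.g. a list with one dict containing 'label' and 'score'
```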
### Usage
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_sa_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_sa_v7_distil")
``` | {"language": "en", "license": "mit", "tags": ["sentiment-analysis"], "widget": [{"text": "I am disappointed in the terrible quality of my dress"}]} | CouchCat/ma_sa_v7_distil | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"sentiment-analysis",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #distilbert #text-classification #sentiment-analysis #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
### Description
A Sentiment Analysis model trained on customer feedback data using DistilBert.
Possible sentiments are:
* negative
* neutral
* positive
### Usage
| [
"### Description\nA Sentiment Analysis model trained on customer feedback data using DistilBert.\nPossible sentiments are:\n* negative\n* neutral\n* positive",
"### Usage"
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #sentiment-analysis #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Description\nA Sentiment Analysis model trained on customer feedback data using DistilBert.\nPossible sentiments are:\n* negative\n* neutral\n* positive",
"### Usage"
] | [
40,
28,
4
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #sentiment-analysis #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Description\nA Sentiment Analysis model trained on customer feedback data using DistilBert.\nPossible sentiments are:\n* negative\n* neutral\n* positive### Usage"
] |
text-generation | null |
Arthur Morgan DialoGPT Model | {"tags": ["conversational"]} | Coyotl/DialoGPT-test-last-arthurmorgan | null | [
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#conversational #region-us
|
Arthur Morgan DialoGPT Model | [] | [
"TAGS\n#conversational #region-us \n"
] | [
8
] | [
"TAGS\n#conversational #region-us \n"
] |
text-generation | transformers |
# Arthur Morgan DialoGPT Model | {"tags": ["conversational"]} | Coyotl/DialoGPT-test2-arthurmorgan | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Arthur Morgan DialoGPT Model | [
"# Arthur Morgan DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Arthur Morgan DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Arthur Morgan DialoGPT Model"
] |
text-generation | null |
# DialoGPT Arthur Morgan | {"tags": ["conversational"]} | Coyotl/DialoGPT-test3-arthurmorgan | null | [
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#conversational #region-us
|
# DialoGPT Arthur Morgan | [
"# DialoGPT Arthur Morgan"
] | [
"TAGS\n#conversational #region-us \n",
"# DialoGPT Arthur Morgan"
] | [
8,
6
] | [
"TAGS\n#conversational #region-us \n# DialoGPT Arthur Morgan"
] |
text-generation | transformers | @Piglin Talks Harry Potter | {"tags": ["conversational"]} | CracklesCreeper/Piglin-Talks-Harry-Potter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| @Piglin Talks Harry Potter | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
feature-extraction | sentence-transformers |
# A model.
| {"license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "feature-extraction"} | Craig/mGqFiPhu | null | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #feature-extraction #sentence-similarity #transformers #license-apache-2.0 #endpoints_compatible #region-us
|
# A model.
| [
"# A model."
] | [
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #transformers #license-apache-2.0 #endpoints_compatible #region-us \n",
"# A model."
] | [
32,
4
] | [
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #transformers #license-apache-2.0 #endpoints_compatible #region-us \n# A model."
] |
feature-extraction | sentence-transformers |
# sentence-transformers/paraphrase-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This is a clone of the original model, with `pipeline_tag` metadata changed to `feature-extraction`, so it can just return the embedded vector. Otherwise it is unchanged.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L6-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | {"license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "feature-extraction"} | Craig/paraphrase-MiniLM-L6-v2 | null | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1908.10084"
] | [] | TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #arxiv-1908.10084 #license-apache-2.0 #endpoints_compatible #region-us
|
# sentence-transformers/paraphrase-MiniLM-L6-v2
This is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This is a clone of the original model, with 'pipeline_tag' metadata changed to 'feature-extraction', so it can just return the embedded vector. Otherwise it is unchanged.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Full Model Architecture
## Citing & Authors
This model was trained by sentence-transformers.
If you find this model helpful, feel free to cite our publication Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks:
| [
"# sentence-transformers/paraphrase-MiniLM-L6-v2\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.\n\nThis is a clone of the original model, with 'pipeline_tag' metadata changed to 'feature-extraction', so it can just return the embedded vector. Otherwise it is unchanged.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors\n\nThis model was trained by sentence-transformers. \n \nIf you find this model helpful, feel free to cite our publication Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks:"
] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #arxiv-1908.10084 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# sentence-transformers/paraphrase-MiniLM-L6-v2\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.\n\nThis is a clone of the original model, with 'pipeline_tag' metadata changed to 'feature-extraction', so it can just return the embedded vector. Otherwise it is unchanged.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors\n\nThis model was trained by sentence-transformers. \n \nIf you find this model helpful, feel free to cite our publication Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks:"
] | [
49,
90,
30,
58,
26,
5,
43
] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #arxiv-1908.10084 #license-apache-2.0 #endpoints_compatible #region-us \n# sentence-transformers/paraphrase-MiniLM-L6-v2\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.\n\nThis is a clone of the original model, with 'pipeline_tag' metadata changed to 'feature-extraction', so it can just return the embedded vector. Otherwise it is unchanged.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Full Model Architecture## Citing & Authors\n\nThis model was trained by sentence-transformers. \n \nIf you find this model helpful, feel free to cite our publication Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks:"
] |
text-classification | transformers |
# Model Finetuned from BERT-base
- Problem type: Multi-class Classification
- Model ID: 25805800
## Validation Metrics
- Loss: 0.4422711133956909
- Accuracy: 0.8615328555811976
- Macro F1: 0.8642434650461513
- Micro F1: 0.8615328555811976
- Weighted F1: 0.8617743626671308
- Macro Precision: 0.8649112225076049
- Micro Precision: 0.8615328555811976
- Weighted Precision: 0.8625407179375096
- Macro Recall: 0.8640777539828228
- Micro Recall: 0.8615328555811976
- Weighted Recall: 0.8615328555811976
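The macro, micro, and weighted variants above differ only in how the per-class scores are averaged. A small illustrative sketch with scikit-learn, using made-up labels rather than the actual validation data:
```python
from sklearn.metrics import f1_score

# Hypothetical gold labels and predictions for a 3-class problem
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 0]

for average in ("macro", "micro", "weighted"):
    print(average, f1_score(y_true, y_pred, average=average))
```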
## Usage
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classifier and its tokenizer from the Hub
model = AutoModelForSequenceClassification.from_pretrained("Crasher222/kaggle-comp-test")
tokenizer = AutoTokenizer.from_pretrained("Crasher222/kaggle-comp-test")

# Tokenize the input text and run a forward pass
inputs = tokenizer("I am in love with you", return_tensors="pt")
outputs = model(**inputs)
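
# The raw outputs are logits; the most likely class can be read off with argmax
# (assuming the fine-tuned config carries an id2label mapping, as AutoNLP exports usually do)
predicted_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])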
``` | {"language": "en", "tags": "autonlp", "datasets": ["Crasher222/autonlp-data-kaggle-test"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 60.744727079482495} | Crasher222/kaggle-comp-test | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:Crasher222/autonlp-data-kaggle-test",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Crasher222/autonlp-data-kaggle-test #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Finetuned from BERT-base for
- Problem type: Multi-class Classification
- Model ID: 25805800
## Validation Metrics
- Loss: 0.4422711133956909
- Accuracy: 0.8615328555811976
- Macro F1: 0.8642434650461513
- Micro F1: 0.8615328555811976
- Weighted F1: 0.8617743626671308
- Macro Precision: 0.8649112225076049
- Micro Precision: 0.8615328555811976
- Weighted Precision: 0.8625407179375096
- Macro Recall: 0.8640777539828228
- Micro Recall: 0.8615328555811976
- Weighted Recall: 0.8615328555811976
## Usage
| [
"# Model Finetuned from BERT-base for\n\n- Problem type: Multi-class Classification\n- Model ID: 25805800",
"## Validation Metrics\n\n- Loss: 0.4422711133956909\n- Accuracy: 0.8615328555811976\n- Macro F1: 0.8642434650461513\n- Micro F1: 0.8615328555811976\n- Weighted F1: 0.8617743626671308\n- Macro Precision: 0.8649112225076049\n- Micro Precision: 0.8615328555811976\n- Weighted Precision: 0.8625407179375096\n- Macro Recall: 0.8640777539828228\n- Micro Recall: 0.8615328555811976\n- Weighted Recall: 0.8615328555811976",
"## Usage"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Crasher222/autonlp-data-kaggle-test #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Finetuned from BERT-base for\n\n- Problem type: Multi-class Classification\n- Model ID: 25805800",
"## Validation Metrics\n\n- Loss: 0.4422711133956909\n- Accuracy: 0.8615328555811976\n- Macro F1: 0.8642434650461513\n- Micro F1: 0.8615328555811976\n- Weighted F1: 0.8617743626671308\n- Macro Precision: 0.8649112225076049\n- Micro Precision: 0.8615328555811976\n- Weighted Precision: 0.8625407179375096\n- Macro Recall: 0.8640777539828228\n- Micro Recall: 0.8615328555811976\n- Weighted Recall: 0.8615328555811976",
"## Usage"
] | [
61,
26,
176,
3
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Crasher222/autonlp-data-kaggle-test #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n# Model Finetuned from BERT-base for\n\n- Problem type: Multi-class Classification\n- Model ID: 25805800## Validation Metrics\n\n- Loss: 0.4422711133956909\n- Accuracy: 0.8615328555811976\n- Macro F1: 0.8642434650461513\n- Micro F1: 0.8615328555811976\n- Weighted F1: 0.8617743626671308\n- Macro Precision: 0.8649112225076049\n- Micro Precision: 0.8615328555811976\n- Weighted Precision: 0.8625407179375096\n- Macro Recall: 0.8640777539828228\n- Micro Recall: 0.8615328555811976\n- Weighted Recall: 0.8615328555811976## Usage"
] |
text-generation | transformers | hello
| {} | CrisLeaf/generador-de-historias-de-tolkien | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| hello
| [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
36
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |