| Column | Type | Lengths / values |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | |
| tags | sequencelengths | 1–1.84k |
| sha | null | |
| created_at | stringlengths | 25–25 |
| arxiv | sequencelengths | 0–201 |
| languages | sequencelengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | sequencelengths | 0–722 |
| processed_texts | sequencelengths | 1–723 |
| tokens_length | sequencelengths | 1–723 |
| input_texts | sequencelengths | 1–1 |
token-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-conll2003_pos` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [pos/conll2003](https://adapterhub.ml/explore/pos/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-conll2003_pos", source="hf")
model.active_adapters = adapter_name
```
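Once the adapter is active, a minimal inference sketch might look as follows (the example sentence is illustrative and not part of the original card; mapping predicted indices back to POS tag names depends on the label map stored with the head):
```python
import torch
from transformers import AutoTokenizer, AutoModelWithHeads

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-conll2003_pos", source="hf")
model.active_adapters = adapter_name

# Illustrative input sentence (not from the card)
inputs = tokenizer("Berlin is the capital of Germany", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_tags)
predicted_ids = logits.argmax(dim=-1)[0].tolist()
print(predicted_ids)  # convert to tag names via the head's id2label mapping, if available
```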
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:pos/conll2003", "adapter-transformers", "token-classification"], "datasets": ["conll2003"]} | AdapterHub/roberta-base-pf-conll2003_pos | null | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:pos/conll2003",
"en",
"dataset:conll2003",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-conll2003_pos' for roberta-base
An adapter for the 'roberta-base' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-conll2003_pos' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-conll2003_pos' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
50,
77,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-conll2003_pos' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-copa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/copa](https://adapterhub.ml/explore/comsense/copa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-copa", source="hf")
model.active_adapters = adapter_name
```
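As a rough sketch of how the multiple-choice head could be queried (the premise/alternative pairing below is illustrative and may not exactly match the input format used during training):
```python
import torch
from transformers import AutoTokenizer, AutoModelWithHeads

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-copa", source="hf")
model.active_adapters = adapter_name

# Illustrative COPA-style example: pick the more plausible alternative
premise = "The man broke his toe."
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]
encoded = tokenizer([premise] * len(choices), choices, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**encoded).logits
# Assumption: the 2-choice head yields one score per alternative; take the highest
print(choices[logits.view(-1).argmax().item()])
```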
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["roberta", "adapterhub:comsense/copa", "adapter-transformers"]} | AdapterHub/roberta-base-pf-copa | null | [
"adapter-transformers",
"roberta",
"adapterhub:comsense/copa",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-copa' for roberta-base
An adapter for the 'roberta-base' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-copa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-copa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
36,
69,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-copa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-cosmos_qa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/cosmosqa](https://adapterhub.ml/explore/comsense/cosmosqa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-cosmos_qa", source="hf")
model.active_adapters = adapter_name
```
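A hedged sketch for CosmosQA-style inference, pairing the context and question with each answer candidate (the example and the exact input formatting are assumptions, not taken from the card):
```python
import torch
from transformers import AutoTokenizer, AutoModelWithHeads

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-cosmos_qa", source="hf")
model.active_adapters = adapter_name

# Illustrative example: score each answer candidate against context + question
context = "Maria stayed up all night finishing her project before the deadline."
question = "Why might Maria be tired today?"
answers = [
    "She slept for twelve hours.",
    "She worked through the night on her project.",
    "She went for a short walk.",
    "None of the above choices.",
]
firsts = [f"{context} {question}"] * len(answers)
encoded = tokenizer(firsts, answers, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    logits = model(**encoded).logits
# Assumption: the 4-choice head yields one score per candidate; take the highest
print(answers[logits.view(-1).argmax().item()])
```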
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["roberta", "adapterhub:comsense/cosmosqa", "adapter-transformers"], "datasets": ["cosmos_qa"]} | AdapterHub/roberta-base-pf-cosmos_qa | null | [
"adapter-transformers",
"roberta",
"adapterhub:comsense/cosmosqa",
"en",
"dataset:cosmos_qa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-cosmos_qa' for roberta-base
An adapter for the 'roberta-base' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-cosmos_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-cosmos_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
45,
73,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-cosmos_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
question-answering | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-cq` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/cq](https://adapterhub.ml/explore/qa/cq/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-cq", source="hf")
model.active_adapters = adapter_name
```
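A minimal extractive question-answering sketch (the question/context pair is illustrative only; it assumes the head returns standard start/end logits):
```python
import torch
from transformers import AutoTokenizer, AutoModelWithHeads

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-cq", source="hf")
model.active_adapters = adapter_name

question = "Who wrote the novel?"
context = "The novel was written by Jane Austen and first published in 1813."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Pick the most likely answer span from the start/end logits
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```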
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["question-answering", "roberta", "adapterhub:qa/cq", "adapter-transformers"]} | AdapterHub/roberta-base-pf-cq | null | [
"adapter-transformers",
"roberta",
"question-answering",
"adapterhub:qa/cq",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-cq' for roberta-base
An adapter for the 'roberta-base' model that was trained on the qa/cq dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-cq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/cq dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-cq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/cq dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
40,
70,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-cq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/cq dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
question-answering | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-drop` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [drop](https://huggingface.co/datasets/drop/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-drop", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["drop"]} | AdapterHub/roberta-base-pf-drop | null | [
"adapter-transformers",
"roberta",
"question-answering",
"en",
"dataset:drop",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-drop' for roberta-base
An adapter for the 'roberta-base' model that was trained on the drop dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-drop' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the drop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-drop' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the drop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
34,
65,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-drop' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the drop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
question-answering | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-duorc_p` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-duorc_p", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["duorc"]} | AdapterHub/roberta-base-pf-duorc_p | null | [
"adapter-transformers",
"roberta",
"question-answering",
"en",
"dataset:duorc",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-duorc_p' for roberta-base
An adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-duorc_p' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-duorc_p' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
35,
69,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-duorc_p' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
question-answering | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-duorc_s` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-duorc_s", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["duorc"]} | AdapterHub/roberta-base-pf-duorc_s | null | [
"adapter-transformers",
"roberta",
"question-answering",
"en",
"dataset:duorc",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-duorc_s' for roberta-base
An adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-duorc_s' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-duorc_s' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
35,
69,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-duorc_s' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-emo` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [emo](https://huggingface.co/datasets/emo/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-emo", source="hf")
model.active_adapters = adapter_name
```
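A minimal classification sketch (the input sentence is illustrative; recovering the emotion label name from the predicted index depends on the label map stored with the head):
```python
import torch
from transformers import AutoTokenizer, AutoModelWithHeads

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-emo", source="hf")
model.active_adapters = adapter_name

# Illustrative input sentence (not from the card)
inputs = tokenizer("I just can't stop smiling today!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_classes)
print(logits.argmax(dim=-1).item())  # index of the predicted emotion class
```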
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["emo"]} | AdapterHub/roberta-base-pf-emo | null | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:emo",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-emo' for roberta-base
An adapter for the 'roberta-base' model that was trained on the emo dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-emo' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the emo dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-emo' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the emo dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
35,
66,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-emo' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the emo dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-emotion` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [emotion](https://huggingface.co/datasets/emotion/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-emotion", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["emotion"]} | AdapterHub/roberta-base-pf-emotion | null | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:emotion",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-emotion' for roberta-base
An adapter for the 'roberta-base' model that was trained on the emotion dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-emotion' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the emotion dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-emotion' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the emotion dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
34,
64,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-emotion' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the emotion dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
token-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-fce_error_detection` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ged/fce](https://adapterhub.ml/explore/ged/fce/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-fce_error_detection", source="hf")
model.active_adapters = adapter_name
```
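To get per-token error predictions from the loaded tagging head, a minimal inference sketch could look like the following. It continues from the snippet above (which defines `model`), the example sentence is invented, and the exact output class returned by the forward pass may vary slightly across adapter-transformers versions; the mapping from tag ids to the correct/incorrect labels comes from the head's label map.

```python
import torch
from transformers import AutoTokenizer

# Assumes `model` from the loading snippet above, with the adapter active.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
sentence = "She go to school every days ."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Per-subword scores over the tag set; argmax picks the predicted tag id.
pred_ids = outputs.logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, tag_id in zip(tokens, pred_ids):
    print(token, tag_id)
```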
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:ged/fce", "adapter-transformers"], "datasets": ["fce_error_detection"]} | AdapterHub/roberta-base-pf-fce_error_detection | null | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:ged/fce",
"en",
"dataset:fce_error_detection",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-fce_error_detection' for roberta-base
An adapter for the 'roberta-base' model that was trained on the ged/fce dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-fce_error_detection' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ged/fce dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-fce_error_detection' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ged/fce dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
50,
74,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-fce_error_detection' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ged/fce dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-hellaswag` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/hellaswag](https://adapterhub.ml/explore/comsense/hellaswag/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-hellaswag", source="hf")
model.active_adapters = adapter_name
```
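For inference, the candidate endings need to be batched in the `(batch_size, num_choices, seq_len)` layout that multiple-choice models expect; the sketch below assumes the flex head accepts this layout and reshapes internally. The context and endings are invented, and the exact output class of the forward pass may vary across adapter-transformers versions.

```python
import torch
from transformers import AutoTokenizer

# Assumes `model` from the loading snippet above, with the adapter active.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
context = "A man is standing on a ladder next to a house. He"
endings = [
    "starts scraping old paint off the wall.",
    "is swimming across a lake.",
    "plays the piano on the roof.",
    "reads a newspaper underwater.",
]
# Encode each (context, ending) pair, then add a batch dimension:
# shapes become (1, num_choices, seq_len).
enc = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
batch = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**batch).logits  # one score per candidate ending
print("predicted ending:", endings[int(logits.argmax(dim=-1))])
```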
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` | {"language": ["en"], "tags": ["roberta", "adapterhub:comsense/hellaswag", "adapter-transformers"], "datasets": ["hellaswag"]} | AdapterHub/roberta-base-pf-hellaswag | null | [
"adapter-transformers",
"roberta",
"adapterhub:comsense/hellaswag",
"en",
"dataset:hellaswag",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-hellaswag' for roberta-base
An adapter for the 'roberta-base' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-hellaswag' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-hellaswag' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
47,
75,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-hellaswag' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
question-answering | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-hotpotqa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [hotpot_qa](https://huggingface.co/datasets/hotpot_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-hotpotqa", source="hf")
model.active_adapters = adapter_name
```
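For extractive question answering, the question and a context passage are encoded as a pair and the answer span is decoded between the predicted start and end positions. The question and context below are invented, and the `start_logits`/`end_logits` attribute names assume the usual question-answering output layout, which may differ slightly across adapter-transformers versions.

```python
import torch
from transformers import AutoTokenizer

# Assumes `model` from the loading snippet above, with the adapter active.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
question = "Where was Marie Curie born?"
context = "Marie Curie was born in Warsaw and later moved to Paris to study physics."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = int(outputs.start_logits.argmax(dim=-1))
end = int(outputs.end_logits.argmax(dim=-1))
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```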
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["hotpot_qa"]} | AdapterHub/roberta-base-pf-hotpotqa | null | [
"adapter-transformers",
"roberta",
"question-answering",
"en",
"dataset:hotpot_qa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-hotpotqa' for roberta-base
An adapter for the 'roberta-base' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-hotpotqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-hotpotqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
38,
71,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-hotpotqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-imdb` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/imdb](https://adapterhub.ml/explore/sentiment/imdb/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf")
model.active_adapters = adapter_name
```
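With the adapter active, sentiment prediction is a plain forward pass over the tokenized review. The review below is invented, and the mapping from the predicted class id to negative/positive is defined by the head's label map rather than hard-coded here; the exact output class may vary across adapter-transformers versions.

```python
import torch
from transformers import AutoTokenizer

# Assumes `model` from the loading snippet above, with the adapter active.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
review = "A surprisingly touching film with terrific performances."
inputs = tokenizer(review, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_classes)
print("predicted class id:", int(logits.argmax(dim=-1)))
```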
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sentiment/imdb", "adapter-transformers"], "datasets": ["imdb"]} | AdapterHub/roberta-base-pf-imdb | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:sentiment/imdb",
"en",
"dataset:imdb",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-imdb' for roberta-base
An adapter for the 'roberta-base' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-imdb' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-imdb' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
45,
68,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-imdb' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
token-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-mit_movie_trivia` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ner/mit_movie_trivia](https://adapterhub.ml/explore/ner/mit_movie_trivia/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-mit_movie_trivia", source="hf")
model.active_adapters = adapter_name
```
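To extract movie-domain entities from a query, run a forward pass and take the per-token argmax over the tag set. The query below is invented, and the mapping from tag ids to entity labels comes from the head's label map; the exact output class may vary across adapter-transformers versions.

```python
import torch
from transformers import AutoTokenizer

# Assumes `model` from the loading snippet above, with the adapter active.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
query = "show me a film about dinosaurs directed by steven spielberg"
inputs = tokenizer(query, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # per-token scores over the entity tag set
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, pred_ids)))
```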
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:ner/mit_movie_trivia", "adapter-transformers"]} | AdapterHub/roberta-base-pf-mit_movie_trivia | null | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:ner/mit_movie_trivia",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-mit_movie_trivia' for roberta-base
An adapter for the 'roberta-base' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-mit_movie_trivia' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-mit_movie_trivia' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
44,
78,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-mit_movie_trivia' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-mnli` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/multinli](https://adapterhub.ml/explore/nli/multinli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-mnli", source="hf")
model.active_adapters = adapter_name
```
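For natural language inference, premise and hypothesis are encoded as a sentence pair. The sentences below are invented, and which of the three class ids corresponds to entailment, neutral or contradiction is given by the head's label map, not assumed here; the exact output class may vary across adapter-transformers versions.

```python
import torch
from transformers import AutoTokenizer

# Assumes `model` from the loading snippet above, with the adapter active.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 3)
print("predicted label id:", int(logits.argmax(dim=-1)))
```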
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:nli/multinli", "adapter-transformers"], "datasets": ["multi_nli"]} | AdapterHub/roberta-base-pf-mnli | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:nli/multinli",
"en",
"dataset:multi_nli",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-mnli' for roberta-base
An adapter for the 'roberta-base' model that was trained on the nli/multinli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-mnli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/multinli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-mnli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/multinli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
49,
70,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-mnli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/multinli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-mrpc` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/mrpc](https://adapterhub.ml/explore/sts/mrpc/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-mrpc", source="hf")
model.active_adapters = adapter_name
```
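Paraphrase detection is a sentence-pair classification: encode both sentences together and take the argmax over the two classes. The pair below is invented, and which index means "paraphrase" is defined by the head's label map; the exact output class may vary across adapter-transformers versions.

```python
import torch
from transformers import AutoTokenizer

# Assumes `model` from the loading snippet above, with the adapter active.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
sentence1 = "The company reported higher profits than expected."
sentence2 = "Profits at the company were above expectations."
inputs = tokenizer(sentence1, sentence2, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 2)
print("predicted label id:", int(logits.argmax(dim=-1)))
```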
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sts/mrpc", "adapter-transformers"]} | AdapterHub/roberta-base-pf-mrpc | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:sts/mrpc",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-mrpc' for roberta-base
An adapter for the 'roberta-base' model that was trained on the sts/mrpc dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-mrpc' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/mrpc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-mrpc' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/mrpc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
39,
68,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-mrpc' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/mrpc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-multirc` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/multirc](https://adapterhub.ml/explore/rc/multirc/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-multirc", source="hf")
model.active_adapters = adapter_name
```
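MultiRC is framed as a binary decision over (passage, question, candidate answer) triples. This card does not document how the three parts were concatenated during training, so the formatting below (passage as the first segment, question and answer joined as the second) is an assumption, and the texts are invented.

```python
import torch
from transformers import AutoTokenizer

# Assumes `model` from the loading snippet above, with the adapter active.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
passage = "Tom bought three apples at the market and gave one to his sister."
question_and_answer = "How many apples did Tom keep? Two."
inputs = tokenizer(passage, question_and_answer, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # (1, 2): candidate answer judged correct or not
print("predicted label id:", int(logits.argmax(dim=-1)))
```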
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "adapterhub:rc/multirc", "roberta", "adapter-transformers"]} | AdapterHub/roberta-base-pf-multirc | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:rc/multirc",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-multirc' for roberta-base
An adapter for the 'roberta-base' model that was trained on the rc/multirc dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-multirc' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/multirc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-multirc' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/multirc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
39,
68,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-multirc' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/multirc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
question-answering | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-newsqa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [newsqa](https://huggingface.co/datasets/newsqa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-newsqa", source="hf")
model.active_adapters = adapter_name
```
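As with other extractive QA adapters, the question is encoded together with a news passage and the answer span is decoded from the predicted start/end positions. The question and passage below are invented, and the `start_logits`/`end_logits` attribute names assume the usual question-answering output layout.

```python
import torch
from transformers import AutoTokenizer

# Assumes `model` from the loading snippet above, with the adapter active.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
question = "Who won the election?"
context = "The incumbent mayor won the election by a narrow margin after a lengthy recount."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = int(outputs.start_logits.argmax(dim=-1))
end = int(outputs.end_logits.argmax(dim=-1))
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```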
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["newsqa"]} | AdapterHub/roberta-base-pf-newsqa | null | [
"adapter-transformers",
"roberta",
"question-answering",
"en",
"dataset:newsqa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-newsqa' for roberta-base
An adapter for the 'roberta-base' model that was trained on the newsqa dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-newsqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the newsqa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-newsqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the newsqa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
35,
67,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-newsqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the newsqa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
token-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-pmb_sem_tagging` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [semtag/pmb](https://adapterhub.ml/explore/semtag/pmb/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-pmb_sem_tagging", source="hf")
model.active_adapters = adapter_name
```
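
As a quick sanity check, the tagging head can be queried directly. The sketch below continues from the snippet above; the example sentence is made up, and it assumes the head returns a standard output with per-token `logits` (mapping the predicted ids back to PMB semantic tags requires the label list stored with the prediction head).

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Hypothetical example sentence (not taken from the PMB dataset).
sentence = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # `model` with the active adapter from above

# One predicted tag id per (sub-)token.
print(torch.argmax(outputs.logits, dim=-1))
```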
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:semtag/pmb", "adapter-transformers"]} | AdapterHub/roberta-base-pf-pmb_sem_tagging | null | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:semtag/pmb",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-pmb_sem_tagging' for roberta-base
An adapter for the 'roberta-base' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-pmb_sem_tagging' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-pmb_sem_tagging' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
41,
77,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-pmb_sem_tagging' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-qnli` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/qnli](https://adapterhub.ml/explore/nli/qnli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-qnli", source="hf")
model.active_adapters = adapter_name
```
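
For illustration, a minimal inference sketch is shown below, continuing from the snippet above. The question/sentence pair is invented, and the sketch assumes the classification head returns a standard output with `logits`; which index corresponds to *entailment* is defined by the head's label configuration.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Hypothetical QNLI-style pair: does the sentence answer the question?
question = "Where is the Eiffel Tower located?"
sentence = "The Eiffel Tower stands on the Champ de Mars in Paris."
inputs = tokenizer(question, sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # `model` with the active adapter from above

# Index of the predicted class, as defined by the head's label order.
print(torch.argmax(outputs.logits, dim=-1).item())
```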
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:nli/qnli", "adapter-transformers"]} | AdapterHub/roberta-base-pf-qnli | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:nli/qnli",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-qnli' for roberta-base
An adapter for the 'roberta-base' model that was trained on the nli/qnli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-qnli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/qnli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-qnli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/qnli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
41,
71,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-qnli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/qnli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-qqp` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/qqp](https://adapterhub.ml/explore/sts/qqp/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-qqp", source="hf")
model.active_adapters = adapter_name
```
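
To illustrate inference, the sketch below continues from the snippet above and scores an invented question pair for duplicate detection. It assumes the classification head returns a standard output with `logits`; the mapping of class indices to *duplicate* / *not duplicate* comes from the head's label configuration.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Hypothetical question pair in the style of Quora Question Pairs.
q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # `model` with the active adapter from above

print(torch.argmax(outputs.logits, dim=-1).item())
```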
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "adapter-transformers", "adapterhub:sts/qqp", "roberta"]} | AdapterHub/roberta-base-pf-qqp | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:sts/qqp",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-qqp' for roberta-base
An adapter for the 'roberta-base' model that was trained on the sts/qqp dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-qqp' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/qqp dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-qqp' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/qqp dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
40,
70,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-qqp' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/qqp dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-quail` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [quail](https://huggingface.co/datasets/quail/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-quail", source="hf")
model.active_adapters = adapter_name
```
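
A rough multiple-choice inference sketch is given below, continuing from the snippet above. The context, question and answer options are invented, and the sketch assumes the model accepts the `(batch, num_choices, sequence_length)` input layout commonly used for Hugging Face multiple-choice models and that the head returns one logit per choice in `logits`.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Hypothetical QuAIL-style example (context, question, four answer options).
context = "Anna missed the bus and arrived late to the meeting."
question = "Why was Anna late?"
choices = [
    "She missed the bus.",
    "She overslept.",
    "She forgot about the meeting.",
    "Not enough information.",
]

# Encode one (context + question, choice) pair per option, then add a batch dimension.
encoded = tokenizer([f"{context} {question}"] * len(choices), choices,
                    return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoded.items()}  # (1, num_choices, seq_len)

with torch.no_grad():
    outputs = model(**inputs)  # `model` with the active adapter from above

print(choices[torch.argmax(outputs.logits, dim=-1).item()])
```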
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` | {"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["quail"]} | AdapterHub/roberta-base-pf-quail | null | [
"adapter-transformers",
"roberta",
"en",
"dataset:quail",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #en #dataset-quail #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-quail' for roberta-base
An adapter for the 'roberta-base' model that was trained on the quail dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-quail' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #en #dataset-quail #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-quail' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
31,
67,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #en #dataset-quail #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-quail' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-quartz` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [quartz](https://huggingface.co/datasets/quartz/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-quartz", source="hf")
model.active_adapters = adapter_name
```
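
The sketch below, continuing from the snippet above, illustrates how the two answer options of a QuaRTz-style question might be scored. The question and options are invented, and it assumes the model accepts the `(batch, num_choices, sequence_length)` layout used by Hugging Face multiple-choice models and exposes one logit per choice in `logits`.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Hypothetical QuaRTz-style question with two answer options.
question = "If the brakes are applied harder, the car stops"
choices = ["sooner", "later"]

encoded = tokenizer([question] * len(choices), choices,
                    return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoded.items()}  # (1, num_choices, seq_len)

with torch.no_grad():
    outputs = model(**inputs)  # `model` with the active adapter from above

print(choices[torch.argmax(outputs.logits, dim=-1).item()])
```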
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` | {"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["quartz"]} | AdapterHub/roberta-base-pf-quartz | null | [
"adapter-transformers",
"roberta",
"en",
"dataset:quartz",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #en #dataset-quartz #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-quartz' for roberta-base
An adapter for the 'roberta-base' model that was trained on the quartz dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-quartz' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #en #dataset-quartz #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-quartz' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
30,
65,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #en #dataset-quartz #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-quartz' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
question-answering | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-quoref` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [quoref](https://huggingface.co/datasets/quoref/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-quoref", source="hf")
model.active_adapters = adapter_name
```
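
Continuing from the snippet above, a minimal extractive-QA sketch could look like this. The question and passage are invented, and the sketch assumes the question-answering head returns the standard `start_logits`/`end_logits` fields.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Hypothetical Quoref-style example (coreference-heavy passage).
question = "Who adopted the stray cat?"
context = "After weeks of feeding the stray cat, Maria finally adopted it."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # `model` with the active adapter from above

# Decode the most likely answer span.
start = torch.argmax(outputs.start_logits).item()
end = torch.argmax(outputs.end_logits).item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```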
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["quoref"]} | AdapterHub/roberta-base-pf-quoref | null | [
"adapter-transformers",
"roberta",
"question-answering",
"en",
"dataset:quoref",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-quoref' for roberta-base
An adapter for the 'roberta-base' model that was trained on the quoref dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-quoref' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quoref dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-quoref' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quoref dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
36,
69,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-quoref' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quoref dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-race` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/race](https://adapterhub.ml/explore/rc/race/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-race", source="hf")
model.active_adapters = adapter_name
```
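
For illustration, the sketch below (continuing from the snippet above) scores the four options of an invented RACE-style question. It assumes the model accepts the `(batch, num_choices, sequence_length)` layout used by Hugging Face multiple-choice models and that the head returns one logit per option in `logits`.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Hypothetical RACE-style reading comprehension item.
article = "Tom studied every evening, so the final exam felt easy to him."
question = "Why did the exam feel easy to Tom?"
options = ["He studied every evening.", "He skipped the exam.",
           "The exam was cancelled.", "He guessed all answers."]

encoded = tokenizer([f"{article} {question}"] * len(options), options,
                    return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoded.items()}  # (1, num_choices, seq_len)

with torch.no_grad():
    outputs = model(**inputs)  # `model` with the active adapter from above

print(options[torch.argmax(outputs.logits, dim=-1).item()])
```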
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` | {"language": ["en"], "tags": ["adapterhub:rc/race", "roberta", "adapter-transformers"], "datasets": ["race"]} | AdapterHub/roberta-base-pf-race | null | [
"adapter-transformers",
"roberta",
"adapterhub:rc/race",
"en",
"dataset:race",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-race' for roberta-base
An adapter for the 'roberta-base' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-race' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-race' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
39,
67,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-race' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-record` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/record](https://adapterhub.ml/explore/rc/record/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-record", source="hf")
model.active_adapters = adapter_name
```
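
A rough inference sketch follows, continuing from the snippet above. How ReCoRD passages and entity candidates were paired during training is not spelled out in this card, so the input format below (passage plus a query with one candidate filled in, scored as a text pair) is an assumption; the sketch also assumes the classification head returns a standard output with `logits`.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Hypothetical ReCoRD-style input: passage and a query with a candidate entity filled in.
passage = "Liverpool beat Chelsea 2-0 in the cup final on Saturday."
filled_query = "Liverpool lifted the trophy after the final."
inputs = tokenizer(passage, filled_query, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # `model` with the active adapter from above

# Score for whether the filled-in candidate is correct, per the head's label order.
print(torch.argmax(outputs.logits, dim=-1).item())
```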
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:rc/record", "adapter-transformers"]} | AdapterHub/roberta-base-pf-record | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:rc/record",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-record' for roberta-base
An adapter for the 'roberta-base' model that was trained on the rc/record dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-record' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/record dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-record' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/record dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
38,
66,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-record' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/record dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-rotten_tomatoes` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/rotten_tomatoes](https://adapterhub.ml/explore/sentiment/rotten_tomatoes/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-rotten_tomatoes", source="hf")
model.active_adapters = adapter_name
```
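
As a quick example, the sketch below continues from the snippet above and classifies an invented movie-review sentence. It assumes the classification head returns a standard output with `logits`; which index means *positive* is defined by the head's label configuration.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Hypothetical review sentence (not from the Rotten Tomatoes dataset).
review = "A warm, clever film that never overstays its welcome."
inputs = tokenizer(review, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # `model` with the active adapter from above

print(torch.argmax(outputs.logits, dim=-1).item())
```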
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sentiment/rotten_tomatoes", "adapter-transformers"], "datasets": ["rotten_tomatoes"]} | AdapterHub/roberta-base-pf-rotten_tomatoes | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:sentiment/rotten_tomatoes",
"en",
"dataset:rotten_tomatoes",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-rotten_tomatoes' for roberta-base
An adapter for the 'roberta-base' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-rotten_tomatoes' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-rotten_tomatoes' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
47,
70,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-rotten_tomatoes' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-rte` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/rte](https://adapterhub.ml/explore/nli/rte/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-rte", source="hf")
model.active_adapters = adapter_name
```
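
Continuing from the snippet above, a minimal sketch for scoring an invented premise/hypothesis pair is shown below. It assumes the classification head returns a standard output with `logits`; which index corresponds to *entailment* is determined by the head's label configuration.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Hypothetical RTE-style premise/hypothesis pair.
premise = "The company announced record profits for the third quarter."
hypothesis = "The company is losing money."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # `model` with the active adapter from above

print(torch.argmax(outputs.logits, dim=-1).item())
```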
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:nli/rte", "adapter-transformers"]} | AdapterHub/roberta-base-pf-rte | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:nli/rte",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-rte' for roberta-base
An adapter for the 'roberta-base' model that was trained on the nli/rte dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-rte' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/rte dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-rte' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/rte dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
39,
67,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-rte' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/rte dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-scicite` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [scicite](https://huggingface.co/datasets/scicite/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-scicite", source="hf")
model.active_adapters = adapter_name
```
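As a rough follow-up sketch, a citation sentence can then be classified like this (the example sentence and the logits assumption are illustrative, not taken from the official documentation):

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# SciCite labels citation intents (e.g. background, method, result comparison).
inputs = tokenizer(
    "We follow the training procedure described in previous work [3].",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
# Assumption: the first output element holds the classification logits.
print(outputs[0].argmax(dim=-1).item())
```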
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["scicite"]} | AdapterHub/roberta-base-pf-scicite | null | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:scicite",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-scicite' for roberta-base
An adapter for the 'roberta-base' model that was trained on the scicite dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-scicite' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the scicite dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-scicite' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the scicite dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
35,
66,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-scicite' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the scicite dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-scitail` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/scitail](https://adapterhub.ml/explore/nli/scitail/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-scitail", source="hf")
model.active_adapters = adapter_name
```
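A minimal inference sketch with invented example sentences follows; it assumes the first element of the model output holds the classification logits.

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# SciTail is a two-way entailment task (entails / neutral).
inputs = tokenizer(
    "Plants convert sunlight into chemical energy through photosynthesis.",
    "Photosynthesis turns light energy into chemical energy.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs[0].argmax(dim=-1).item())  # assumed: index of the predicted label
```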
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:nli/scitail", "adapter-transformers"], "datasets": ["scitail"]} | AdapterHub/roberta-base-pf-scitail | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:nli/scitail",
"en",
"dataset:scitail",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-scitail' for roberta-base
An adapter for the 'roberta-base' model that was trained on the nli/scitail dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-scitail' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/scitail dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-scitail' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/scitail dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
46,
69,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-scitail' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/scitail dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-sick` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/sick](https://adapterhub.ml/explore/nli/sick/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-sick", source="hf")
model.active_adapters = adapter_name
```
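For a quick sanity check, a sentence pair can be scored as sketched below (the example sentences and the logits assumption are ours):

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# SICK (NLI subtask) labels pairs as entailment, neutral or contradiction.
inputs = tokenizer(
    "A man is playing a guitar on stage.",
    "A musician is performing.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs[0].argmax(dim=-1).item())  # assumed: index of the predicted label
```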
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers", "adapterhub:nli/sick", "text-classification"], "datasets": ["sick"]} | AdapterHub/roberta-base-pf-sick | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:nli/sick",
"en",
"dataset:sick",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-sick' for roberta-base
An adapter for the 'roberta-base' model that was trained on the nli/sick dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-sick' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/sick dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-sick' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/sick dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
44,
67,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-sick' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/sick dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-snli` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [snli](https://huggingface.co/datasets/snli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-snli", source="hf")
model.active_adapters = adapter_name
```
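A short, hedged inference example: the premise and hypothesis below are illustrative, and the sketch assumes the head's logits come first in the output tuple.

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# SNLI has three labels: entailment, neutral, contradiction.
inputs = tokenizer(
    "A soccer game with multiple males playing.",
    "Some men are playing a sport.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs[0].argmax(dim=-1).item())
```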
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["snli"]} | AdapterHub/roberta-base-pf-snli | null | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:snli",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-snli' for roberta-base
An adapter for the 'roberta-base' model that was trained on the snli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-snli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the snli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-snli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the snli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
36,
68,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-snli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the snli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-social_i_qa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [social_i_qa](https://huggingface.co/datasets/social_i_qa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-social_i_qa", source="hf")
model.active_adapters = adapter_name
```
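Multiple-choice inference requires encoding each answer candidate together with the context and question. The following outline is an untested sketch: the example instance is invented, and it assumes the multiple-choice head produces one score per encoded candidate.

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
context = "Cameron decided to have a barbecue and gathered her friends together."
question = "How would others feel as a result?"
answers = ["like attending", "like staying home", "indifferent"]

# Encode one (context + question, answer) pair per candidate.
encoded = tokenizer(
    [f"{context} {question}"] * len(answers),
    answers,
    padding=True,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**encoded)
# Assumption: the head yields one score per candidate; pick the highest.
best = outputs[0].view(-1).argmax().item()
print(answers[best])
```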
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` | {"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["social_i_qa"]} | AdapterHub/roberta-base-pf-social_i_qa | null | [
"adapter-transformers",
"roberta",
"en",
"dataset:social_i_qa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #en #dataset-social_i_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-social_i_qa' for roberta-base
An adapter for the 'roberta-base' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-social_i_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #en #dataset-social_i_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-social_i_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
35,
75,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #en #dataset-social_i_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-social_i_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
question-answering | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-squad` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/squad1](https://adapterhub.ml/explore/qa/squad1/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-squad", source="hf")
model.active_adapters = adapter_name
```
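For extractive QA, question and context are encoded together and an answer span is read off the predicted start and end positions. The sketch below is simplified (no batching or long-context handling) and assumes the first two output elements are the start and end logits.

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
question = "Who wrote the novel?"
context = "The novel was written by Jane Austen and published in 1813."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Assumption: outputs[0] are the start logits, outputs[1] the end logits.
start = outputs[0].argmax(dim=-1).item()
end = outputs[1].argmax(dim=-1).item()
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```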
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["question-answering", "roberta", "adapterhub:qa/squad1", "adapter-transformers"], "datasets": ["squad"]} | AdapterHub/roberta-base-pf-squad | null | [
"adapter-transformers",
"roberta",
"question-answering",
"adapterhub:qa/squad1",
"en",
"dataset:squad",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-squad' for roberta-base
An adapter for the 'roberta-base' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-squad' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-squad' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
45,
69,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-squad' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
question-answering | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-squad_v2` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/squad2](https://adapterhub.ml/explore/qa/squad2/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-squad_v2", source="hf")
model.active_adapters = adapter_name
```
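Inference works as for SQuAD 1.1, except that SQuAD 2.0 also contains unanswerable questions, which the simplified sketch below does not handle (it always extracts a span; the start/end-logit assumption is ours).

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
question = "When was the bridge opened?"
context = "The Golden Gate Bridge opened to traffic in 1937."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = outputs[0].argmax(dim=-1).item()  # assumed start logits
end = outputs[1].argmax(dim=-1).item()    # assumed end logits
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```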
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["question-answering", "roberta", "adapterhub:qa/squad2", "adapter-transformers"], "datasets": ["squad_v2"]} | AdapterHub/roberta-base-pf-squad_v2 | null | [
"adapter-transformers",
"roberta",
"question-answering",
"adapterhub:qa/squad2",
"en",
"dataset:squad_v2",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-squad_v2' for roberta-base
An adapter for the 'roberta-base' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-squad_v2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-squad_v2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
48,
72,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-squad_v2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-sst2` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/sst-2](https://adapterhub.ml/explore/sentiment/sst-2/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-sst2", source="hf")
model.active_adapters = adapter_name
```
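A short usage sketch for sentiment prediction follows; the example sentence and the label-index interpretation are assumptions on our part.

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This movie was absolutely wonderful.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Assumption: index 1 corresponds to positive sentiment, index 0 to negative.
print(outputs[0].argmax(dim=-1).item())
```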
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sentiment/sst-2", "adapter-transformers"]} | AdapterHub/roberta-base-pf-sst2 | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:sentiment/sst-2",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-sst2' for roberta-base
An adapter for the 'roberta-base' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-sst2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-sst2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
41,
71,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-sst2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-stsb` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/sts-b](https://adapterhub.ml/explore/sts/sts-b/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-stsb", source="hf")
model.active_adapters = adapter_name
```
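STS-B is a regression task, so the head is expected to output a single similarity score rather than class logits. The sketch below reflects that assumption with invented example sentences.

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer(
    "A man is playing a guitar.",
    "Someone is playing an instrument.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
# Assumption: the head outputs one regression value on the 0-5 STS-B scale.
print(outputs[0].squeeze().item())
```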
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sts/sts-b", "adapter-transformers"]} | AdapterHub/roberta-base-pf-stsb | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:sts/sts-b",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-stsb' for roberta-base
An adapter for the 'roberta-base' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-stsb' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-stsb' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
40,
69,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-stsb' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-swag` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [swag](https://huggingface.co/datasets/swag/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-swag", source="hf")
model.active_adapters = adapter_name
```
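
For illustration only, here is a rough inference sketch. The context, the candidate endings, and the assumption that the multiple-choice head folds the per-pair scores into one row of `num_choices` logits are not taken from the original card and may need adjusting to your adapter-transformers version.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
context = "She opened the umbrella because"  # hypothetical SWAG-style context
endings = [
    "it started to rain.",
    "the sun came out.",
    "the bus was late.",
    "she finished her coffee.",
]
# Encode one (context, ending) pair per candidate; the multiple-choice head
# is assumed to regroup the per-pair scores into a single row of logits.
enc = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**enc).logits  # expected shape: (1, num_choices)
print(logits.argmax(dim=-1).item())
```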
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` | {"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["swag"]} | AdapterHub/roberta-base-pf-swag | null | [
"adapter-transformers",
"roberta",
"en",
"dataset:swag",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #en #dataset-swag #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-swag' for roberta-base
An adapter for the 'roberta-base' model that was trained on the swag dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-swag' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #en #dataset-swag #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-swag' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
31,
67,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #en #dataset-swag #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-swag' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-trec` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [trec](https://huggingface.co/datasets/trec/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-trec", source="hf")
model.active_adapters = adapter_name
```
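
To try the classifier out, a minimal sketch (not from the original card) could look as follows; the example question is invented, and the predicted index maps to a TREC question type via the label set stored with the prediction head.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Classify a (made-up) question into one of the TREC question types.
inputs = tokenizer("Who wrote the novel Moby Dick?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# The index corresponds to a coarse TREC category (e.g. person, location, number).
print(logits.argmax(dim=-1).item())
```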
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["trec"]} | AdapterHub/roberta-base-pf-trec | null | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:trec",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-trec' for roberta-base
An adapter for the 'roberta-base' model that was trained on the trec dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-trec' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the trec dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-trec' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the trec dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
35,
66,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-trec' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the trec dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
token-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-ud_deprel` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [deprel/ud_ewt](https://adapterhub.ml/explore/deprel/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-ud_deprel", source="hf")
model.active_adapters = adapter_name
```
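
A minimal tagging sketch, assuming the head emits one label per subword token: the sentence is made up, and the predicted indices map to UD dependency relation labels through the head's label vocabulary. This example is not part of the original card.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Predict a dependency relation label for every subword token of a sample sentence.
inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)
print(logits.argmax(dim=-1).squeeze().tolist())
```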
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:deprel/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]} | AdapterHub/roberta-base-pf-ud_deprel | null | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:deprel/ud_ewt",
"en",
"dataset:universal_dependencies",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-ud_deprel' for roberta-base
An adapter for the 'roberta-base' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-ud_deprel' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-ud_deprel' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
51,
76,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-ud_deprel' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-ud_en_ewt` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [dp/ud_ewt](https://adapterhub.ml/explore/dp/ud_ewt/) dataset and includes a prediction head for dependency parsing.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-ud_en_ewt", source="hf", set_active=True)
```
## Architecture & Training
This adapter was trained using adapter-transformers' example script for dependency parsing.
See https://github.com/Adapter-Hub/adapter-transformers/tree/master/examples/dependency-parsing.
## Evaluation results
Scores achieved by dependency parsing adapters on the test set of UD English EWT after training:
| Model | UAS | LAS |
| --- | --- | --- |
| `bert-base-uncased` | 91.74 | 89.15 |
| `roberta-base` | 91.43 | 88.43 |
| {"language": ["en"], "tags": ["roberta", "adapterhub:dp/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]} | AdapterHub/roberta-base-pf-ud_en_ewt | null | [
"adapter-transformers",
"roberta",
"adapterhub:dp/ud_ewt",
"en",
"dataset:universal_dependencies",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#adapter-transformers #roberta #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us
| Adapter 'AdapterHub/roberta-base-pf-ud\_en\_ewt' for roberta-base
=================================================================
An adapter for the 'roberta-base' model that was trained on the dp/ud\_ewt dataset and includes a prediction head for dependency parsing.
This adapter was created for usage with the adapter-transformers library.
Usage
-----
First, install 'adapter-transformers':
*Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More*
Now, the adapter can be loaded and activated like this:
Architecture & Training
-----------------------
This adapter was trained using adapter-transformer's example script for dependency parsing.
See URL
Evaluation results
------------------
Scores achieved by dependency parsing adapters on the test set of UD English EWT after training:
Model: 'bert-base-uncased', UAS: 91.74, LAS: 89.15
Model: 'roberta-base', UAS: 91.43, LAS: 88.43
| [] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us \n"
] | [
35
] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us \n"
] |
token-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-ud_pos` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [pos/ud_ewt](https://adapterhub.ml/explore/pos/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-ud_pos", source="hf")
model.active_adapters = adapter_name
```
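
As a small usage sketch (not part of the original card), the tagger can be applied to a sentence like this; label indices correspond to universal POS tags via the head configuration, and predictions are per subword token, not per word.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Tag each subword token of a (hypothetical) sentence with a universal POS label.
inputs = tokenizer("Time flies like an arrow.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)
print(logits.argmax(dim=-1).squeeze().tolist())
```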
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:pos/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]} | AdapterHub/roberta-base-pf-ud_pos | null | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:pos/ud_ewt",
"en",
"dataset:universal_dependencies",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-ud_pos' for roberta-base
An adapter for the 'roberta-base' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-ud_pos' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-ud_pos' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
50,
74,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-ud_pos' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-wic` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [wordsence/wic](https://adapterhub.ml/explore/wordsence/wic/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-wic", source="hf")
model.active_adapters = adapter_name
```
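
For orientation, a hedged inference sketch: it assumes the adapter consumes plain (sentence1, sentence2) pairs as in the SuperGLUE formulation, which may differ from the exact input packing used during training, and the sentences are invented.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Does "bank" carry the same sense in both (made-up) sentences?
inputs = tokenizer(
    "The bank raised its interest rates.",
    "They had a picnic on the bank of the river.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits  # binary head: same sense vs. different sense
print(logits.argmax(dim=-1).item())
```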
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:wordsence/wic", "adapter-transformers"]} | AdapterHub/roberta-base-pf-wic | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:wordsence/wic",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-wic' for roberta-base
An adapter for the 'roberta-base' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-wic' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-wic' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
40,
69,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-wic' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
question-answering | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-wikihop` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/wikihop](https://adapterhub.ml/explore/qa/wikihop/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-wikihop", source="hf")
model.active_adapters = adapter_name
```
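
The following sketch (not from the original card) treats the head as an extractive question-answering head that returns `start_logits` and `end_logits`; the question and context are invented.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
question = "Which country is the Eiffel Tower located in?"  # hypothetical example
context = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Pick the most likely answer span from the context.
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```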
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["question-answering", "roberta", "adapterhub:qa/wikihop", "adapter-transformers"]} | AdapterHub/roberta-base-pf-wikihop | null | [
"adapter-transformers",
"roberta",
"question-answering",
"adapterhub:qa/wikihop",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-wikihop' for roberta-base
An adapter for the 'roberta-base' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-wikihop' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-wikihop' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
41,
72,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-wikihop' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-winogrande` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/winogrande](https://adapterhub.ml/explore/comsense/winogrande/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-winogrande", source="hf")
model.active_adapters = adapter_name
```
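
As a rough, hedged sketch (the exact input packing used during training may differ), each candidate can be substituted into the blank and scored by the multiple-choice head; the item below is invented and not taken from the original card.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Hypothetical WinoGrande-style item: pick the more plausible filler for the blank.
sentence = "The trophy does not fit into the suitcase because the _ is too small."
options = ["trophy", "suitcase"]
candidates = [sentence.replace("_", opt) for opt in options]
enc = tokenizer(candidates, return_tensors="pt", padding=True)
with torch.no_grad():
    # The multiple-choice head is assumed to regroup the per-candidate scores to (1, 2).
    logits = model(**enc).logits
print(options[logits.argmax(dim=-1).item()])
```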
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` | {"language": ["en"], "tags": ["roberta", "adapterhub:comsense/winogrande", "adapter-transformers"], "datasets": ["winogrande"]} | AdapterHub/roberta-base-pf-winogrande | null | [
"adapter-transformers",
"roberta",
"adapterhub:comsense/winogrande",
"en",
"dataset:winogrande",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-winogrande' for roberta-base
An adapter for the 'roberta-base' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-winogrande' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-winogrande' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
47,
75,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-winogrande' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
token-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-wnut_17` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [wnut_17](https://huggingface.co/datasets/wnut_17/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-wnut_17", source="hf")
model.active_adapters = adapter_name
```
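
A minimal sketch for named entity tagging (not part of the original card): indices correspond to the WNUT-17 BIO labels stored with the head, and predictions are per subword token.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Tag a (made-up) tweet-like sentence with emerging-entity labels.
inputs = tokenizer("I saw Elon Musk at the SpaceX launch today.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)
print(logits.argmax(dim=-1).squeeze().tolist())
```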
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["token-classification", "roberta", "adapter-transformers"], "datasets": ["wnut_17"]} | AdapterHub/roberta-base-pf-wnut_17 | null | [
"adapter-transformers",
"roberta",
"token-classification",
"en",
"dataset:wnut_17",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-wnut_17' for roberta-base
An adapter for the 'roberta-base' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-wnut_17' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-wnut_17' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
37,
71,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-wnut_17' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-yelp_polarity` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [yelp_polarity](https://huggingface.co/datasets/yelp_polarity/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-yelp_polarity", source="hf")
model.active_adapters = adapter_name
```
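Once the adapter is active, the bundled classification head can be queried directly. The snippet below is a minimal sketch (the example review is illustrative, and it assumes the head exposes standard classification logits):

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("The food was great and the staff were friendly!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Index of the predicted polarity class according to the adapter's prediction head
print(outputs.logits.argmax(dim=-1).item())
```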
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["yelp_polarity"]} | AdapterHub/roberta-base-pf-yelp_polarity | null | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:yelp_polarity",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-yelp_polarity' for roberta-base
An adapter for the 'roberta-base' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.
## Evaluation results
Refer to the paper for more information on results.
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
| [
"# Adapter 'AdapterHub/roberta-base-pf-yelp_polarity' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-yelp_polarity' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.",
"## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] | [
38,
72,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-yelp_polarity' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | AdharshJolly/HarryPotterBot-Model | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model"
] |
text-classification | transformers |
# Model
- Problem type: Binary Classification
- Model ID: 12592372
## Validation Metrics
- Loss: 0.23033875226974487
- Accuracy: 0.9138655462184874
- Precision: 0.9087136929460581
- Recall: 0.9201680672268907
- AUC: 0.9690346726926065
- F1: 0.9144050104384133
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Adi2K/autonlp-Priv-Consent-12592372
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Adi2K/autonlp-Priv-Consent-12592372", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Adi2K/autonlp-Priv-Consent-12592372", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
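# Illustrative follow-up (not part of the original snippet): map the logits to a label.
# The exact id-to-label mapping should be checked in model.config.id2label.
predicted_id = outputs.logits.softmax(dim=-1).argmax(dim=-1).item()
print(model.config.id2label[predicted_id])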
``` | {"language": "eng", "datasets": ["Adi2K/autonlp-data-Priv-Consent"], "widget": [{"text": "You can control cookies and tracking tools. To learn how to manage how we - and our vendors - use cookies and other tracking tools, please click here."}]} | Adi2K/Priv-Consent | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"eng",
"dataset:Adi2K/autonlp-data-Priv-Consent",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eng"
] | TAGS
#transformers #pytorch #bert #text-classification #eng #dataset-Adi2K/autonlp-data-Priv-Consent #autotrain_compatible #endpoints_compatible #region-us
|
# Model
- Problem type: Binary Classification
- Model ID: 12592372
## Validation Metrics
- Loss: 0.23033875226974487
- Accuracy: 0.9138655462184874
- Precision: 0.9087136929460581
- Recall: 0.9201680672268907
- AUC: 0.9690346726926065
- F1: 0.9144050104384133
## Usage
You can use cURL to access this model:
Or Python API:
| [
"# Model\n\n- Problem type: Binary Classification\n- Model ID: 12592372",
"## Validation Metrics\n\n- Loss: 0.23033875226974487\n- Accuracy: 0.9138655462184874\n- Precision: 0.9087136929460581\n- Recall: 0.9201680672268907\n- AUC: 0.9690346726926065\n- F1: 0.9144050104384133",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #eng #dataset-Adi2K/autonlp-data-Priv-Consent #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model\n\n- Problem type: Binary Classification\n- Model ID: 12592372",
"## Validation Metrics\n\n- Loss: 0.23033875226974487\n- Accuracy: 0.9138655462184874\n- Precision: 0.9087136929460581\n- Recall: 0.9201680672268907\n- AUC: 0.9690346726926065\n- F1: 0.9144050104384133",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
48,
17,
98,
16
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #eng #dataset-Adi2K/autonlp-data-Priv-Consent #autotrain_compatible #endpoints_compatible #region-us \n# Model\n\n- Problem type: Binary Classification\n- Model ID: 12592372## Validation Metrics\n\n- Loss: 0.23033875226974487\n- Accuracy: 0.9138655462184874\n- Precision: 0.9087136929460581\n- Recall: 0.9201680672268907\n- AUC: 0.9690346726926065\n- F1: 0.9144050104384133## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9314
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
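The list above corresponds roughly to the following `TrainingArguments`; the original training script is not part of this card, so treat the snippet as an illustrative sketch (the `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-colab",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=3,
    fp16=True,  # "Native AMP"
)
```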
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 8.686 | 0.16 | 20 | 13.6565 | 1.0 |
| 8.0711 | 0.32 | 40 | 12.5379 | 1.0 |
| 6.9967 | 0.48 | 60 | 9.7215 | 1.0 |
| 5.2368 | 0.64 | 80 | 5.8459 | 1.0 |
| 3.4499 | 0.8 | 100 | 3.3413 | 1.0 |
| 3.1261 | 0.96 | 120 | 3.2858 | 1.0 |
| 3.0654 | 1.12 | 140 | 3.1945 | 1.0 |
| 3.0421 | 1.28 | 160 | 3.1296 | 1.0 |
| 3.0035 | 1.44 | 180 | 3.1172 | 1.0 |
| 3.0067 | 1.6 | 200 | 3.1217 | 1.0 |
| 2.9867 | 1.76 | 220 | 3.0715 | 1.0 |
| 2.9653 | 1.92 | 240 | 3.0747 | 1.0 |
| 2.9629 | 2.08 | 260 | 2.9984 | 1.0 |
| 2.9462 | 2.24 | 280 | 2.9991 | 1.0 |
| 2.9391 | 2.4 | 300 | 3.0391 | 1.0 |
| 2.934 | 2.56 | 320 | 2.9682 | 1.0 |
| 2.9193 | 2.72 | 340 | 2.9701 | 1.0 |
| 2.8985 | 2.88 | 360 | 2.9314 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]} | Adil617/wav2vec2-base-timit-demo-colab | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-base-timit-demo-colab
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.9314
* Wer: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] | [
47,
128,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# Harry Potter DialoGPT model | {"tags": ["conversational"]} | AdrianGzz/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT model | [
"# Harry Potter DialoGPT model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT model"
] |
text-generation | transformers | # DialoGPT Trained on the Speech of a Game Character
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print the last output tokens from the bot
    print("Tsubomi: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` | {"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"} | Aero/Tsubomi-Haruno | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # DialoGPT Trained on the Speech of a Game Character
| [
"# DialoGPT Trained on the Speech of a Game Character"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# DialoGPT Trained on the Speech of a Game Character"
] | [
43,
12
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# DialoGPT Trained on the Speech of a Game Character"
] |
text-generation | null |
#HAL | {"tags": ["conversational"]} | AetherIT/DialoGPT-small-Hal | null | [
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#conversational #region-us
|
#HAL | [] | [
"TAGS\n#conversational #region-us \n"
] | [
8
] | [
"TAGS\n#conversational #region-us \n"
] |
image-classification | transformers |
# Tomato_Leaf_Classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
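For reference, inference can be run with the standard image-classification pipeline; this is an illustrative sketch and the file path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Aftabhussain/Tomato_Leaf_Classifier")
# Returns the predicted labels (e.g. Bacterial_spot / Healthy) with scores
print(classifier("my_leaf_photo.jpg"))
```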
## Example Images
#### Bacterial_spot
![Bacterial_spot](images/Bacterial_spot.JPG)
#### Healthy
![Healthy](images/Healthy.JPG) | {"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]} | Aftabhussain/Tomato_Leaf_Classifier | null | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# Tomato_Leaf_Classifier
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
## Example Images
#### Bacterial_spot
!Bacterial_spot
#### Healthy
!Healthy | [
"# Tomato_Leaf_Classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### Bacterial_spot\n\n!Bacterial_spot",
"#### Healthy\n\n!Healthy"
] | [
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# Tomato_Leaf_Classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### Bacterial_spot\n\n!Bacterial_spot",
"#### Healthy\n\n!Healthy"
] | [
40,
45,
4,
11,
7
] | [
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n# Tomato_Leaf_Classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.## Example Images#### Bacterial_spot\n\n!Bacterial_spot#### Healthy\n\n!Healthy"
] |
text2text-generation | transformers | A monolingual T5 model for Persian trained on the OSCAR 21.09 (https://oscar-corpus.com/) corpus with a self-supervised method. A 35 GB deduplicated version of the Persian data was used for pre-training the model.
It's similar to the English T5 model but just for Persian. You may need to fine-tune it on your specific task.
Example code:
```
from transformers import T5ForConditionalGeneration,AutoTokenizer
import torch
model_name = "Ahmad/parsT5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
input_ids = tokenizer.encode('دانش آموزان به <extra_id_0> میروند و <extra_id_1> میخوانند.', return_tensors='pt')
with torch.no_grad():
    hypotheses = model.generate(input_ids)

for h in hypotheses:
    print(tokenizer.decode(h))
```
Steps: 725000
Accuracy: 0.66
Training More?
========
To train the model further, please refer to its GitHub repository at:
https://github.com/puraminy/parsT5
| {} | Ahmad/parsT5-base | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| A monolingual T5 model for Persian trained on OSCAR 21.09 (URL corpus with self-supervised method. 35 Gig deduplicated version of Persian data was used for pre-training the model.
It's similar to the English T5 model but just for Persian. You may need to fine-tune it on your specific task.
Example code:
Steps: 725000
Accuracy: 0.66
Training More?
========
To train the model further please refer to its github repository at:
URL
| [] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
37
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation | transformers | A checkpoint for training a Persian T5 model. This repository can be cloned so that pre-training can be resumed. The model uses Flax and is intended for training.
For more information and to get the training code, please refer to:
https://github.com/puraminy/parsT5
| {} | Ahmad/parsT5 | null | [
"transformers",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| A checkpoint for training Persian T5 model. This repository can be cloned and pre-training can be resumed. This model uses flax and is for training.
For more information and getting the training code please refer to:
URL
| [] | [
"TAGS\n#transformers #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
34
] | [
"TAGS\n#transformers #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification | transformers |
This is a fine-tuned BERT model on Tunisian-dialect text (dataset used: AhmedBou/Tunisian-Dialect-Corpus), ready for sentiment analysis and classification tasks.
LABEL_1: Positive
LABEL_2: Negative
LABEL_0: Neutral
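A minimal sketch of how these labels can be mapped back to sentiment names (the example sentence is only an illustration):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AhmedBou/TuniBert")
label_map = {"LABEL_0": "Neutral", "LABEL_1": "Positive", "LABEL_2": "Negative"}

result = classifier("برشا باهي")  # illustrative Tunisian-dialect input
print(label_map[result[0]["label"]], result[0]["score"])
```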
This work is an integral component of my Master's degree thesis and represents the culmination of extensive research and labor.
If you wish to utilize the Tunisian-Dialect-Corpus or the TuniBert model, kindly refer to the directory provided. [huggingface.co/AhmedBou][github.com/BoulahiaAhmed] | {"language": ["ar"], "license": "apache-2.0", "tags": ["sentiment analysis", "classification", "arabic dialect", "tunisian dialect"]} | AhmedBou/TuniBert | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"sentiment analysis",
"classification",
"arabic dialect",
"tunisian dialect",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #text-classification #sentiment analysis #classification #arabic dialect #tunisian dialect #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
This is a fineTued Bert model on Tunisian dialect text (Used dataset: AhmedBou/Tunisian-Dialect-Corpus), ready for sentiment analysis and classification tasks.
LABEL_1: Positive
LABEL_2: Negative
LABEL_0: Neutral
This work is an integral component of my Master's degree thesis and represents the culmination of extensive research and labor.
If you wish to utilize the Tunisian-Dialect-Corpus or the TuniBert model, kindly refer to the directory provided. [URL | [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #sentiment analysis #classification #arabic dialect #tunisian dialect #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
49
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #sentiment analysis #classification #arabic dialect #tunisian dialect #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text2text-generation | transformers |
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/mariancg-a-code-generation-transformer-model/code-generation-on-conala)](https://paperswithcode.com/sota/code-generation-on-conala?p=mariancg-a-code-generation-transformer-model)
# MarianCG: a code generation transformer model inspired by machine translation
This model aims to improve code generation by implementing a transformer model that produces highly accurate results. We implemented MarianCG, a transformer model that can generate code from natural language. This work demonstrates the impact of using the Marian machine translation model to solve the code generation problem. In our implementation, we show that a machine translation model can operate as a code generation model. Finally, we set a new state of the art on CoNaLa, reaching a BLEU score of 30.92 and an exact match accuracy of 6.2 on the code generation task with the CoNaLa dataset.
The MarianCG model and its implementation, together with the training code and the generated output, are available at this repository:
https://github.com/AhmedSSoliman/MarianCG-NL-to-Code
CoNaLa Dataset for Code Generation is available at
https://huggingface.co/datasets/AhmedSSoliman/CoNaLa
The model is available on the Hugging Face Hub: https://huggingface.co/AhmedSSoliman/MarianCG-CoNaLa
```python
# Model and Tokenizer
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# model_name = "AhmedSSoliman/MarianCG-NL-to-Code"
model = AutoModelForSeq2SeqLM.from_pretrained("AhmedSSoliman/MarianCG-CoNaLa")
tokenizer = AutoTokenizer.from_pretrained("AhmedSSoliman/MarianCG-CoNaLa")
# Input (Natural Language) and Output (Python Code)
NL_input = "create array containing the maximum value of respective elements of array `[2, 3, 4]` and array `[1, 5, 2]"
output = model.generate(**tokenizer(NL_input, padding="max_length", truncation=True, max_length=512, return_tensors="pt"))
output_code = tokenizer.decode(output[0], skip_special_tokens=True)
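# Inspect the generated Python snippet (added here for illustration)
print(output_code)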
```
This model is available in spaces using gradio at: https://huggingface.co/spaces/AhmedSSoliman/MarianCG-CoNaLa
---
Tasks:
- Translation
- Code Generation
- Text2Text Generation
- Text Generation
---
# Citation
We now have a [paper](https://doi.org/10.1186/s44147-022-00159-4) for this work and you can cite:
```
@article{soliman2022mariancg,
title={MarianCG: a code generation transformer model inspired by machine translation},
author={Soliman, Ahmed S and Hadhoud, Mayada M and Shaheen, Samir I},
journal={Journal of Engineering and Applied Science},
volume={69},
number={1},
pages={1--23},
year={2022},
publisher={SpringerOpen},
url={https://doi.org/10.1186/s44147-022-00159-4}
}
```
| {"widget": [{"text": "create array containing the maximum value of respective elements of array `[2, 3, 4]` and array `[1, 5, 2]"}, {"text": "check if all elements in list `mylist` are identical"}, {"text": "enable debug mode on flask application `app`"}, {"text": "getting the length of `my_tuple`"}, {"text": "find all files in directory \"/mydir\" with extension \".txt\""}]} | AhmedSSoliman/MarianCG-CoNaLa | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
|
![PWC](URL
# MarianCG: a code generation transformer model inspired by machine translation
This model is to improve the solving of the code generation problem and implement a transformer model that can work with high accurate results. We implemented MarianCG transformer model which is a code generation model that can be able to generate code from natural language. This work declares the impact of using Marian machine translation model for solving the problem of code generation. In our implementation, we prove that a machine translation model can be operated and working as a code generation model. Finally, we set the new contributors and state-of-the-art on CoNaLa reaching a BLEU score of 30.92 and Exact Match Accuracy of 6.2 in the code generation problem with CoNaLa dataset.
MarianCG model and its implemetation with the code of training and the generated output is available at this repository:
URL
CoNaLa Dataset for Code Generation is available at
URL
This is the model is avialable on the huggingface hub URL
This model is available in spaces using gradio at: URL
---
Tasks:
- Translation
- Code Generation
- Text2Text Generation
- Text Generation
---
We now have a paper for this work and you can cite:
| [
"# MarianCG: a code generation transformer model inspired by machine translation\nThis model is to improve the solving of the code generation problem and implement a transformer model that can work with high accurate results. We implemented MarianCG transformer model which is a code generation model that can be able to generate code from natural language. This work declares the impact of using Marian machine translation model for solving the problem of code generation. In our implementation, we prove that a machine translation model can be operated and working as a code generation model. Finally, we set the new contributors and state-of-the-art on CoNaLa reaching a BLEU score of 30.92 and Exact Match Accuracy of 6.2 in the code generation problem with CoNaLa dataset.\n\nMarianCG model and its implemetation with the code of training and the generated output is available at this repository:\nURL\n\nCoNaLa Dataset for Code Generation is available at\nURL\n\nThis is the model is avialable on the huggingface hub URL\n\n\nThis model is available in spaces using gradio at: URL\n\n\n---\nTasks:\n- Translation\n- Code Generation\n- Text2Text Generation\n- Text Generation\n---\n\n\nWe now have a paper for this work and you can cite:"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# MarianCG: a code generation transformer model inspired by machine translation\nThis model is to improve the solving of the code generation problem and implement a transformer model that can work with high accurate results. We implemented MarianCG transformer model which is a code generation model that can be able to generate code from natural language. This work declares the impact of using Marian machine translation model for solving the problem of code generation. In our implementation, we prove that a machine translation model can be operated and working as a code generation model. Finally, we set the new contributors and state-of-the-art on CoNaLa reaching a BLEU score of 30.92 and Exact Match Accuracy of 6.2 in the code generation problem with CoNaLa dataset.\n\nMarianCG model and its implemetation with the code of training and the generated output is available at this repository:\nURL\n\nCoNaLa Dataset for Code Generation is available at\nURL\n\nThis is the model is avialable on the huggingface hub URL\n\n\nThis model is available in spaces using gradio at: URL\n\n\n---\nTasks:\n- Translation\n- Code Generation\n- Text2Text Generation\n- Text Generation\n---\n\n\nWe now have a paper for this work and you can cite:"
] | [
34,
256
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n# MarianCG: a code generation transformer model inspired by machine translation\nThis model is to improve the solving of the code generation problem and implement a transformer model that can work with high accurate results. We implemented MarianCG transformer model which is a code generation model that can be able to generate code from natural language. This work declares the impact of using Marian machine translation model for solving the problem of code generation. In our implementation, we prove that a machine translation model can be operated and working as a code generation model. Finally, we set the new contributors and state-of-the-art on CoNaLa reaching a BLEU score of 30.92 and Exact Match Accuracy of 6.2 in the code generation problem with CoNaLa dataset.\n\nMarianCG model and its implemetation with the code of training and the generated output is available at this repository:\nURL\n\nCoNaLa Dataset for Code Generation is available at\nURL\n\nThis is the model is avialable on the huggingface hub URL\n\n\nThis model is available in spaces using gradio at: URL\n\n\n---\nTasks:\n- Translation\n- Code Generation\n- Text2Text Generation\n- Text Generation\n---\n\n\nWe now have a paper for this work and you can cite:"
] |
text-generation | transformers |
# Back to the Future DialoGPT Model | {"tags": ["conversational"]} | AiPorter/DialoGPT-small-Back_to_the_future | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Back to the Future DialoGPT Model | [
"# Back to the Future DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Back to the Future DialoGPT Model"
] | [
39,
9
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Back to the Future DialoGPT Model"
] |
text-generation | transformers |
# Rick DialoGPT Model | {"tags": ["conversational"]} | Aibox/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick DialoGPT Model | [
"# Rick DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick DialoGPT Model"
] | [
39,
6
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Rick DialoGPT Model"
] |
null | null | Trained on Stephen King's top 50 books as .txt files. | {} | Aidan8756/stephenKingModel | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| Trained on Stephen King's top 50 books as .txt files. | [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bashkir-cv7_opt
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset.
It achieves the following results on the evaluation set:
- Training Loss: 0.268400
- Validation Loss: 0.088252
- WER without LM: 0.085588
- WER with LM: 0.04440795062008041
- CER with LM: 0.010491234992390509
## Model description
Trained with this [Jupyter notebook](https://drive.google.com/file/d/1KohDXZtKBWXVPZYlsLtqfxJGBzKmTtSh/view?usp=sharing)
## Intended uses & limitations
In order to reduce the number of characters, the following letters have been replaced or removed:
- 'я' -> 'йа'
- 'ю' -> 'йу'
- 'ё' -> 'йо'
- 'е' -> 'йэ' for first letter
- 'е' -> 'э' for other cases
- 'ъ' -> deleted
- 'ь' -> deleted
Therefore, in order to get the correct text, you need to do the reverse transformation and use the language model.
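A rough sketch of that reverse mapping is given below; it is necessarily partial (a decoded 'э' may stand for an original 'е' or for a genuine 'э', and the deleted 'ъ'/'ь' cannot be restored), which is exactly why the language model is needed:

```python
def restore_text(decoded: str) -> str:
    # Illustrative partial reverse of the substitutions listed above
    out = decoded.replace("йа", "я").replace("йу", "ю").replace("йо", "ё")
    words = []
    for word in out.split():
        if word.startswith("йэ"):
            word = "е" + word[2:]  # a word-initial 'йэ' came from 'е'
        words.append(word)
    return " ".join(words)
```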
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu113
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"language": ["ba"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-bashkir-cv7_opt", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ba"}, "metrics": [{"type": "wer", "value": 0.04440795062008041, "name": "Test WER"}, {"type": "cer", "value": 0.010491234992390509, "name": "Test CER"}]}]}]} | AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_opt | null | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"ba",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ba"
] | TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #ba #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# wav2vec2-large-xls-r-300m-bashkir-cv7_opt
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset.
It achieves the following results on the evaluation set:
- Training Loss: 0.268400
- Validation Loss: 0.088252
- WER without LM: 0.085588
- WER with LM: 0.04440795062008041
- CER with LM: 0.010491234992390509
## Model description
Trained with this jupiter notebook
## Intended uses & limitations
In order to reduce the number of characters, the following letters have been replaced or removed:
- 'я' -> 'йа'
- 'ю' -> 'йу'
- 'ё' -> 'йо'
- 'е' -> 'йэ' for first letter
- 'е' -> 'э' for other cases
- 'ъ' -> deleted
- 'ь' -> deleted
Therefore, in order to get the correct text, you need to do the reverse transformation and use the language model.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu113
- Datasets 1.18.2
- Tokenizers 0.10.3
| [
"# wav2vec2-large-xls-r-300m-bashkir-cv7_opt\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset.\nIt achieves the following results on the evaluation set:\n- Training Loss: 0.268400\n- Validation Loss: 0.088252\n- WER without LM: 0.085588\n- WER with LM: 0.04440795062008041\n- CER with LM: 0.010491234992390509",
"## Model description\n\nTrained with this jupiter notebook",
"## Intended uses & limitations\n\nIn order to reduce the number of characters, the following letters have been replaced or removed:\n\n- 'я' -> 'йа'\n- 'ю' -> 'йу'\n- 'ё' -> 'йо'\n- 'е' -> 'йэ' for first letter\n- 'е' -> 'э' for other cases\n- 'ъ' -> deleted\n- 'ь' -> deleted\n\nTherefore, in order to get the correct text, you need to do the reverse transformation and use the language model.",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 300\n- num_epochs: 50\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.16.1\n- Pytorch 1.10.0+cu113\n- Datasets 1.18.2\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #ba #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# wav2vec2-large-xls-r-300m-bashkir-cv7_opt\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset.\nIt achieves the following results on the evaluation set:\n- Training Loss: 0.268400\n- Validation Loss: 0.088252\n- WER without LM: 0.085588\n- WER with LM: 0.04440795062008041\n- CER with LM: 0.010491234992390509",
"## Model description\n\nTrained with this jupiter notebook",
"## Intended uses & limitations\n\nIn order to reduce the number of characters, the following letters have been replaced or removed:\n\n- 'я' -> 'йа'\n- 'ю' -> 'йу'\n- 'ё' -> 'йо'\n- 'е' -> 'йэ' for first letter\n- 'е' -> 'э' for other cases\n- 'ъ' -> deleted\n- 'ь' -> deleted\n\nTherefore, in order to get the correct text, you need to do the reverse transformation and use the language model.",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 300\n- num_epochs: 50\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.16.1\n- Pytorch 1.10.0+cu113\n- Datasets 1.18.2\n- Tokenizers 0.10.3"
] | [
104,
151,
9,
116,
9,
4,
133,
44
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #ba #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n# wav2vec2-large-xls-r-300m-bashkir-cv7_opt\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset.\nIt achieves the following results on the evaluation set:\n- Training Loss: 0.268400\n- Validation Loss: 0.088252\n- WER without LM: 0.085588\n- WER with LM: 0.04440795062008041\n- CER with LM: 0.010491234992390509## Model description\n\nTrained with this jupiter notebook## Intended uses & limitations\n\nIn order to reduce the number of characters, the following letters have been replaced or removed:\n\n- 'я' -> 'йа'\n- 'ю' -> 'йу'\n- 'ё' -> 'йо'\n- 'е' -> 'йэ' for first letter\n- 'е' -> 'э' for other cases\n- 'ъ' -> deleted\n- 'ь' -> deleted\n\nTherefore, in order to get the correct text, you need to do the reverse transformation and use the language model.## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 300\n- num_epochs: 50\n- mixed_precision_training: Native AMP### Framework versions\n\n- Transformers 4.16.1\n- Pytorch 1.10.0+cu113\n- Datasets 1.18.2\n- Tokenizers 0.10.3"
] |
text2text-generation | transformers | You can use this model with simpletransformers.
```
!pip install simpletransformers
from simpletransformers.t5 import T5Model
model = T5Model("mt5", "AimB/mT5-en-kr-natural")
print(model.predict(["I feel good today"]))
print(model.predict(["우리집 고양이는 세상에서 제일 귀엽습니다"]))
``` | {} | AimB/mT5-en-kr-natural | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| you can use this model with simpletransfomers.
| [] | [
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
37
] | [
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 35248482
- CO2 Emissions (in grams): 7.989144645413398
## Validation Metrics
- Loss: 0.13783401250839233
- Accuracy: 0.9728654124457308
- Macro F1: 0.949537871674076
- Micro F1: 0.9728654124457308
- Weighted F1: 0.9732422812610365
- Macro Precision: 0.9380372699332605
- Micro Precision: 0.9728654124457308
- Weighted Precision: 0.974548513256663
- Macro Recall: 0.9689346153591594
- Micro Recall: 0.9728654124457308
- Weighted Recall: 0.9728654124457308
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Aimendo/autonlp-triage-35248482
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Aimendo/autonlp-triage-35248482", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Aimendo/autonlp-triage-35248482", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["Aimendo/autonlp-data-triage"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 7.989144645413398} | Aimendo/autonlp-triage-35248482 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:Aimendo/autonlp-data-triage",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Aimendo/autonlp-data-triage #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 35248482
- CO2 Emissions (in grams): 7.989144645413398
## Validation Metrics
- Loss: 0.13783401250839233
- Accuracy: 0.9728654124457308
- Macro F1: 0.949537871674076
- Micro F1: 0.9728654124457308
- Weighted F1: 0.9732422812610365
- Macro Precision: 0.9380372699332605
- Micro Precision: 0.9728654124457308
- Weighted Precision: 0.974548513256663
- Macro Recall: 0.9689346153591594
- Micro Recall: 0.9728654124457308
- Weighted Recall: 0.9728654124457308
## Usage
You can use cURL to access this model:
Or Python API:
| [
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 35248482\n- CO2 Emissions (in grams): 7.989144645413398",
"## Validation Metrics\n\n- Loss: 0.13783401250839233\n- Accuracy: 0.9728654124457308\n- Macro F1: 0.949537871674076\n- Micro F1: 0.9728654124457308\n- Weighted F1: 0.9732422812610365\n- Macro Precision: 0.9380372699332605\n- Micro Precision: 0.9728654124457308\n- Weighted Precision: 0.974548513256663\n- Macro Recall: 0.9689346153591594\n- Micro Recall: 0.9728654124457308\n- Weighted Recall: 0.9728654124457308",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Aimendo/autonlp-data-triage #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 35248482\n- CO2 Emissions (in grams): 7.989144645413398",
"## Validation Metrics\n\n- Loss: 0.13783401250839233\n- Accuracy: 0.9728654124457308\n- Macro F1: 0.949537871674076\n- Micro F1: 0.9728654124457308\n- Weighted F1: 0.9732422812610365\n- Macro Precision: 0.9380372699332605\n- Micro Precision: 0.9728654124457308\n- Weighted Precision: 0.974548513256663\n- Macro Recall: 0.9689346153591594\n- Micro Recall: 0.9728654124457308\n- Weighted Recall: 0.9728654124457308",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
58,
43,
173,
16
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Aimendo/autonlp-data-triage #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 35248482\n- CO2 Emissions (in grams): 7.989144645413398## Validation Metrics\n\n- Loss: 0.13783401250839233\n- Accuracy: 0.9728654124457308\n- Macro F1: 0.949537871674076\n- Micro F1: 0.9728654124457308\n- Weighted F1: 0.9732422812610365\n- Macro Precision: 0.9380372699332605\n- Micro Precision: 0.9728654124457308\n- Weighted Precision: 0.974548513256663\n- Macro Recall: 0.9689346153591594\n- Micro Recall: 0.9728654124457308\n- Weighted Recall: 0.9728654124457308## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 530014983
- CO2 Emissions (in grams): 55.10196329868386
## Validation Metrics
- Loss: 0.23171618580818176
- Accuracy: 0.9298837645294338
- Precision: 0.9314414866901055
- Recall: 0.9279459594696022
- AUC: 0.979447403984557
- F1: 0.9296904373981703
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Ajay191191/autonlp-Test-530014983
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["Ajay191191/autonlp-data-Test"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 55.10196329868386} | Ajay191191/autonlp-Test-530014983 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:Ajay191191/autonlp-data-Test",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Ajay191191/autonlp-data-Test #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 530014983
- CO2 Emissions (in grams): 55.10196329868386
## Validation Metrics
- Loss: 0.23171618580818176
- Accuracy: 0.9298837645294338
- Precision: 0.9314414866901055
- Recall: 0.9279459594696022
- AUC: 0.979447403984557
- F1: 0.9296904373981703
## Usage
You can use cURL to access this model:
Or Python API:
| [
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 530014983\n- CO2 Emissions (in grams): 55.10196329868386",
"## Validation Metrics\n\n- Loss: 0.23171618580818176\n- Accuracy: 0.9298837645294338\n- Precision: 0.9314414866901055\n- Recall: 0.9279459594696022\n- AUC: 0.979447403984557\n- F1: 0.9296904373981703",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Ajay191191/autonlp-data-Test #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 530014983\n- CO2 Emissions (in grams): 55.10196329868386",
"## Validation Metrics\n\n- Loss: 0.23171618580818176\n- Accuracy: 0.9298837645294338\n- Precision: 0.9314414866901055\n- Recall: 0.9279459594696022\n- AUC: 0.979447403984557\n- F1: 0.9296904373981703",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
60,
42,
94,
16
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Ajay191191/autonlp-data-Test #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 530014983\n- CO2 Emissions (in grams): 55.10196329868386## Validation Metrics\n\n- Loss: 0.23171618580818176\n- Accuracy: 0.9298837645294338\n- Precision: 0.9314414866901055\n- Recall: 0.9279459594696022\n- AUC: 0.979447403984557\n- F1: 0.9296904373981703## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text2text-generation | transformers |
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 16122692
## Validation Metrics
- Loss: 1.1877621412277222
- Rouge1: 42.0713
- Rouge2: 23.3043
- RougeL: 37.3755
- RougeLsum: 37.8961
- Gen Len: 60.7117
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/Ajaykannan6/autonlp-manthan-16122692
``` | {"language": "unk", "tags": "autonlp", "datasets": ["Ajaykannan6/autonlp-data-manthan"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]} | Ajaykannan6/autonlp-manthan-16122692 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autonlp",
"unk",
"dataset:Ajaykannan6/autonlp-data-manthan",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"unk"
] | TAGS
#transformers #pytorch #bart #text2text-generation #autonlp #unk #dataset-Ajaykannan6/autonlp-data-manthan #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 16122692
## Validation Metrics
- Loss: 1.1877621412277222
- Rouge1: 42.0713
- Rouge2: 23.3043
- RougeL: 37.3755
- RougeLsum: 37.8961
- Gen Len: 60.7117
## Usage
You can use cURL to access this model:
| [
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 16122692",
"## Validation Metrics\n\n- Loss: 1.1877621412277222\n- Rouge1: 42.0713\n- Rouge2: 23.3043\n- RougeL: 37.3755\n- RougeLsum: 37.8961\n- Gen Len: 60.7117",
"## Usage\n\nYou can use cURL to access this model:"
] | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #autonlp #unk #dataset-Ajaykannan6/autonlp-data-manthan #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 16122692",
"## Validation Metrics\n\n- Loss: 1.1877621412277222\n- Rouge1: 42.0713\n- Rouge2: 23.3043\n- RougeL: 37.3755\n- RougeLsum: 37.8961\n- Gen Len: 60.7117",
"## Usage\n\nYou can use cURL to access this model:"
] | [
55,
22,
61,
12
] | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #autonlp #unk #dataset-Ajaykannan6/autonlp-data-manthan #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 16122692## Validation Metrics\n\n- Loss: 1.1877621412277222\n- Rouge1: 42.0713\n- Rouge2: 23.3043\n- RougeL: 37.3755\n- RougeLsum: 37.8961\n- Gen Len: 60.7117## Usage\n\nYou can use cURL to access this model:"
] |
question-answering | transformers |
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8695 | 1.0 | 8248 | 0.8813 |
| 0.6333 | 2.0 | 16496 | 0.8042 |
| 0.4372 | 3.0 | 24744 | 0.9492 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.7.1
- Datasets 1.15.1
- Tokenizers 0.10.3
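The card does not include a usage example; the following is a minimal inference sketch, assuming the checkpoint is loaded by its Hub ID `Akari/albert-base-v2-finetuned-squad` through the standard `transformers` question-answering pipeline (the author's actual serving setup is not documented here).
```python
from transformers import pipeline

# Sketch: load the fine-tuned ALBERT checkpoint from the Hub and run
# extractive question answering on a question/context pair.
qa = pipeline("question-answering", model="Akari/albert-base-v2-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This ALBERT model was fine-tuned on the SQuAD v2 dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```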
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2"], "model-index": [{"name": "albert-base-v2-finetuned-squad", "results": []}]} | Akari/albert-base-v2-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us
| albert-base-v2-finetuned-squad
==============================
This model is a fine-tuned version of albert-base-v2 on the squad\_v2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9492
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.7.1
* Datasets 1.15.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.7.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.7.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] | [
48,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.7.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0915 | 1.0 | 2346 | 7.0517 |
| 6.905 | 2.0 | 4692 | 6.8735 |
| 6.8565 | 3.0 | 7038 | 6.8924 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
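As a usage illustration (not part of the original card), here is a minimal fill-mask sketch; it assumes the checkpoint is pulled from the Hub under the ID `Akash7897/bert-base-cased-wikitext2` and queried with the standard `transformers` pipeline.
```python
from transformers import pipeline

# Sketch: the fine-tuned BERT checkpoint is a masked language model,
# so it can be queried through the fill-mask pipeline.
fill = pipeline("fill-mask", model="Akash7897/bert-base-cased-wikitext2")

# BERT-style models use the [MASK] token.
for prediction in fill("The quick brown fox jumps over the lazy [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```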
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-cased-wikitext2", "results": []}]} | Akash7897/bert-base-cased-wikitext2 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| bert-base-cased-wikitext2
=========================
This model is a fine-tuned version of bert-base-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 6.8544
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.6
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] | [
45,
103,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
text-classification | transformers |
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0789
- Matthews Correlation: 0.5222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
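For readers who want to reproduce a comparable run, the hyperparameters above map roughly onto `transformers.TrainingArguments` as sketched below; the output directory and evaluation strategy are assumptions, since the original training script is not included in the card.
```python
from transformers import TrainingArguments

# Sketch of TrainingArguments mirroring the listed hyperparameters.
# output_dir and evaluation_strategy are assumed, not documented in the card.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumed from the per-epoch validation table
)
```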
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1472 | 1.0 | 535 | 0.8407 | 0.4915 |
| 0.1365 | 2.0 | 1070 | 0.9236 | 0.4990 |
| 0.1194 | 3.0 | 1605 | 0.8753 | 0.4953 |
| 0.1313 | 4.0 | 2140 | 0.9684 | 0.5013 |
| 0.0895 | 5.0 | 2675 | 1.0789 | 0.5222 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.522211073949747, "name": "Matthews Correlation"}]}]}]} | Akash7897/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0789
* Matthews Correlation: 0.5222
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.6
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] | [
56,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
text-classification | transformers |
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3010
- Accuracy: 0.9037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1793 | 1.0 | 4210 | 0.3010 | 0.9037 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
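A minimal inference sketch (not part of the original card), assuming the checkpoint is loaded from the Hub as `Akash7897/distilbert-base-uncased-finetuned-sst2` via the `transformers` text-classification pipeline:
```python
from transformers import pipeline

# Sketch: SST-2 is a binary sentiment task; the fine-tuned head returns one of
# two labels (label names depend on the checkpoint's config) with a score.
classifier = pipeline(
    "text-classification",
    model="Akash7897/distilbert-base-uncased-finetuned-sst2",
)

print(classifier("This movie was surprisingly good."))
print(classifier("The plot made no sense at all."))
```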
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "sst2"}, "metrics": [{"type": "accuracy", "value": 0.9036697247706422, "name": "Accuracy"}]}]}]} | Akash7897/distilbert-base-uncased-finetuned-sst2 | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-sst2
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3010
* Accuracy: 0.9037
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.6
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] | [
56,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
text-generation | transformers |
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.558 | 1.0 | 2249 | 6.4672 |
| 6.1918 | 2.0 | 4498 | 6.1970 |
| 6.0019 | 3.0 | 6747 | 6.1079 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
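As an illustrative sketch (an assumed usage pattern, not documented in the card), the fine-tuned GPT-2 checkpoint can be sampled with the `transformers` text-generation pipeline under its Hub ID `Akash7897/gpt2-wikitext2`:
```python
from transformers import pipeline

# Sketch: causal language model fine-tuned with the language-modeling objective,
# so it can be used for free-form text generation.
generator = pipeline("text-generation", model="Akash7897/gpt2-wikitext2")

outputs = generator(
    "The history of natural language processing",
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```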
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-wikitext2", "results": []}]} | Akash7897/gpt2-wikitext2 | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| gpt2-wikitext2
==============
This model is a fine-tuned version of gpt2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 6.1079
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.6
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] | [
49,
103,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
automatic-speech-recognition | transformers |
# Akashpb13/Central_kurdish_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - ckb (Central Kurdish) dataset.
It achieves the following results on the evaluation set (which is 10 percent of the training set merged with the invalidated, reported, other, and dev datasets):
- Loss: 0.348580
- Wer: 0.401147
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Central Kurdish train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv.
Only points with more upvotes than downvotes were kept, and duplicates were removed after concatenating all the datasets provided in Common Voice 7.0.
## Training procedure
To create the training dataset, all available datasets were concatenated and a 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000095637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 200
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|-------|---------------|-----------------|----------|
| 500 | 5.097800 | 2.190326 | 1.001207 |
| 1000 | 0.797500 | 0.331392 | 0.576819 |
| 1500 | 0.405100 | 0.262009 | 0.549049 |
| 2000 | 0.322100 | 0.248178 | 0.479626 |
| 2500 | 0.264600 | 0.258866 | 0.488983 |
| 3000 | 0.228300 | 0.261523 | 0.469665 |
| 3500 | 0.201000 | 0.270135 | 0.451856 |
| 4000 | 0.180900 | 0.279302 | 0.448536 |
| 4500 | 0.163800 | 0.280921 | 0.459704 |
| 5000 | 0.147300 | 0.319249 | 0.471778 |
| 5500 | 0.137600 | 0.289546 | 0.449140 |
| 6000 | 0.132000 | 0.311350 | 0.458195 |
| 6500 | 0.117100 | 0.316726 | 0.432840 |
| 7000 | 0.109200 | 0.302210 | 0.439481 |
| 7500 | 0.104900 | 0.325913 | 0.439481 |
| 8000 | 0.097500 | 0.329446 | 0.431935 |
| 8500 | 0.088600 | 0.345259 | 0.425898 |
| 9000 | 0.084900 | 0.342891 | 0.428313 |
| 9500 | 0.080900 | 0.353081 | 0.424389 |
| 10000 | 0.075600 | 0.347063 | 0.424992 |
| 10500 | 0.072800 | 0.330086 | 0.424691 |
| 11000 | 0.068100 | 0.350658 | 0.421974 |
| 11500 | 0.064700 | 0.342949 | 0.413522 |
| 12000 | 0.061500 | 0.341704 | 0.415334 |
| 12500 | 0.059500 | 0.346279 | 0.411410 |
| 13000 | 0.057400 | 0.349901 | 0.407184 |
| 13500 | 0.056400 | 0.347733 | 0.402656 |
| 14000 | 0.053300 | 0.344899 | 0.405976 |
| 14500 | 0.052900 | 0.346708 | 0.402656 |
| 15000 | 0.050600 | 0.344118 | 0.400845 |
| 15500 | 0.050200 | 0.348396 | 0.402958 |
| 16000 | 0.049800 | 0.348312 | 0.401751 |
| 16500 | 0.051900 | 0.348372 | 0.401147 |
| 17000 | 0.049800 | 0.348580 | 0.401147 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.1
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Central_kurdish_xlsr --dataset mozilla-foundation/common_voice_8_0 --config ckb --split test
```
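The evaluation command above relies on the Common Voice 8.0 test split; as a rough sketch (assuming gated-dataset access with an authentication token already configured), the same split can be loaded directly with the `datasets` library:
```python
from datasets import load_dataset, Audio

# Sketch: load the Central Kurdish (ckb) test split used by the eval command.
# Common Voice 8.0 is gated, so an access token is assumed to be available.
test = load_dataset(
    "mozilla-foundation/common_voice_8_0",
    "ckb",
    split="test",
    use_auth_token=True,
)

# Decode audio at 16 kHz, the sampling rate expected by XLS-R models.
test = test.cast_column("audio", Audio(sampling_rate=16_000))
print(test)
```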
| {"language": ["ckb"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ckb", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Central_kurdish_xlsr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ckb"}, "metrics": [{"type": "wer", "value": 0.36754389884276845, "name": "Test WER"}, {"type": "cer", "value": 0.07827896768334217, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ckb"}, "metrics": [{"type": "wer", "value": 0.36754389884276845, "name": "Test WER"}, {"type": "cer", "value": 0.07827896768334217, "name": "Test CER"}]}]}]} | Akashpb13/Central_kurdish_xlsr | null | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ckb",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ckb"
] | TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ckb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Akashpb13/Central\_kurdish\_xlsr
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset.
It achieves the following results on evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other and dev datasets):
* Loss: 0.348580
* Wer: 0.401147
Model description
-----------------
"facebook/wav2vec2-xls-r-300m" was finetuned.
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
Training data -
Common voice Central Kurdish URL, URL, URL, URL, and URL
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
Training procedure
------------------
For creating the train dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000095637994662983496
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 13
* gradient\_accumulation\_steps: 2
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 200
* num\_epochs: 100
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.0+cu102
* Datasets 1.18.1
* Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000095637994662983496\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.1\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ckb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000095637994662983496\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.1\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
111,
133,
5,
47,
34
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ckb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000095637994662983496\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.1\n* Tokenizers 0.10.3#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] |
automatic-speech-recognition | transformers |
# Akashpb13/Galician_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - gl (Galician) dataset.
It achieves the following results on the evaluation set (which is 10 percent of the training set merged with the invalidated, reported, other, and dev datasets):
- Loss: 0.137096
- Wer: 0.196230
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Galician train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv.
Only points with more upvotes than downvotes were kept, and duplicates were removed after concatenating all the datasets provided in Common Voice 7.0.
## Training procedure
To create the training dataset, all available datasets were concatenated and a 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 5.038100 | 3.035432 | 1.000000 |
| 1000 | 2.180000 | 0.406300 | 0.557964 |
| 1500 | 0.331700 | 0.153797 | 0.262394 |
| 2000 | 0.171600 | 0.145268 | 0.235627 |
| 2500 | 0.125900 | 0.136622 | 0.228087 |
| 3000 | 0.105400 | 0.131650 | 0.224128 |
| 3500 | 0.087600 | 0.141032 | 0.217531 |
| 4000 | 0.078300 | 0.143675 | 0.214515 |
| 4500 | 0.070000 | 0.144607 | 0.208106 |
| 5000 | 0.061500 | 0.135259 | 0.202828 |
| 5500 | 0.055600 | 0.130638 | 0.203959 |
| 6000 | 0.050500 | 0.137416 | 0.202451 |
| 6500 | 0.046600 | 0.140379 | 0.200000 |
| 7000 | 0.040800 | 0.140179 | 0.200377 |
| 7500 | 0.041000 | 0.138089 | 0.196795 |
| 8000 | 0.038400 | 0.136927 | 0.197172 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Galician_xlsr --dataset mozilla-foundation/common_voice_8_0 --config gl --split test
```
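For a quick qualitative check (an assumed usage pattern, not taken from the original card), the checkpoint can also be run through the `transformers` ASR pipeline on a local audio file:
```python
from transformers import pipeline

# Sketch: CTC-based Wav2Vec2 checkpoints work with the ASR pipeline,
# which decodes the file and resamples it to the expected rate internally.
asr = pipeline(
    "automatic-speech-recognition",
    model="Akashpb13/Galician_xlsr",
)

# "sample.wav" is a placeholder path to a short Galician recording.
print(asr("sample.wav")["text"])
```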
| {"language": ["gl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "gl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Galician_xlsr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "kmr"}, "metrics": [{"type": "wer", "value": 0.11308483789555426, "name": "Test WER"}, {"type": "cer", "value": 0.023982371794871796, "name": "Test CER"}, {"type": "wer", "value": 11.31, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "gl"}, "metrics": [{"type": "wer", "value": 0.11308483789555426, "name": "Test WER"}, {"type": "cer", "value": 0.023982371794871796, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "gl"}, "metrics": [{"type": "wer", "value": 39.05, "name": "Test WER"}]}]}]} | Akashpb13/Galician_xlsr | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"gl",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"gl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #gl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| Akashpb13/Galician\_xlsr
========================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other, and dev datasets):
* Loss: 0.137096
* Wer: 0.196230
Model description
-----------------
"facebook/wav2vec2-xls-r-300m" was finetuned.
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
Training data -
Common voice Galician URL, URL, URL, URL, and URL
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
Training procedure
------------------
For creating the training dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000096
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 13
* gradient\_accumulation\_steps: 2
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 100
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.0+cu102
* Datasets 1.18.3
* Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #gl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
103,
123,
5,
47,
34
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #gl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] |
automatic-speech-recognition | transformers |
# Akashpb13/Hausa_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m).
It achieves the following results on the evaluation set (which is 10 percent of the training set merged with the invalidated, reported, other, and dev datasets):
- Loss: 0.275118
- Wer: 0.329955
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Hausa train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv.
Only points with more upvotes than downvotes were kept, and duplicates were removed after concatenating all the datasets provided in Common Voice 7.0.
## Training procedure
To create the training dataset, all available datasets were concatenated and a 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 5.175900 | 2.750914 | 1.000000 |
| 1000 | 1.028700 | 0.338649 | 0.497999 |
| 1500 | 0.332200 | 0.246896 | 0.402241 |
| 2000 | 0.227300 | 0.239640 | 0.395839 |
| 2500 | 0.175000 | 0.239577 | 0.373966 |
| 3000 | 0.140400 | 0.243272 | 0.356095 |
| 3500 | 0.119200 | 0.263761 | 0.365164 |
| 4000 | 0.099300 | 0.265954 | 0.353428 |
| 4500 | 0.084400 | 0.276367 | 0.349693 |
| 5000 | 0.073700 | 0.282631 | 0.343825 |
| 5500 | 0.068000 | 0.282344 | 0.341158 |
| 6000 | 0.064500 | 0.281591 | 0.342491 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Hausa_xlsr --dataset mozilla-foundation/common_voice_8_0 --config ha --split test
```
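Below is a lower-level inference sketch (assumed usage, not part of the original card) that loads the processor and CTC model directly; the audio path is a placeholder and the waveform is expected at 16 kHz.
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Akashpb13/Hausa_xlsr"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder; XLS-R expects 16 kHz mono audio.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```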
| {"language": ["ha"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "ha", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Hausa_xlsr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ha"}, "metrics": [{"type": "wer", "value": 0.20614541257934219, "name": "Test WER"}, {"type": "cer", "value": 0.04358048053214061, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ha"}, "metrics": [{"type": "wer", "value": 0.20614541257934219, "name": "Test WER"}, {"type": "cer", "value": 0.04358048053214061, "name": "Test CER"}]}]}]} | Akashpb13/Hausa_xlsr | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"ha",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ha"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #ha #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Akashpb13/Hausa\_xlsr
=====================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other, and dev datasets):
* Loss: 0.275118
* Wer: 0.329955
Model description
-----------------
"facebook/wav2vec2-xls-r-300m" was finetuned.
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
Training data -
Common voice Hausa URL, URL, URL, URL and URL
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
Training procedure
------------------
For creating the training dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000096
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 13
* gradient\_accumulation\_steps: 2
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.0+cu102
* Datasets 1.18.3
* Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #ha #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
106,
123,
5,
47,
34
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #ha #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] |
automatic-speech-recognition | transformers |
# Akashpb13/Kabyle_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - kab (Kabyle) dataset.
It achieves the following results on the evaluation set (which is 10 percent of the training set merged with the dev dataset):
- Loss: 0.159032
- Wer: 0.187934
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Kabyle train.tsv. Only 50,000 records were randomly sampled for training because of the dataset's large size.
Only points with more upvotes than downvotes were kept, and duplicates were removed after concatenating all the datasets provided in Common Voice 7.0.
## Training procedure
To create the training dataset, all available datasets were concatenated and a 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 8
- seed: 13
- gradient_accumulation_steps: 4
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|-------|---------------|-----------------|----------|
| 500 | 7.199800 | 3.130564 | 1.000000 |
| 1000 | 1.570200 | 0.718097 | 0.734682 |
| 1500 | 0.850800 | 0.524227 | 0.640532 |
| 2000 | 0.712200 | 0.468694 | 0.603454 |
| 2500 | 0.651200 | 0.413833 | 0.573025 |
| 3000 | 0.603100 | 0.403680 | 0.552847 |
| 3500 | 0.553300 | 0.372638 | 0.541719 |
| 4000 | 0.537200 | 0.353759 | 0.531191 |
| 4500 | 0.506300 | 0.359109 | 0.519601 |
| 5000 | 0.479600 | 0.343937 | 0.511336 |
| 5500 | 0.479800 | 0.338214 | 0.503948 |
| 6000 | 0.449500 | 0.332600 | 0.495221 |
| 6500 | 0.439200 | 0.323905 | 0.492635 |
| 7000 | 0.434900 | 0.310417 | 0.484555 |
| 7500 | 0.403200 | 0.311247 | 0.483262 |
| 8000 | 0.401500 | 0.295637 | 0.476566 |
| 8500 | 0.397000 | 0.301321 | 0.471672 |
| 9000 | 0.371600 | 0.295639 | 0.468440 |
| 9500 | 0.370700 | 0.294039 | 0.468902 |
| 10000 | 0.364900 | 0.291195 | 0.468440 |
| 10500 | 0.348300 | 0.284898 | 0.461098 |
| 11000 | 0.350100 | 0.281764 | 0.459805 |
| 11500 | 0.336900 | 0.291022 | 0.461606 |
| 12000 | 0.330700 | 0.280467 | 0.455234 |
| 12500 | 0.322500 | 0.271714 | 0.452694 |
| 13000 | 0.307400 | 0.289519 | 0.455465 |
| 13500 | 0.309300 | 0.281922 | 0.451217 |
| 14000 | 0.304800 | 0.271514 | 0.452186 |
| 14500 | 0.288100 | 0.286801 | 0.446830 |
| 15000 | 0.293200 | 0.276309 | 0.445399 |
| 15500 | 0.289800 | 0.287188 | 0.446230 |
| 16000 | 0.274800 | 0.286406 | 0.441243 |
| 16500 | 0.271700 | 0.284754 | 0.441520 |
| 17000 | 0.262500 | 0.275431 | 0.442167 |
| 17500 | 0.255500 | 0.276575 | 0.439858 |
| 18000 | 0.260200 | 0.269911 | 0.435425 |
| 18500 | 0.250600 | 0.270519 | 0.434686 |
| 19000 | 0.243300 | 0.267655 | 0.437826 |
| 19500 | 0.240600 | 0.277109 | 0.431731 |
| 20000 | 0.237200 | 0.266622 | 0.433994 |
| 20500 | 0.231300 | 0.273015 | 0.428868 |
| 21000 | 0.227200 | 0.263024 | 0.430161 |
| 21500 | 0.220400 | 0.272880 | 0.429607 |
| 22000 | 0.218600 | 0.272340 | 0.426883 |
| 22500 | 0.213100 | 0.277066 | 0.428407 |
| 23000 | 0.205000 | 0.278404 | 0.424020 |
| 23500 | 0.200900 | 0.270877 | 0.418987 |
| 24000 | 0.199000 | 0.289120 | 0.425821 |
| 24500 | 0.196100 | 0.275831 | 0.424066 |
| 25000 | 0.191100 | 0.282822 | 0.421850 |
| 25500 | 0.190100 | 0.275820 | 0.418248 |
| 26000 | 0.178800 | 0.279208 | 0.419125 |
| 26500 | 0.183100 | 0.271464 | 0.419218 |
| 27000 | 0.177400 | 0.280869 | 0.419680 |
| 27500 | 0.171800 | 0.279593 | 0.414924 |
| 28000 | 0.172900 | 0.276949 | 0.417648 |
| 28500 | 0.164900 | 0.283491 | 0.417786 |
| 29000 | 0.164800 | 0.283122 | 0.416078 |
| 29500 | 0.165500 | 0.281969 | 0.415801 |
| 30000 | 0.163800 | 0.283319 | 0.412753 |
| 30500 | 0.153500 | 0.285702 | 0.414046 |
| 31000 | 0.156500 | 0.285041 | 0.412615 |
| 31500 | 0.150900 | 0.284336 | 0.413723 |
| 32000 | 0.151800 | 0.285922 | 0.412292 |
| 32500 | 0.149200 | 0.289461 | 0.412153 |
| 33000 | 0.145400 | 0.291322 | 0.409567 |
| 33500 | 0.145600 | 0.294361 | 0.409614 |
| 34000 | 0.144200 | 0.290686 | 0.409059 |
| 34500 | 0.143400 | 0.289474 | 0.409844 |
| 35000 | 0.143500 | 0.290340 | 0.408367 |
| 35500 | 0.143200 | 0.289581 | 0.407351 |
| 36000 | 0.138400 | 0.292782 | 0.408736 |
| 36500 | 0.137900 | 0.289108 | 0.408044 |
| 37000 | 0.138200 | 0.292127 | 0.407166 |
| 37500 | 0.134600 | 0.291797 | 0.408413 |
| 38000 | 0.139800 | 0.290056 | 0.408090 |
| 38500 | 0.136500 | 0.291198 | 0.408090 |
| 39000 | 0.137700 | 0.289696 | 0.408044 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Kabyle_xlsr --dataset mozilla-foundation/common_voice_8_0 --config kab --split test
```
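For a quick transcription check (not part of the original card), the checkpoint can also be loaded through the `transformers` ASR pipeline. The audio path below is a placeholder; the recording should ideally be sampled at 16 kHz, and decoding arbitrary file formats assumes ffmpeg is available.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="Akashpb13/Kabyle_xlsr")

# "sample.wav" is a placeholder path to a Kabyle speech recording.
print(asr("sample.wav")["text"])
```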
| {"language": ["kab"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sw", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Kabyle_xlsr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "kab"}, "metrics": [{"type": "wer", "value": 0.3188425282720088, "name": "Test WER"}, {"type": "cer", "value": 0.09443079928558358, "name": "Test CER"}]}]}]} | Akashpb13/Kabyle_xlsr | null | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"sw",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"kab",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"kab"
] | TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sw #robust-speech-event #model_for_talk #hf-asr-leaderboard #kab #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| Akashpb13/Kabyle\_xlsr
======================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with dev datasets):
* Loss: 0.159032
* Wer: 0.187934
Model description
-----------------
"facebook/wav2vec2-xls-r-300m" was finetuned.
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
Training data -
Common voice Kabyle URL. Only 50,000 records were sampled randomly and trained due to huge size of dataset.
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
Training procedure
------------------
For creating the training dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000096
* train\_batch\_size: 8
* seed: 13
* gradient\_accumulation\_steps: 4
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.0+cu102
* Datasets 1.18.3
* Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 8\n* seed: 13\n* gradient\\_accumulation\\_steps: 4\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sw #robust-speech-event #model_for_talk #hf-asr-leaderboard #kab #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 8\n* seed: 13\n* gradient\\_accumulation\\_steps: 4\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
109,
112,
5,
47,
34
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sw #robust-speech-event #model_for_talk #hf-asr-leaderboard #kab #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 8\n* seed: 13\n* gradient\\_accumulation\\_steps: 4\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] |
automatic-speech-recognition | transformers |
# Akashpb13/Swahili_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - sw dataset.
It achieves the following results on the evaluation set (10 percent of the training data merged with the dev dataset):
- Loss: 0.159032
- Wer: 0.187934
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Swahili train.tsv and dev.tsv.
Only data points where upvotes exceeded downvotes were kept, and duplicates were removed after concatenating all the datasets provided in Common Voice 7.0.
## Training procedure
For creating the training dataset, all available datasets were appended and a 90-10 train-evaluation split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 80
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 4.810000 | 2.168847 | 0.995747 |
| 1000 | 0.564200 | 0.209411 | 0.303485 |
| 1500 | 0.217700 | 0.153959 | 0.239534 |
| 2000 | 0.150700 | 0.139901 | 0.216327 |
| 2500 | 0.119400 | 0.137543 | 0.208828 |
| 3000 | 0.099500 | 0.140921 | 0.203045 |
| 3500 | 0.087100 | 0.138835 | 0.199649 |
| 4000 | 0.074600 | 0.141297 | 0.195844 |
| 4500 | 0.066600 | 0.148560 | 0.194127 |
| 5000 | 0.060400 | 0.151214 | 0.194388 |
| 5500 | 0.054400 | 0.156072 | 0.192187 |
| 6000 | 0.051100 | 0.154726 | 0.190322 |
| 6500 | 0.048200 | 0.159847 | 0.189538 |
| 7000 | 0.046400 | 0.158727 | 0.188307 |
| 7500 | 0.046500 | 0.159032 | 0.187934 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Swahili_xlsr --dataset mozilla-foundation/common_voice_8_0 --config sw --split test
```
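As a rough inference sketch (not from the original card), the model can also be used directly with `Wav2Vec2Processor` and `Wav2Vec2ForCTC`; the audio path is a placeholder and the resampling step assumes the recording is not already at 16 kHz.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Akashpb13/Swahili_xlsr"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a recording (placeholder path) and resample it to the 16 kHz rate the model expects.
speech, sample_rate = torchaudio.load("sample.wav")
speech = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16_000)(speech.squeeze(0))

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# Greedy CTC decoding of the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```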
| {"language": ["sw"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "sw"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Swahili_xlsr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sw"}, "metrics": [{"type": "wer", "value": 0.11763625454589981, "name": "Test WER"}, {"type": "cer", "value": 0.02884228669922436, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "kmr"}, "metrics": [{"type": "wer", "value": 0.11763625454589981, "name": "Test WER"}, {"type": "cer", "value": 0.02884228669922436, "name": "Test CER"}]}]}]} | Akashpb13/Swahili_xlsr | null | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"sw",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sw"
] | TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sw #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Akashpb13/Swahili\_xlsr
=======================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with dev datasets):
* Loss: 0.159032
* Wer: 0.187934
Model description
-----------------
"facebook/wav2vec2-xls-r-300m" was finetuned.
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
Training data -
Common voice Hausa URL and URL
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
Training procedure
------------------
For creating the training dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000096
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 13
* gradient\_accumulation\_steps: 2
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 80
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.0+cu102
* Datasets 1.18.3
* Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 80\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sw #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 80\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
110,
123,
5,
47,
34
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sw #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 80\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] |
automatic-speech-recognition | transformers |
# Akashpb13/xlsr_hungarian_new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - hu dataset.
It achieves the following results on the evaluation set (10 percent of the training data merged with the invalidated, reported, other, and dev datasets):
- Loss: 0.197464
- Wer: 0.330094
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Hungarian train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv.
Only data points where upvotes exceeded downvotes were kept, and duplicates were removed after concatenating all the datasets provided in Common Voice 7.0.
## Training procedure
For creating the training dataset, all available datasets were appended and a 90-10 train-evaluation split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000095637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 16
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 4.785300 | 0.952295 | 0.796236 |
| 1000 | 0.535800 | 0.217474 | 0.381613 |
| 1500 | 0.258400 | 0.205524 | 0.345056 |
| 2000 | 0.202800 | 0.198680 | 0.336264 |
| 2500 | 0.182700 | 0.197464 | 0.330094 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/xlsr_hungarian_new --dataset mozilla-foundation/common_voice_8_0 --config hu --split test
```
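Roughly, that evaluation amounts to transcribing the Common Voice test split and computing WER. The sketch below is an illustrative approximation rather than the actual eval.py: it skips the text normalisation that script performs, only samples 100 test utterances, and assumes an authenticated Hugging Face token because Common Voice 8.0 is gated.

```python
from datasets import Audio, load_dataset, load_metric
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Akashpb13/xlsr_hungarian_new")
wer_metric = load_metric("wer")

# Common Voice 8.0 is gated; use_auth_token assumes you have accepted its terms.
ds = load_dataset("mozilla-foundation/common_voice_8_0", "hu", split="test", use_auth_token=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000)).select(range(100))  # small sample for a quick check

predictions = [asr(sample["audio"]["array"])["text"].lower() for sample in ds]
references = [sample["sentence"].lower() for sample in ds]
print("WER:", wer_metric.compute(predictions=predictions, references=references))
```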
| {"language": ["hu"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hu", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/xlsr_hungarian_new", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "hu"}, "metrics": [{"type": "wer", "value": 0.2851621517163838, "name": "Test WER"}, {"type": "cer", "value": 0.06112982522287432, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "hu"}, "metrics": [{"type": "wer", "value": 0.2851621517163838, "name": "Test WER"}, {"type": "cer", "value": 0.06112982522287432, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "hu"}, "metrics": [{"type": "wer", "value": 47.15, "name": "Test WER"}]}]}]} | Akashpb13/xlsr_hungarian_new | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"hu",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hu"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hu #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| Akashpb13/xlsr\_hungarian\_new
==============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - hu dataset.
It achieves the following results on evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other and dev datasets):
* Loss: 0.197464
* Wer: 0.330094
Model description
-----------------
"facebook/wav2vec2-xls-r-300m" was finetuned.
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
Training data -
Common voice hungarian URL, URL, URL, URL, and URL
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
Training procedure
------------------
For creating the train dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000095637994662983496
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 13
* gradient\_accumulation\_steps: 16
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 100
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.0+cu102
* Datasets 1.18.3
* Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000095637994662983496\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 16\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hu #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000095637994662983496\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 16\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
102,
133,
5,
47,
34
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hu #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000095637994662983496\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 16\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] |
automatic-speech-recognition | transformers |
# Akashpb13/xlsr_kurmanji_kurdish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - kmr dataset.
It achieves the following results on the evaluation set (10 percent of the training data merged with the invalidated, reported, other, and dev datasets):
- Loss: 0.292389
- Wer: 0.388585
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Kurmanji Kurdish train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv.
Only data points where upvotes exceeded downvotes were kept, and duplicates were removed after concatenating all the datasets provided in Common Voice 7.0.
## Training procedure
For creating the training dataset, all available datasets were appended and a 90-10 train-evaluation split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 16
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 200
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 4.382500 | 3.183725 | 1.000000 |
| 400 | 2.870200 | 0.996664 | 0.781117 |
| 600 | 0.609900 | 0.333755 | 0.445052 |
| 800 | 0.326800 | 0.305729 | 0.403157 |
| 1000 | 0.255000 | 0.290734 | 0.391621 |
| 1200 | 0.226300 | 0.292389 | 0.388585 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.1
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/xlsr_kurmanji_kurdish --dataset mozilla-foundation/common_voice_8_0 --config kmr --split test
```
| {"language": ["kmr", "ku"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "kmr", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/xlsr_kurmanji_kurdish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "kmr"}, "metrics": [{"type": "wer", "value": 0.33073206986250464, "name": "Test WER"}, {"type": "cer", "value": 0.08035244447163924, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "kmr"}, "metrics": [{"type": "wer", "value": 0.33073206986250464, "name": "Test WER"}, {"type": "cer", "value": 0.08035244447163924, "name": "Test CER"}]}]}]} | Akashpb13/xlsr_kurmanji_kurdish | null | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"kmr",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"ku",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"kmr",
"ku"
] | TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #kmr #robust-speech-event #model_for_talk #hf-asr-leaderboard #ku #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Akashpb13/xlsr\_kurmanji\_kurdish
=================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other, and dev datasets):
* Loss: 0.292389
* Wer: 0.388585
Model description
-----------------
"facebook/wav2vec2-xls-r-300m" was finetuned.
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
Training data -
Common voice Kurmanji Kurdish URL, URL, URL, URL, and URL
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
Training procedure
------------------
For creating the training dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000096
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 13
* gradient\_accumulation\_steps: 16
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 200
* num\_epochs: 100
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.0+cu102
* Datasets 1.18.1
* Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 16\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.1\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #kmr #robust-speech-event #model_for_talk #hf-asr-leaderboard #ku #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 16\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.1\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] | [
113,
123,
5,
47,
34
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #kmr #robust-speech-event #model_for_talk #hf-asr-leaderboard #ku #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 16\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.1\n* Tokenizers 0.10.3#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'"
] |
automatic-speech-recognition | transformers | # Wav2Vec2-Large-XLSR-53-Maltese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Maltese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "Akashpb13/xlsr_maltese_wav2vec2"
device = "cuda"
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\)\\(\\*)]'
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "mt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
# Load each audio file, resample it from 48 kHz to 16 kHz and normalise the transcription.
def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    return batch
ds = ds.map(map_to_array)
# Run batched inference and decode the greedy (argmax) CTC predictions.
def map_to_pred(batch):
    features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Test Result**: 29.42 %
| {"language": "mt", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Maltese by Akash PB", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice mt", "type": "common_voice", "args": {}}, "metrics": [{"type": "wer", "value": 29.42, "name": "Test WER"}]}]}]} | Akashpb13/xlsr_maltese_wav2vec2 | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"mt",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"mt"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
| # Wav2Vec2-Large-XLSR-53-Maltese
Fine-tuned facebook/wav2vec2-large-xlsr-53 in Maltese using the Common Voice
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
Test Result: 29.42 %
| [
"# Wav2Vec2-Large-XLSR-53-Maltese\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Maltese using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:\n\nTest Result: 29.42 %"
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Maltese\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Maltese using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:\n\nTest Result: 29.42 %"
] | [
66,
58,
25
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n# Wav2Vec2-Large-XLSR-53-Maltese\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Maltese using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.## Usage\nThe model can be used directly (without a language model) as follows:\n\nTest Result: 29.42 %"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Akjder/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model"
] |
image-classification | transformers |
# BEiT for Face Mask Detection
BEiT model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.
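For inference, the fine-tuned checkpoint can be loaded with the standard `transformers` classes. The sketch below is not from the original card: the image path is a placeholder, and it assumes the checkpoint ships the usual feature-extractor configuration and `id2label` mapping.

```python
from PIL import Image
from transformers import AutoFeatureExtractor, BeitForImageClassification

model_id = "AkshatSurolia/BEiT-FaceMask-Finetuned"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = BeitForImageClassification.from_pretrained(model_id)

image = Image.open("face.jpg")  # placeholder path to an input image
inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# Map the highest-scoring logit back to its class name (e.g. mask / no mask).
print(model.config.id2label[logits.argmax(-1).item()])
```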
## Training Metrics
epoch = 0.55
total_flos = 576468516GF
train_loss = 0.151
train_runtime = 0:58:16.56
train_samples_per_second = 16.505
train_steps_per_second = 1.032
---
## Evaluation Metrics
epoch = 0.55
eval_accuracy = 0.975
eval_loss = 0.0803
eval_runtime = 0:03:13.02
eval_samples_per_second = 18.629
eval_steps_per_second = 2.331 | {"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]} | AkshatSurolia/BEiT-FaceMask-Finetuned | null | [
"transformers",
"pytorch",
"beit",
"image-classification",
"dataset:Face-Mask18K",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #beit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BEiT for Face Mask Detection
BEiT model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.
## Training Metrics
epoch = 0.55
total_flos = 576468516GF
train_loss = 0.151
train_runtime = 0:58:16.56
train_samples_per_second = 16.505
train_steps_per_second = 1.032
---
## Evaluation Metrics
epoch = 0.55
eval_accuracy = 0.975
eval_loss = 0.0803
eval_runtime = 0:03:13.02
eval_samples_per_second = 18.629
eval_steps_per_second = 2.331 | [
"# BEiT for Face Mask Detection\r\n\r\nBEiT model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei.",
"## Model description\r\n\r\nThe BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.\r\n\r\nImages are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.\r\n\r\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.",
"## Training Metrics\r\n epoch = 0.55\r\n total_flos = 576468516GF\r\n train_loss = 0.151\r\n train_runtime = 0:58:16.56\r\n train_samples_per_second = 16.505\r\n train_steps_per_second = 1.032\r\n\r\n---",
"## Evaluation Metrics\r\n epoch = 0.55\r\n eval_accuracy = 0.975\r\n eval_loss = 0.0803\r\n eval_runtime = 0:03:13.02\r\n eval_samples_per_second = 18.629\r\n eval_steps_per_second = 2.331"
] | [
"TAGS\n#transformers #pytorch #beit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BEiT for Face Mask Detection\r\n\r\nBEiT model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei.",
"## Model description\r\n\r\nThe BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.\r\n\r\nImages are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.\r\n\r\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.",
"## Training Metrics\r\n epoch = 0.55\r\n total_flos = 576468516GF\r\n train_loss = 0.151\r\n train_runtime = 0:58:16.56\r\n train_samples_per_second = 16.505\r\n train_steps_per_second = 1.032\r\n\r\n---",
"## Evaluation Metrics\r\n epoch = 0.55\r\n eval_accuracy = 0.975\r\n eval_loss = 0.0803\r\n eval_runtime = 0:03:13.02\r\n eval_samples_per_second = 18.629\r\n eval_steps_per_second = 2.331"
] | [
45,
70,
385,
66,
67
] | [
"TAGS\n#transformers #pytorch #beit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# BEiT for Face Mask Detection\r\n\r\nBEiT model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei.## Model description\r\n\r\nThe BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.\r\n\r\nImages are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.\r\n\r\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.## Training Metrics\r\n epoch = 0.55\r\n total_flos = 576468516GF\r\n train_loss = 0.151\r\n train_runtime = 0:58:16.56\r\n train_samples_per_second = 16.505\r\n train_steps_per_second = 1.032\r\n\r\n---## Evaluation Metrics\r\n epoch = 0.55\r\n eval_accuracy = 0.975\r\n eval_loss = 0.0803\r\n eval_runtime = 0:03:13.02\r\n eval_samples_per_second = 18.629\r\n eval_steps_per_second = 2.331"
] |
image-classification | transformers |
# ConvNeXt for Face Mask Detection
ConvNeXt model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al.
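A minimal usage sketch (not part of the original card), assuming a `transformers` version with ConvNeXt support; the image path is a placeholder.

```python
from transformers import pipeline

# Image-classification pipeline built on the fine-tuned ConvNeXt checkpoint.
classifier = pipeline("image-classification", model="AkshatSurolia/ConvNeXt-FaceMask-Finetuned")
print(classifier("face.jpg"))  # placeholder image path; returns labels with confidence scores
```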
## Training Metrics
epoch = 3.54
total_flos = 1195651761GF
train_loss = 0.0079
train_runtime = 1:08:20.25
train_samples_per_second = 14.075
train_steps_per_second = 0.22
---
## Evaluation Metrics
epoch = 3.54
eval_accuracy = 0.9961
eval_loss = 0.0151
eval_runtime = 0:01:23.47
eval_samples_per_second = 43.079
eval_steps_per_second = 5.391 | {"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]} | AkshatSurolia/ConvNeXt-FaceMask-Finetuned | null | [
"transformers",
"pytorch",
"safetensors",
"convnext",
"image-classification",
"dataset:Face-Mask18K",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #convnext #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# ConvNeXt for Face Mask Detection
ConvNeXt model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al.
## Training Metrics
epoch = 3.54
total_flos = 1195651761GF
train_loss = 0.0079
train_runtime = 1:08:20.25
train_samples_per_second = 14.075
train_steps_per_second = 0.22
---
## Evaluation Metrics
epoch = 3.54
eval_accuracy = 0.9961
eval_loss = 0.0151
eval_runtime = 0:01:23.47
eval_samples_per_second = 43.079
eval_steps_per_second = 5.391 | [
"# ConvNeXt for Face Mask Detection\r\n\r\nConvNeXt model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al.",
"## Training Metrics\r\n epoch = 3.54\r\n total_flos = 1195651761GF\r\n train_loss = 0.0079\r\n train_runtime = 1:08:20.25\r\n train_samples_per_second = 14.075\r\n train_steps_per_second = 0.22\r\n\r\n---",
"## Evaluation Metrics\r\n epoch = 3.54\r\n eval_accuracy = 0.9961\r\n eval_loss = 0.0151\r\n eval_runtime = 0:01:23.47\r\n eval_samples_per_second = 43.079\r\n eval_steps_per_second = 5.391"
] | [
"TAGS\n#transformers #pytorch #safetensors #convnext #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# ConvNeXt for Face Mask Detection\r\n\r\nConvNeXt model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al.",
"## Training Metrics\r\n epoch = 3.54\r\n total_flos = 1195651761GF\r\n train_loss = 0.0079\r\n train_runtime = 1:08:20.25\r\n train_samples_per_second = 14.075\r\n train_steps_per_second = 0.22\r\n\r\n---",
"## Evaluation Metrics\r\n epoch = 3.54\r\n eval_accuracy = 0.9961\r\n eval_loss = 0.0151\r\n eval_runtime = 0:01:23.47\r\n eval_samples_per_second = 43.079\r\n eval_steps_per_second = 5.391"
] | [
56,
74,
69,
68
] | [
"TAGS\n#transformers #pytorch #safetensors #convnext #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# ConvNeXt for Face Mask Detection\r\n\r\nConvNeXt model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al.## Training Metrics\r\n epoch = 3.54\r\n total_flos = 1195651761GF\r\n train_loss = 0.0079\r\n train_runtime = 1:08:20.25\r\n train_samples_per_second = 14.075\r\n train_steps_per_second = 0.22\r\n\r\n---## Evaluation Metrics\r\n epoch = 3.54\r\n eval_accuracy = 0.9961\r\n eval_loss = 0.0151\r\n eval_runtime = 0:01:23.47\r\n eval_samples_per_second = 43.079\r\n eval_steps_per_second = 5.391"
] |
image-classification | transformers |
# Distilled Data-efficient Image Transformer for Face Mask Detection
Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al.
## Model description
This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.
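A hedged inference sketch (not from the original card) using the generic Auto classes; the image path is a placeholder and the label names are assumed to come from the checkpoint's `id2label` mapping.

```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

model_id = "AkshatSurolia/DeiT-FaceMask-Finetuned"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("face.jpg")  # placeholder path
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]

# Print each class with its predicted probability.
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 4))
```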
## Training Metrics
epoch = 2.0
total_flos = 2078245655GF
train_loss = 0.0438
train_runtime = 1:37:16.87
train_samples_per_second = 9.887
train_steps_per_second = 0.309
---
## Evaluation Metrics
epoch = 2.0
eval_accuracy = 0.9922
eval_loss = 0.0271
eval_runtime = 0:03:17.36
eval_samples_per_second = 18.22
eval_steps_per_second = 2.28 | {"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]} | AkshatSurolia/DeiT-FaceMask-Finetuned | null | [
"transformers",
"pytorch",
"deit",
"image-classification",
"dataset:Face-Mask18K",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #deit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Distilled Data-efficient Image Transformer for Face Mask Detection
Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al.
## Model description
This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.
## Training Metrics
epoch = 2.0
total_flos = 2078245655GF
train_loss = 0.0438
train_runtime = 1:37:16.87
train_samples_per_second = 9.887
train_steps_per_second = 0.309
---
## Evaluation Metrics
epoch = 2.0
eval_accuracy = 0.9922
eval_loss = 0.0271
eval_runtime = 0:03:17.36
eval_samples_per_second = 18.22
eval_steps_per_second = 2.28 | [
"# Distilled Data-efficient Image Transformer for Face Mask Detection\r\n\r\nDistilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al.",
"## Model description\r\n\r\nThis model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.\r\n\r\nImages are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.",
"## Training Metrics\r\n epoch = 2.0\r\n total_flos = 2078245655GF\r\n train_loss = 0.0438\r\n train_runtime = 1:37:16.87\r\n train_samples_per_second = 9.887\r\n train_steps_per_second = 0.309\r\n\r\n---",
"## Evaluation Metrics\r\n epoch = 2.0\r\n eval_accuracy = 0.9922\r\n eval_loss = 0.0271\r\n eval_runtime = 0:03:17.36\r\n eval_samples_per_second = 18.22\r\n eval_steps_per_second = 2.28"
] | [
"TAGS\n#transformers #pytorch #deit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Distilled Data-efficient Image Transformer for Face Mask Detection\r\n\r\nDistilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al.",
"## Model description\r\n\r\nThis model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.\r\n\r\nImages are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.",
"## Training Metrics\r\n epoch = 2.0\r\n total_flos = 2078245655GF\r\n train_loss = 0.0438\r\n train_runtime = 1:37:16.87\r\n train_samples_per_second = 9.887\r\n train_steps_per_second = 0.309\r\n\r\n---",
"## Evaluation Metrics\r\n epoch = 2.0\r\n eval_accuracy = 0.9922\r\n eval_loss = 0.0271\r\n eval_runtime = 0:03:17.36\r\n eval_samples_per_second = 18.22\r\n eval_steps_per_second = 2.28"
] | [
50,
89,
110,
67,
66
] | [
"TAGS\n#transformers #pytorch #deit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# Distilled Data-efficient Image Transformer for Face Mask Detection\r\n\r\nDistilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al.## Model description\r\n\r\nThis model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.\r\n\r\nImages are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.## Training Metrics\r\n epoch = 2.0\r\n total_flos = 2078245655GF\r\n train_loss = 0.0438\r\n train_runtime = 1:37:16.87\r\n train_samples_per_second = 9.887\r\n train_steps_per_second = 0.309\r\n\r\n---## Evaluation Metrics\r\n epoch = 2.0\r\n eval_accuracy = 0.9922\r\n eval_loss = 0.0271\r\n eval_runtime = 0:03:17.36\r\n eval_samples_per_second = 18.22\r\n eval_steps_per_second = 2.28"
] |
text-classification | transformers |
# Clinical BERT for ICD-10 Prediction
The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries.
---
## How to use the model
Load the model via the transformers library:
from transformers import AutoTokenizer, BertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
model = BertForSequenceClassification.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
config = model.config
Run the model with clinical diagnosis text:
text = "subarachnoid hemorrhage scalp laceration service: surgery major surgical or invasive"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
Return the Top-5 predicted ICD-10 codes:
results = output.logits.detach().cpu().numpy()[0].argsort()[::-1][:5]
return [ config.id2label[ids] for ids in results] | {"license": "apache-2.0", "tags": ["text-classification"]} | AkshatSurolia/ICD-10-Code-Prediction | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# Clinical BERT for ICD-10 Prediction
The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries.
---
## How to use the model
Load the model via the transformers library:
from transformers import AutoTokenizer, BertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
model = BertForSequenceClassification.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
config = URL
Run the model with clinical diagnosis text:
text = "subarachnoid hemorrhage scalp laceration service: surgery major surgical or invasive"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(encoded_input)
Return the Top-5 predicted ICD-10 codes:
results = URL().cpu().numpy()[0].argsort()[::-1][:5]
return [ config.id2label[ids] for ids in results] | [
"# Clinical BERT for ICD-10 Prediction\n\nThe Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries. \n \n---",
"## How to use the model\n\nLoad the model via the transformers library:\n\n from transformers import AutoTokenizer, BertForSequenceClassification\n tokenizer = AutoTokenizer.from_pretrained(\"AkshatSurolia/ICD-10-Code-Prediction\")\n model = BertForSequenceClassification.from_pretrained(\"AkshatSurolia/ICD-10-Code-Prediction\")\n config = URL\n\nRun the model with clinical diagonosis text:\n\n text = \"subarachnoid hemorrhage scalp laceration service: surgery major surgical or invasive\"\n encoded_input = tokenizer(text, return_tensors='pt')\n output = model(encoded_input)\n\nReturn the Top-5 predicted ICD-10 codes:\n\n results = URL().cpu().numpy()[0].argsort()[::-1][:5]\n return [ config.id2label[ids] for ids in results]"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# Clinical BERT for ICD-10 Prediction\n\nThe Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries. \n \n---",
"## How to use the model\n\nLoad the model via the transformers library:\n\n from transformers import AutoTokenizer, BertForSequenceClassification\n tokenizer = AutoTokenizer.from_pretrained(\"AkshatSurolia/ICD-10-Code-Prediction\")\n model = BertForSequenceClassification.from_pretrained(\"AkshatSurolia/ICD-10-Code-Prediction\")\n config = URL\n\nRun the model with clinical diagonosis text:\n\n text = \"subarachnoid hemorrhage scalp laceration service: surgery major surgical or invasive\"\n encoded_input = tokenizer(text, return_tensors='pt')\n output = model(encoded_input)\n\nReturn the Top-5 predicted ICD-10 codes:\n\n results = URL().cpu().numpy()[0].argsort()[::-1][:5]\n return [ config.id2label[ids] for ids in results]"
] | [
35,
89,
226
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #license-apache-2.0 #endpoints_compatible #has_space #region-us \n# Clinical BERT for ICD-10 Prediction\n\nThe Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries. \n \n---## How to use the model\n\nLoad the model via the transformers library:\n\n from transformers import AutoTokenizer, BertForSequenceClassification\n tokenizer = AutoTokenizer.from_pretrained(\"AkshatSurolia/ICD-10-Code-Prediction\")\n model = BertForSequenceClassification.from_pretrained(\"AkshatSurolia/ICD-10-Code-Prediction\")\n config = URL\n\nRun the model with clinical diagonosis text:\n\n text = \"subarachnoid hemorrhage scalp laceration service: surgery major surgical or invasive\"\n encoded_input = tokenizer(text, return_tensors='pt')\n output = model(encoded_input)\n\nReturn the Top-5 predicted ICD-10 codes:\n\n results = URL().cpu().numpy()[0].argsort()[::-1][:5]\n return [ config.id2label[ids] for ids in results]"
] |
image-classification | transformers |
# Vision Transformer (ViT) for Face Mask Detection
Vision Transformer (ViT) model pre-trained and fine-tuned on the Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not provide any fine-tuned heads, as these were zeroed by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
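To make the last paragraph concrete, the sketch below extracts the [CLS] representation from the generic ImageNet-21k ViT backbone and attaches a fresh linear head. The backbone id, image path, and two-class head are illustrative assumptions, not part of this card's training code.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTModel

backbone_id = "google/vit-base-patch16-224-in21k"     # generic ImageNet-21k ViT backbone
processor = AutoImageProcessor.from_pretrained(backbone_id)
backbone = ViTModel.from_pretrained(backbone_id)

image = Image.open("face.jpg")                        # placeholder image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    hidden = backbone(**inputs).last_hidden_state     # (1, num_patches + 1, 768)

cls_embedding = hidden[:, 0]                          # [CLS] token representation
head = torch.nn.Linear(cls_embedding.size(-1), 2)     # e.g. mask / no-mask classes
logits = head(cls_embedding)
```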
## Training Metrics
epoch = 0.89
total_flos = 923776502GF
train_loss = 0.057
train_runtime = 0:40:10.40
train_samples_per_second = 23.943
train_steps_per_second = 1.497
---
## Evaluation Metrics
epoch = 0.89
eval_accuracy = 0.9894
eval_loss = 0.0395
eval_runtime = 0:00:36.81
eval_samples_per_second = 97.685
eval_steps_per_second = 12.224 | {"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]} | AkshatSurolia/ViT-FaceMask-Finetuned | null | [
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"dataset:Face-Mask18K",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #vit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Vision Transformer (ViT) for Face Mask Detection
Vision Transformer (ViT) model pre-trained and fine-tuned on the Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not provide any fine-tuned heads, as these were zeroed by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Training Metrics
epoch = 0.89
total_flos = 923776502GF
train_loss = 0.057
train_runtime = 0:40:10.40
train_samples_per_second = 23.943
train_steps_per_second = 1.497
---
## Evaluation Metrics
epoch = 0.89
eval_accuracy = 0.9894
eval_loss = 0.0395
eval_runtime = 0:00:36.81
eval_samples_per_second = 97.685
eval_steps_per_second = 12.224 | [
"# Vision Transformer (ViT) for Face Mask Detection\r\n\r\nVision Transformer (ViT) model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al. \r\nVision Transformer (ViT) model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al.",
"## Model description\r\n\r\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.\r\n\r\nImages are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.\r\n\r\nNote that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).\r\n\r\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.",
"## Training Metrics\r\n epoch = 0.89\r\n total_flos = 923776502GF\r\n train_loss = 0.057\r\n train_runtime = 0:40:10.40\r\n train_samples_per_second = 23.943\r\n train_steps_per_second = 1.497\r\n---",
"## Evaluation Metrics\r\n epoch = 0.89\r\n eval_accuracy = 0.9894\r\n eval_loss = 0.0395\r\n eval_runtime = 0:00:36.81\r\n eval_samples_per_second = 97.685\r\n eval_steps_per_second = 12.224"
] | [
"TAGS\n#transformers #pytorch #safetensors #vit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Vision Transformer (ViT) for Face Mask Detection\r\n\r\nVision Transformer (ViT) model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al. \r\nVision Transformer (ViT) model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al.",
"## Model description\r\n\r\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.\r\n\r\nImages are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.\r\n\r\nNote that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).\r\n\r\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.",
"## Training Metrics\r\n epoch = 0.89\r\n total_flos = 923776502GF\r\n train_loss = 0.057\r\n train_runtime = 0:40:10.40\r\n train_samples_per_second = 23.943\r\n train_steps_per_second = 1.497\r\n---",
"## Evaluation Metrics\r\n epoch = 0.89\r\n eval_accuracy = 0.9894\r\n eval_loss = 0.0395\r\n eval_runtime = 0:00:36.81\r\n eval_samples_per_second = 97.685\r\n eval_steps_per_second = 12.224"
] | [
50,
154,
270,
69,
68
] | [
"TAGS\n#transformers #pytorch #safetensors #vit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# Vision Transformer (ViT) for Face Mask Detection\r\n\r\nVision Transformer (ViT) model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al. \r\nVision Transformer (ViT) model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al.## Model description\r\n\r\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.\r\n\r\nImages are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.\r\n\r\nNote that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).\r\n\r\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.## Training Metrics\r\n epoch = 0.89\r\n total_flos = 923776502GF\r\n train_loss = 0.057\r\n train_runtime = 0:40:10.40\r\n train_samples_per_second = 23.943\r\n train_steps_per_second = 1.497\r\n---## Evaluation Metrics\r\n epoch = 0.89\r\n eval_accuracy = 0.9894\r\n eval_loss = 0.0395\r\n eval_runtime = 0:00:36.81\r\n eval_samples_per_second = 97.685\r\n eval_steps_per_second = 12.224"
] |
null | null |
# Spoken Language Identification Model
## Model description
The model can classify a speech utterance according to the language spoken.
It covers the following languages (
English,
Indonesian,
Japanese,
Korean,
Thai,
Vietnamese,
Mandarin Chinese).
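No usage snippet or library is stated for this checkpoint. Purely as an illustration, the sketch below assumes it follows the SpeechBrain EncoderClassifier convention used by the VoxLingua107 model further down in this collection; the repo layout and the audio file are assumptions.
```python
from speechbrain.pretrained import EncoderClassifier

# Assumption: the repo ships SpeechBrain hyperparams and checkpoint files.
language_id = EncoderClassifier.from_hparams(
    source="AkshaySg/LanguageIdentification",
    savedir="tmp",
)
signal = language_id.load_audio("sample.wav")    # placeholder audio file
prediction = language_id.classify_batch(signal)
print(prediction[3])                             # predicted language label(s)
```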
| {"language": "multilingual", "license": "apache-2.0", "tags": ["LID", "spoken language recognition"], "datasets": ["VoxLingua107"], "metrics": ["ER"], "inference": false} | AkshaySg/LanguageIdentification | null | [
"LID",
"spoken language recognition",
"multilingual",
"dataset:VoxLingua107",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#LID #spoken language recognition #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us
|
# Spoken Language Identification Model
## Model description
The model can classify a speech utterance according to the language spoken.
It covers the following languages (
English,
Indonesian,
Japanese,
Korean,
Thai,
Vietnamese,
Mandarin Chinese).
| [
"# Spoken Language Identification Model",
"## Model description\r\n\r\nThe model can classify a speech utterance according to the language spoken.\r\nIt covers following different languages (\r\nEnglish, \r\nIndonesian, \r\nJapanese, \r\nKorean, \r\nThai, \r\nVietnamese, \r\nMandarin Chinese)."
] | [
"TAGS\n#LID #spoken language recognition #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us \n",
"# Spoken Language Identification Model",
"## Model description\r\n\r\nThe model can classify a speech utterance according to the language spoken.\r\nIt covers following different languages (\r\nEnglish, \r\nIndonesian, \r\nJapanese, \r\nKorean, \r\nThai, \r\nVietnamese, \r\nMandarin Chinese)."
] | [
32,
5,
40
] | [
"TAGS\n#LID #spoken language recognition #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us \n# Spoken Language Identification Model## Model description\r\n\r\nThe model can classify a speech utterance according to the language spoken.\r\nIt covers following different languages (\r\nEnglish, \r\nIndonesian, \r\nJapanese, \r\nKorean, \r\nThai, \r\nVietnamese, \r\nMandarin Chinese)."
] |
audio-classification | speechbrain |
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn", savedir="tmp")
# Download a Thai language sample from Omniglot and convert it to a suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
(tensor([[0.3210, 0.3751, 0.3680, 0.3939, 0.4026, 0.3644, 0.3689, 0.3597, 0.3508,
0.3666, 0.3895, 0.3978, 0.3848, 0.3957, 0.3949, 0.3586, 0.4360, 0.3997,
0.4106, 0.3886, 0.4177, 0.3870, 0.3764, 0.3763, 0.3672, 0.4000, 0.4256,
0.4091, 0.3563, 0.3695, 0.3320, 0.3838, 0.3850, 0.3867, 0.3878, 0.3944,
0.3924, 0.4063, 0.3803, 0.3830, 0.2996, 0.4187, 0.3976, 0.3651, 0.3950,
0.3744, 0.4295, 0.3807, 0.3613, 0.4710, 0.3530, 0.4156, 0.3651, 0.3777,
0.3813, 0.6063, 0.3708, 0.3886, 0.3766, 0.4023, 0.3785, 0.3612, 0.4193,
0.3720, 0.4406, 0.3243, 0.3866, 0.3866, 0.4104, 0.4294, 0.4175, 0.3364,
0.3595, 0.3443, 0.3565, 0.3776, 0.3985, 0.3778, 0.2382, 0.4115, 0.4017,
0.4070, 0.3266, 0.3648, 0.3888, 0.3907, 0.3755, 0.3631, 0.4460, 0.3464,
0.3898, 0.3661, 0.3883, 0.3772, 0.9289, 0.3687, 0.4298, 0.4211, 0.3838,
0.3521, 0.3515, 0.3465, 0.4772, 0.4043, 0.3844, 0.3973, 0.4343]]), tensor([0.9289]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as cosine scores between
# the languages and the given utterance (i.e., the larger the better)
# The identified language ISO code is given in prediction[3]
print(prediction[3])
['th']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
torch.Size([1, 1, 256])
```
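As a sketch of the second use listed above (utterance-level embedding extractor), the snippet below reuses the `language_id` model from the previous block and fits a small scikit-learn classifier on its 256-dimensional embeddings. File names and labels are placeholders for your own data.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(path):
    sig = language_id.load_audio(path)
    # encode_batch returns shape (1, 1, 256) for a single utterance
    return language_id.encode_batch(sig).squeeze().detach().cpu().numpy()

train_files = ["utt1.wav", "utt2.wav", "utt3.wav"]   # your audio files
train_labels = ["et", "et", "fi"]                    # your language labels

X = np.stack([embed(f) for f in train_files])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
print(clf.predict(embed("new_utt.wav")[None, :]))
```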
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
 - Probably its accuracy on smaller languages is quite limited
- Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- Probably it doesn't work well on children's speech and on persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used [SpeechBrain](https://github.com/speechbrain/speechbrain) to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 7% on the development dataset
### BibTeX entry and citation info
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
| {"language": "multilingual", "license": "apache-2.0", "tags": ["audio-classification", "speechbrain", "embeddings", "Language", "Identification", "pytorch", "ECAPA-TDNN", "TDNN", "VoxLingua107"], "datasets": ["VoxLingua107"], "metrics": ["Accuracy"], "widget": [{"example_title": "English Sample", "src": "https://cdn-media.huggingface.co/speech_samples/LibriSpeech_61-70968-0000.flac"}]} | AkshaySg/langid | null | [
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"multilingual",
"dataset:VoxLingua107",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#speechbrain #audio-classification #embeddings #Language #Identification #pytorch #ECAPA-TDNN #TDNN #VoxLingua107 #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us
|
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see here.
#### How to use
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
 - Probably its accuracy on smaller languages is quite limited
- Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- Probably it doesn't work well on children's speech and on persons with speech disorders
## Training data
The model is trained on VoxLingua107.
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used SpeechBrain to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 7% on the development dataset
### BibTeX entry and citation info
| [
"# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model",
"## Model description\n\nThis is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.\nThe model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.\n\nThe model can classify a speech utterance according to the language spoken.\nIt covers 107 different languages (\nAbkhazian, \nAfrikaans, \nAmharic, \nArabic, \nAssamese, \nAzerbaijani, \nBashkir, \nBelarusian, \nBulgarian, \nBengali, \nTibetan, \nBreton, \nBosnian, \nCatalan, \nCebuano, \nCzech, \nWelsh, \nDanish, \nGerman, \nGreek, \nEnglish, \nEsperanto, \nSpanish, \nEstonian, \nBasque, \nPersian, \nFinnish, \nFaroese, \nFrench, \nGalician, \nGuarani, \nGujarati, \nManx, \nHausa, \nHawaiian, \nHindi, \nCroatian, \nHaitian, \nHungarian, \nArmenian, \nInterlingua, \nIndonesian, \nIcelandic, \nItalian, \nHebrew, \nJapanese, \nJavanese, \nGeorgian, \nKazakh, \nCentral Khmer, \nKannada, \nKorean, \nLatin, \nLuxembourgish, \nLingala, \nLao, \nLithuanian, \nLatvian, \nMalagasy, \nMaori, \nMacedonian, \nMalayalam, \nMongolian, \nMarathi, \nMalay, \nMaltese, \nBurmese, \nNepali, \nDutch, \nNorwegian Nynorsk, \nNorwegian, \nOccitan, \nPanjabi, \nPolish, \nPushto, \nPortuguese, \nRomanian, \nRussian, \nSanskrit, \nScots, \nSindhi, \nSinhala, \nSlovak, \nSlovenian, \nShona, \nSomali, \nAlbanian, \nSerbian, \nSundanese, \nSwedish, \nSwahili, \nTamil, \nTelugu, \nTajik, \nThai, \nTurkmen, \nTagalog, \nTurkish, \nTatar, \nUkrainian, \nUrdu, \nUzbek, \nVietnamese, \nWaray, \nYiddish, \nYoruba, \nMandarin Chinese).",
"## Intended uses & limitations\n\nThe model has two uses:\n\n - use 'as is' for spoken language recognition\n - use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data\n \nThe model is trained on automatically collected YouTube data. For more \ninformation about the dataset, see here.",
"#### How to use",
"#### Limitations and bias\n\nSince the model is trained on VoxLingua107, it has many limitations and biases, some of which are:\n\n - Probably it's accuracy on smaller languages is quite limited\n - Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)\n - Based on subjective experiments, it doesn't work well on speech with a foreign accent\n - Probably it doesn't work well on children's speech and on persons with speech disorders",
"## Training data\n\nThe model is trained on VoxLingua107.\n\nVoxLingua107 is a speech dataset for training spoken language identification models. \nThe dataset consists of short speech segments automatically extracted from YouTube videos and labeled according the language of the video title and description, with some post-processing steps to filter out false positives.\n\nVoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours. \nThe average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a seperate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.",
"## Training procedure\n\nWe used SpeechBrain to train the model.\nTraining recipe will be published soon.",
"## Evaluation results\n\nError rate: 7% on the development dataset",
"### BibTeX entry and citation info"
] | [
"TAGS\n#speechbrain #audio-classification #embeddings #Language #Identification #pytorch #ECAPA-TDNN #TDNN #VoxLingua107 #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us \n",
"# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model",
"## Model description\n\nThis is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.\nThe model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.\n\nThe model can classify a speech utterance according to the language spoken.\nIt covers 107 different languages (\nAbkhazian, \nAfrikaans, \nAmharic, \nArabic, \nAssamese, \nAzerbaijani, \nBashkir, \nBelarusian, \nBulgarian, \nBengali, \nTibetan, \nBreton, \nBosnian, \nCatalan, \nCebuano, \nCzech, \nWelsh, \nDanish, \nGerman, \nGreek, \nEnglish, \nEsperanto, \nSpanish, \nEstonian, \nBasque, \nPersian, \nFinnish, \nFaroese, \nFrench, \nGalician, \nGuarani, \nGujarati, \nManx, \nHausa, \nHawaiian, \nHindi, \nCroatian, \nHaitian, \nHungarian, \nArmenian, \nInterlingua, \nIndonesian, \nIcelandic, \nItalian, \nHebrew, \nJapanese, \nJavanese, \nGeorgian, \nKazakh, \nCentral Khmer, \nKannada, \nKorean, \nLatin, \nLuxembourgish, \nLingala, \nLao, \nLithuanian, \nLatvian, \nMalagasy, \nMaori, \nMacedonian, \nMalayalam, \nMongolian, \nMarathi, \nMalay, \nMaltese, \nBurmese, \nNepali, \nDutch, \nNorwegian Nynorsk, \nNorwegian, \nOccitan, \nPanjabi, \nPolish, \nPushto, \nPortuguese, \nRomanian, \nRussian, \nSanskrit, \nScots, \nSindhi, \nSinhala, \nSlovak, \nSlovenian, \nShona, \nSomali, \nAlbanian, \nSerbian, \nSundanese, \nSwedish, \nSwahili, \nTamil, \nTelugu, \nTajik, \nThai, \nTurkmen, \nTagalog, \nTurkish, \nTatar, \nUkrainian, \nUrdu, \nUzbek, \nVietnamese, \nWaray, \nYiddish, \nYoruba, \nMandarin Chinese).",
"## Intended uses & limitations\n\nThe model has two uses:\n\n - use 'as is' for spoken language recognition\n - use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data\n \nThe model is trained on automatically collected YouTube data. For more \ninformation about the dataset, see here.",
"#### How to use",
"#### Limitations and bias\n\nSince the model is trained on VoxLingua107, it has many limitations and biases, some of which are:\n\n - Probably it's accuracy on smaller languages is quite limited\n - Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)\n - Based on subjective experiments, it doesn't work well on speech with a foreign accent\n - Probably it doesn't work well on children's speech and on persons with speech disorders",
"## Training data\n\nThe model is trained on VoxLingua107.\n\nVoxLingua107 is a speech dataset for training spoken language identification models. \nThe dataset consists of short speech segments automatically extracted from YouTube videos and labeled according the language of the video title and description, with some post-processing steps to filter out false positives.\n\nVoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours. \nThe average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a seperate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.",
"## Training procedure\n\nWe used SpeechBrain to train the model.\nTraining recipe will be published soon.",
"## Evaluation results\n\nError rate: 7% on the development dataset",
"### BibTeX entry and citation info"
] | [
63,
15,
326,
71,
7,
102,
147,
21,
14,
10
] | [
"TAGS\n#speechbrain #audio-classification #embeddings #Language #Identification #pytorch #ECAPA-TDNN #TDNN #VoxLingua107 #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us \n# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model## Model description\n\nThis is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.\nThe model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.\n\nThe model can classify a speech utterance according to the language spoken.\nIt covers 107 different languages (\nAbkhazian, \nAfrikaans, \nAmharic, \nArabic, \nAssamese, \nAzerbaijani, \nBashkir, \nBelarusian, \nBulgarian, \nBengali, \nTibetan, \nBreton, \nBosnian, \nCatalan, \nCebuano, \nCzech, \nWelsh, \nDanish, \nGerman, \nGreek, \nEnglish, \nEsperanto, \nSpanish, \nEstonian, \nBasque, \nPersian, \nFinnish, \nFaroese, \nFrench, \nGalician, \nGuarani, \nGujarati, \nManx, \nHausa, \nHawaiian, \nHindi, \nCroatian, \nHaitian, \nHungarian, \nArmenian, \nInterlingua, \nIndonesian, \nIcelandic, \nItalian, \nHebrew, \nJapanese, \nJavanese, \nGeorgian, \nKazakh, \nCentral Khmer, \nKannada, \nKorean, \nLatin, \nLuxembourgish, \nLingala, \nLao, \nLithuanian, \nLatvian, \nMalagasy, \nMaori, \nMacedonian, \nMalayalam, \nMongolian, \nMarathi, \nMalay, \nMaltese, \nBurmese, \nNepali, \nDutch, \nNorwegian Nynorsk, \nNorwegian, \nOccitan, \nPanjabi, \nPolish, \nPushto, \nPortuguese, \nRomanian, \nRussian, \nSanskrit, \nScots, \nSindhi, \nSinhala, \nSlovak, \nSlovenian, \nShona, \nSomali, \nAlbanian, \nSerbian, \nSundanese, \nSwedish, \nSwahili, \nTamil, \nTelugu, \nTajik, \nThai, \nTurkmen, \nTagalog, \nTurkish, \nTatar, \nUkrainian, \nUrdu, \nUzbek, \nVietnamese, \nWaray, \nYiddish, \nYoruba, \nMandarin Chinese).## Intended uses & limitations\n\nThe model has two uses:\n\n - use 'as is' for spoken language recognition\n - use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data\n \nThe model is trained on automatically collected YouTube data. For more \ninformation about the dataset, see here.#### How to use#### Limitations and bias\n\nSince the model is trained on VoxLingua107, it has many limitations and biases, some of which are:\n\n - Probably it's accuracy on smaller languages is quite limited\n - Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)\n - Based on subjective experiments, it doesn't work well on speech with a foreign accent\n - Probably it doesn't work well on children's speech and on persons with speech disorders## Training data\n\nThe model is trained on VoxLingua107.\n\nVoxLingua107 is a speech dataset for training spoken language identification models. \nThe dataset consists of short speech segments automatically extracted from YouTube videos and labeled according the language of the video title and description, with some post-processing steps to filter out false positives.\n\nVoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours. \nThe average amount of data per language is 62 hours. However, the real amount per language varies a lot. 
There is also a seperate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.## Training procedure\n\nWe used SpeechBrain to train the model.\nTraining recipe will be published soon.## Evaluation results\n\nError rate: 7% on the development dataset### BibTeX entry and citation info"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-srb-base-cased-oscar
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
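For reference, here is a rough reconstruction of how the hyperparameters above map onto transformers' `TrainingArguments`. This is an illustration rather than the original training script; the output directory is a placeholder.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-srb-base-cased-oscar",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```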
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
| {"tags": ["generated_from_trainer"], "model_index": [{"name": "bert-srb-base-cased-oscar", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]} | Aleksandar/bert-srb-base-cased-oscar | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# bert-srb-base-cased-oscar
This model is a fine-tuned version of [](URL on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
| [
"# bert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0\n- Datasets 1.11.0\n- Tokenizers 0.10.1"
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0\n- Datasets 1.11.0\n- Tokenizers 0.10.1"
] | [
34,
32,
7,
9,
9,
4,
93,
5,
40
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n# bert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5### Training results### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0\n- Datasets 1.11.0\n- Tokenizers 0.10.1"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-srb-ner-setimes
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1955
- Precision: 0.8229
- Recall: 0.8465
- F1: 0.8345
- Accuracy: 0.9645
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 104 | 0.2281 | 0.6589 | 0.7001 | 0.6789 | 0.9350 |
| No log | 2.0 | 208 | 0.1833 | 0.7105 | 0.7694 | 0.7388 | 0.9470 |
| No log | 3.0 | 312 | 0.1573 | 0.7461 | 0.7778 | 0.7616 | 0.9525 |
| No log | 4.0 | 416 | 0.1489 | 0.7665 | 0.8091 | 0.7872 | 0.9557 |
| 0.1898 | 5.0 | 520 | 0.1445 | 0.7881 | 0.8327 | 0.8098 | 0.9587 |
| 0.1898 | 6.0 | 624 | 0.1473 | 0.7913 | 0.8316 | 0.8109 | 0.9601 |
| 0.1898 | 7.0 | 728 | 0.1558 | 0.8101 | 0.8347 | 0.8222 | 0.9620 |
| 0.1898 | 8.0 | 832 | 0.1616 | 0.8026 | 0.8302 | 0.8162 | 0.9612 |
| 0.1898 | 9.0 | 936 | 0.1716 | 0.8127 | 0.8409 | 0.8266 | 0.9631 |
| 0.0393 | 10.0 | 1040 | 0.1751 | 0.8140 | 0.8369 | 0.8253 | 0.9628 |
| 0.0393 | 11.0 | 1144 | 0.1775 | 0.8096 | 0.8420 | 0.8255 | 0.9626 |
| 0.0393 | 12.0 | 1248 | 0.1763 | 0.8161 | 0.8386 | 0.8272 | 0.9636 |
| 0.0393 | 13.0 | 1352 | 0.1949 | 0.8259 | 0.8400 | 0.8329 | 0.9634 |
| 0.0393 | 14.0 | 1456 | 0.1842 | 0.8205 | 0.8420 | 0.8311 | 0.9642 |
| 0.0111 | 15.0 | 1560 | 0.1862 | 0.8160 | 0.8493 | 0.8323 | 0.9646 |
| 0.0111 | 16.0 | 1664 | 0.1989 | 0.8176 | 0.8367 | 0.8270 | 0.9627 |
| 0.0111 | 17.0 | 1768 | 0.1945 | 0.8246 | 0.8409 | 0.8327 | 0.9638 |
| 0.0111 | 18.0 | 1872 | 0.1997 | 0.8270 | 0.8426 | 0.8347 | 0.9634 |
| 0.0111 | 19.0 | 1976 | 0.1917 | 0.8258 | 0.8491 | 0.8373 | 0.9651 |
| 0.0051 | 20.0 | 2080 | 0.1955 | 0.8229 | 0.8465 | 0.8345 | 0.9645 |
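The Precision, Recall, F1 and Accuracy columns above are the usual seqeval metrics for token classification. A sketch of how such a `compute_metrics` function is typically written follows; the `label_list` tag set is an assumption, since the original evaluation code is not part of this card.
```python
import numpy as np
from datasets import load_metric   # with newer versions, use evaluate.load("seqeval")

metric = load_metric("seqeval")

def compute_metrics(eval_pred, label_list):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=2)
    # drop special tokens, which carry the ignore index -100
    true_preds = [[label_list[p] for p, l in zip(pr, la) if l != -100]
                  for pr, la in zip(preds, labels)]
    true_labels = [[label_list[l] for p, l in zip(pr, la) if l != -100]
                   for pr, la in zip(preds, labels)]
    results = metric.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```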
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
| {"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-srb-ner-setimes", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9645112274185379}}]}]} | Aleksandar/bert-srb-ner-setimes | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| bert-srb-ner-setimes
====================
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1955
* Precision: 0.8229
* Recall: 0.8465
* F1: 0.8345
* Accuracy: 0.9645
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0
* Datasets 1.11.0
* Tokenizers 0.10.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] | [
"TAGS\n#transformers #pytorch #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] | [
34,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20### Training results### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3561
- Precision: 0.8909
- Recall: 0.9082
- F1: 0.8995
- Accuracy: 0.9547
## Model description
More information needed
## Intended uses & limitations
More information needed
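No usage is documented, so here is a minimal inference sketch. It assumes the checkpoint loads as a standard token-classification model under the repo id `Aleksandar/bert-srb-ner`; the example sentence is arbitrary.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Aleksandar/bert-srb-ner",
    aggregation_strategy="simple",   # merge word pieces into entity spans
)
print(ner("Novak Đoković je rođen u Beogradu."))
```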
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3907 | 1.0 | 625 | 0.2316 | 0.8255 | 0.8314 | 0.8285 | 0.9259 |
| 0.2091 | 2.0 | 1250 | 0.1920 | 0.8598 | 0.8731 | 0.8664 | 0.9420 |
| 0.1562 | 3.0 | 1875 | 0.1833 | 0.8608 | 0.8820 | 0.8713 | 0.9441 |
| 0.0919 | 4.0 | 2500 | 0.1985 | 0.8712 | 0.8886 | 0.8798 | 0.9476 |
| 0.0625 | 5.0 | 3125 | 0.2195 | 0.8762 | 0.8923 | 0.8842 | 0.9485 |
| 0.0545 | 6.0 | 3750 | 0.2320 | 0.8706 | 0.9004 | 0.8852 | 0.9495 |
| 0.0403 | 7.0 | 4375 | 0.2459 | 0.8817 | 0.8957 | 0.8887 | 0.9505 |
| 0.0269 | 8.0 | 5000 | 0.2603 | 0.8813 | 0.9021 | 0.8916 | 0.9516 |
| 0.0193 | 9.0 | 5625 | 0.2916 | 0.8812 | 0.8949 | 0.8880 | 0.9500 |
| 0.0162 | 10.0 | 6250 | 0.2938 | 0.8814 | 0.9025 | 0.8918 | 0.9520 |
| 0.0134 | 11.0 | 6875 | 0.3330 | 0.8809 | 0.8961 | 0.8885 | 0.9497 |
| 0.0076 | 12.0 | 7500 | 0.3141 | 0.8840 | 0.9025 | 0.8932 | 0.9524 |
| 0.0069 | 13.0 | 8125 | 0.3292 | 0.8819 | 0.9065 | 0.8940 | 0.9535 |
| 0.0053 | 14.0 | 8750 | 0.3454 | 0.8844 | 0.9018 | 0.8930 | 0.9523 |
| 0.0038 | 15.0 | 9375 | 0.3519 | 0.8912 | 0.9061 | 0.8986 | 0.9539 |
| 0.0034 | 16.0 | 10000 | 0.3437 | 0.8894 | 0.9038 | 0.8965 | 0.9539 |
| 0.0024 | 17.0 | 10625 | 0.3518 | 0.8896 | 0.9072 | 0.8983 | 0.9543 |
| 0.0018 | 18.0 | 11250 | 0.3572 | 0.8877 | 0.9072 | 0.8973 | 0.9543 |
| 0.0015 | 19.0 | 11875 | 0.3554 | 0.8910 | 0.9081 | 0.8994 | 0.9549 |
| 0.0011 | 20.0 | 12500 | 0.3561 | 0.8909 | 0.9082 | 0.8995 | 0.9547 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
| {"tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-srb-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "sr"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9546696220907545}}]}]} | Aleksandar/bert-srb-ner | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #bert #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us
| bert-srb-ner
============
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3561
* Precision: 0.8909
* Recall: 0.9082
* F1: 0.8995
* Accuracy: 0.9547
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0
* Datasets 1.11.0
* Tokenizers 0.10.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] | [
45,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20### Training results### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-srb-base-cased-oscar
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
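Since the card gives no usage example, here is a hedged fill-mask sketch. The repository id comes from this card's metadata, the Serbian prompt is invented, and the mask token is read from the tokenizer rather than hard-coded.
```
from transformers import pipeline

# Masked-language-modelling sketch for the checkpoint described in this card.
fill = pipeline("fill-mask", model="Aleksandar/distilbert-srb-base-cased-oscar")

# Use the tokenizer's own mask token instead of assuming its exact string.
mask = fill.tokenizer.mask_token
for prediction in fill(f"Beograd je glavni grad {mask}."):
    print(prediction["token_str"], round(prediction["score"], 3))
```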
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
| {"tags": ["generated_from_trainer"], "model_index": [{"name": "distilbert-srb-base-cased-oscar", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]} | Aleksandar/distilbert-srb-base-cased-oscar | null | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# distilbert-srb-base-cased-oscar
This model is a fine-tuned version of [](URL on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
| [
"# distilbert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0\n- Datasets 1.11.0\n- Tokenizers 0.10.1"
] | [
"TAGS\n#transformers #pytorch #distilbert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilbert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0\n- Datasets 1.11.0\n- Tokenizers 0.10.1"
] | [
36,
34,
7,
9,
9,
4,
93,
5,
40
] | [
"TAGS\n#transformers #pytorch #distilbert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n# distilbert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5### Training results### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0\n- Datasets 1.11.0\n- Tokenizers 0.10.1"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-srb-ner-setimes
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1838
- Precision: 0.8370
- Recall: 0.8617
- F1: 0.8492
- Accuracy: 0.9665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
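For readers who want to reproduce this setup, the hyperparameters above map onto the `Trainer` API roughly as sketched below; the output directory is a placeholder, and the actual training script behind this card is not part of the source.
```
from transformers import TrainingArguments

# Hedged mapping of the hyperparameters listed above onto TrainingArguments.
# "output_dir" is a placeholder; the Adam betas/epsilon shown are the defaults.
training_args = TrainingArguments(
    output_dir="distilbert-srb-ner-setimes",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```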
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 104 | 0.2319 | 0.6668 | 0.7029 | 0.6844 | 0.9358 |
| No log | 2.0 | 208 | 0.1850 | 0.7265 | 0.7508 | 0.7385 | 0.9469 |
| No log | 3.0 | 312 | 0.1584 | 0.7555 | 0.7937 | 0.7741 | 0.9538 |
| No log | 4.0 | 416 | 0.1484 | 0.7644 | 0.8128 | 0.7879 | 0.9571 |
| 0.1939 | 5.0 | 520 | 0.1383 | 0.7850 | 0.8131 | 0.7988 | 0.9604 |
| 0.1939 | 6.0 | 624 | 0.1409 | 0.7914 | 0.8359 | 0.8130 | 0.9632 |
| 0.1939 | 7.0 | 728 | 0.1526 | 0.8176 | 0.8392 | 0.8283 | 0.9637 |
| 0.1939 | 8.0 | 832 | 0.1536 | 0.8195 | 0.8409 | 0.8301 | 0.9641 |
| 0.1939 | 9.0 | 936 | 0.1538 | 0.8242 | 0.8523 | 0.8380 | 0.9661 |
| 0.0364 | 10.0 | 1040 | 0.1612 | 0.8228 | 0.8413 | 0.8319 | 0.9652 |
| 0.0364 | 11.0 | 1144 | 0.1721 | 0.8289 | 0.8503 | 0.8395 | 0.9656 |
| 0.0364 | 12.0 | 1248 | 0.1645 | 0.8301 | 0.8590 | 0.8443 | 0.9663 |
| 0.0364 | 13.0 | 1352 | 0.1747 | 0.8352 | 0.8540 | 0.8445 | 0.9665 |
| 0.0364 | 14.0 | 1456 | 0.1703 | 0.8277 | 0.8573 | 0.8422 | 0.9663 |
| 0.011 | 15.0 | 1560 | 0.1770 | 0.8314 | 0.8624 | 0.8466 | 0.9665 |
| 0.011 | 16.0 | 1664 | 0.1903 | 0.8399 | 0.8537 | 0.8467 | 0.9661 |
| 0.011 | 17.0 | 1768 | 0.1837 | 0.8363 | 0.8590 | 0.8475 | 0.9665 |
| 0.011 | 18.0 | 1872 | 0.1820 | 0.8338 | 0.8570 | 0.8453 | 0.9667 |
| 0.011 | 19.0 | 1976 | 0.1855 | 0.8382 | 0.8620 | 0.8499 | 0.9666 |
| 0.0053 | 20.0 | 2080 | 0.1838 | 0.8370 | 0.8617 | 0.8492 | 0.9665 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
| {"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-srb-ner-setimes", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9665376552169005}}]}]} | Aleksandar/distilbert-srb-ner-setimes | null | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #distilbert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| distilbert-srb-ner-setimes
==========================
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1838
* Precision: 0.8370
* Recall: 0.8617
* F1: 0.8492
* Accuracy: 0.9665
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0
* Datasets 1.11.0
* Tokenizers 0.10.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] | [
"TAGS\n#transformers #pytorch #safetensors #distilbert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] | [
40,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #safetensors #distilbert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20### Training results### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2972
- Precision: 0.8871
- Recall: 0.9100
- F1: 0.8984
- Accuracy: 0.9577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
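The precision, recall, and F1 reported on this card are entity-level scores of the kind the token-classification example scripts usually compute with `seqeval`, although the card does not say so explicitly. The sketch below shows that computation on toy IOB2 tag sequences, which are illustrative and unrelated to the wikiann evaluation split.
```
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy gold/predicted tag sequences in IOB2 format (illustrative only).
y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "I-ORG", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "O", "O"]]

print("precision", precision_score(y_true, y_pred))
print("recall   ", recall_score(y_true, y_pred))
print("f1       ", f1_score(y_true, y_pred))
print("accuracy ", accuracy_score(y_true, y_pred))
```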
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3818 | 1.0 | 625 | 0.2175 | 0.8175 | 0.8370 | 0.8272 | 0.9306 |
| 0.198 | 2.0 | 1250 | 0.1766 | 0.8551 | 0.8732 | 0.8640 | 0.9458 |
| 0.1423 | 3.0 | 1875 | 0.1702 | 0.8597 | 0.8763 | 0.8679 | 0.9473 |
| 0.079 | 4.0 | 2500 | 0.1774 | 0.8674 | 0.8875 | 0.8773 | 0.9515 |
| 0.0531 | 5.0 | 3125 | 0.2011 | 0.8688 | 0.8965 | 0.8825 | 0.9522 |
| 0.0429 | 6.0 | 3750 | 0.2082 | 0.8769 | 0.8970 | 0.8868 | 0.9538 |
| 0.032 | 7.0 | 4375 | 0.2268 | 0.8764 | 0.8916 | 0.8839 | 0.9528 |
| 0.0204 | 8.0 | 5000 | 0.2423 | 0.8726 | 0.8959 | 0.8841 | 0.9529 |
| 0.0148 | 9.0 | 5625 | 0.2522 | 0.8774 | 0.8991 | 0.8881 | 0.9538 |
| 0.0125 | 10.0 | 6250 | 0.2544 | 0.8823 | 0.9024 | 0.8922 | 0.9559 |
| 0.0108 | 11.0 | 6875 | 0.2592 | 0.8780 | 0.9041 | 0.8909 | 0.9553 |
| 0.007 | 12.0 | 7500 | 0.2672 | 0.8877 | 0.9056 | 0.8965 | 0.9571 |
| 0.0048 | 13.0 | 8125 | 0.2714 | 0.8879 | 0.9089 | 0.8982 | 0.9583 |
| 0.0049 | 14.0 | 8750 | 0.2872 | 0.8873 | 0.9068 | 0.8970 | 0.9573 |
| 0.0034 | 15.0 | 9375 | 0.2915 | 0.8883 | 0.9114 | 0.8997 | 0.9577 |
| 0.0027 | 16.0 | 10000 | 0.2890 | 0.8865 | 0.9103 | 0.8983 | 0.9581 |
| 0.0028 | 17.0 | 10625 | 0.2885 | 0.8877 | 0.9085 | 0.8980 | 0.9576 |
| 0.0014 | 18.0 | 11250 | 0.2928 | 0.8860 | 0.9073 | 0.8965 | 0.9577 |
| 0.0013 | 19.0 | 11875 | 0.2963 | 0.8856 | 0.9099 | 0.8976 | 0.9576 |
| 0.001 | 20.0 | 12500 | 0.2972 | 0.8871 | 0.9100 | 0.8984 | 0.9577 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
| {"language": ["sr"], "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-srb-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "sr"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9576561462374611}}]}]} | Aleksandar/distilbert-srb-ner | null | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"sr",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sr"
] | TAGS
#transformers #pytorch #distilbert #token-classification #generated_from_trainer #sr #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us
| distilbert-srb-ner
==================
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2972
* Precision: 0.8871
* Recall: 0.9100
* F1: 0.8984
* Accuracy: 0.9577
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0
* Datasets 1.11.0
* Tokenizers 0.10.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] | [
"TAGS\n#transformers #pytorch #distilbert #token-classification #generated_from_trainer #sr #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] | [
45,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #distilbert #token-classification #generated_from_trainer #sr #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20### Training results### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-srb-ner-setimes
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2804
- Precision: 0.8286
- Recall: 0.8081
- F1: 0.8182
- Accuracy: 0.9547
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged optimizer/scheduler sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
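Outside the `Trainer`, the optimizer and the `linear` schedule named above could be reproduced roughly as below. The base checkpoint, the number of labels, and the absence of warmup are assumptions; the 2080 total steps come from the training-results table that follows (104 steps per epoch for 20 epochs).
```
import torch
from transformers import AutoModelForTokenClassification, get_linear_schedule_with_warmup

# Placeholder model purely to have parameters to optimize; the real base
# checkpoint and label set behind this card are not specified in the source.
model = AutoModelForTokenClassification.from_pretrained(
    "google/electra-base-discriminator", num_labels=7  # num_labels is an assumption
)

num_training_steps = 2080  # 20 epochs x 104 steps per epoch
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5,
                              betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=0,
                                            num_training_steps=num_training_steps)
```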
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 104 | 0.2981 | 0.6737 | 0.6113 | 0.6410 | 0.9174 |
| No log | 2.0 | 208 | 0.2355 | 0.7279 | 0.6701 | 0.6978 | 0.9307 |
| No log | 3.0 | 312 | 0.2079 | 0.7707 | 0.7062 | 0.7371 | 0.9402 |
| No log | 4.0 | 416 | 0.2078 | 0.7689 | 0.7479 | 0.7582 | 0.9449 |
| 0.2391 | 5.0 | 520 | 0.2089 | 0.8083 | 0.7476 | 0.7767 | 0.9484 |
| 0.2391 | 6.0 | 624 | 0.2199 | 0.7981 | 0.7726 | 0.7851 | 0.9487 |
| 0.2391 | 7.0 | 728 | 0.2528 | 0.8205 | 0.7749 | 0.7971 | 0.9511 |
| 0.2391 | 8.0 | 832 | 0.2265 | 0.8074 | 0.8003 | 0.8038 | 0.9524 |
| 0.2391 | 9.0 | 936 | 0.2843 | 0.8265 | 0.7716 | 0.7981 | 0.9504 |
| 0.0378 | 10.0 | 1040 | 0.2450 | 0.8024 | 0.8019 | 0.8021 | 0.9520 |
| 0.0378 | 11.0 | 1144 | 0.2550 | 0.8116 | 0.7986 | 0.8051 | 0.9519 |
| 0.0378 | 12.0 | 1248 | 0.2706 | 0.8208 | 0.7957 | 0.8081 | 0.9532 |
| 0.0378 | 13.0 | 1352 | 0.2664 | 0.8040 | 0.8035 | 0.8038 | 0.9530 |
| 0.0378 | 14.0 | 1456 | 0.2571 | 0.8011 | 0.8110 | 0.8060 | 0.9529 |
| 0.0099 | 15.0 | 1560 | 0.2673 | 0.8051 | 0.8129 | 0.8090 | 0.9534 |
| 0.0099 | 16.0 | 1664 | 0.2733 | 0.8074 | 0.8087 | 0.8081 | 0.9529 |
| 0.0099 | 17.0 | 1768 | 0.2835 | 0.8254 | 0.8074 | 0.8163 | 0.9543 |
| 0.0099 | 18.0 | 1872 | 0.2771 | 0.8222 | 0.8081 | 0.8151 | 0.9545 |
| 0.0099 | 19.0 | 1976 | 0.2776 | 0.8237 | 0.8084 | 0.8160 | 0.9546 |
| 0.0044 | 20.0 | 2080 | 0.2804 | 0.8286 | 0.8081 | 0.8182 | 0.9547 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
| {"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "electra-srb-ner-setimes", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9546789604788638}}]}]} | Aleksandar/electra-srb-ner-setimes | null | [
"transformers",
"pytorch",
"safetensors",
"electra",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| electra-srb-ner-setimes
=======================
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2804
* Precision: 0.8286
* Recall: 0.8081
* F1: 0.8182
* Accuracy: 0.9547
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0
* Datasets 1.11.0
* Tokenizers 0.10.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] | [
"TAGS\n#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] | [
39,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20### Training results### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3406
- Precision: 0.8934
- Recall: 0.9087
- F1: 0.9010
- Accuracy: 0.9568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3686 | 1.0 | 625 | 0.2108 | 0.8326 | 0.8494 | 0.8409 | 0.9335 |
| 0.1886 | 2.0 | 1250 | 0.1784 | 0.8737 | 0.8713 | 0.8725 | 0.9456 |
| 0.1323 | 3.0 | 1875 | 0.1805 | 0.8654 | 0.8870 | 0.8760 | 0.9468 |
| 0.0675 | 4.0 | 2500 | 0.2018 | 0.8736 | 0.8880 | 0.8807 | 0.9502 |
| 0.0425 | 5.0 | 3125 | 0.2162 | 0.8818 | 0.8945 | 0.8881 | 0.9512 |
| 0.0343 | 6.0 | 3750 | 0.2492 | 0.8790 | 0.8928 | 0.8859 | 0.9513 |
| 0.0253 | 7.0 | 4375 | 0.2562 | 0.8821 | 0.9006 | 0.8912 | 0.9525 |
| 0.0142 | 8.0 | 5000 | 0.2788 | 0.8807 | 0.9013 | 0.8909 | 0.9524 |
| 0.0114 | 9.0 | 5625 | 0.2793 | 0.8861 | 0.9002 | 0.8931 | 0.9534 |
| 0.0095 | 10.0 | 6250 | 0.2967 | 0.8887 | 0.9034 | 0.8960 | 0.9550 |
| 0.008 | 11.0 | 6875 | 0.2993 | 0.8899 | 0.9067 | 0.8982 | 0.9556 |
| 0.0048 | 12.0 | 7500 | 0.3215 | 0.8887 | 0.9038 | 0.8962 | 0.9545 |
| 0.0034 | 13.0 | 8125 | 0.3242 | 0.8897 | 0.9068 | 0.8982 | 0.9554 |
| 0.003 | 14.0 | 8750 | 0.3311 | 0.8884 | 0.9085 | 0.8983 | 0.9559 |
| 0.0025 | 15.0 | 9375 | 0.3383 | 0.8943 | 0.9062 | 0.9002 | 0.9562 |
| 0.0011 | 16.0 | 10000 | 0.3346 | 0.8941 | 0.9112 | 0.9026 | 0.9574 |
| 0.0015 | 17.0 | 10625 | 0.3362 | 0.8944 | 0.9081 | 0.9012 | 0.9567 |
| 0.001 | 18.0 | 11250 | 0.3464 | 0.8877 | 0.9100 | 0.8987 | 0.9559 |
| 0.0012 | 19.0 | 11875 | 0.3415 | 0.8944 | 0.9089 | 0.9016 | 0.9568 |
| 0.0005 | 20.0 | 12500 | 0.3406 | 0.8934 | 0.9087 | 0.9010 | 0.9568 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
| {"tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "electra-srb-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "sr"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9568394937134688}}]}]} | Aleksandar/electra-srb-ner | null | [
"transformers",
"pytorch",
"safetensors",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us
| electra-srb-ner
===============
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3406
* Precision: 0.8934
* Recall: 0.9087
* F1: 0.9010
* Accuracy: 0.9568
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0
* Datasets 1.11.0
* Tokenizers 0.10.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] | [
"TAGS\n#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] | [
46,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20### Training results### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.1"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-srb-oscar
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
| {"tags": ["generated_from_trainer"], "model_index": [{"name": "electra-srb-oscar", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]} | Aleksandar/electra-srb-oscar | null | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #electra #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# electra-srb-oscar
This model is a fine-tuned version of [](URL on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
| [
"# electra-srb-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0\n- Datasets 1.11.0\n- Tokenizers 0.10.1"
] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# electra-srb-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0\n- Datasets 1.11.0\n- Tokenizers 0.10.1"
] | [
35,
28,
7,
9,
9,
4,
93,
5,
40
] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n# electra-srb-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5### Training results### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0\n- Datasets 1.11.0\n- Tokenizers 0.10.1"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# herbert-base-cased-finetuned-squad
This model is a fine-tuned version of [allegro/herbert-base-cased](https://huggingface.co/allegro/herbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2071
## Model description
More information needed
## Intended uses & limitations
More information needed
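In the absence of a stated use case, here is a hedged question-answering sketch. The repository id is taken from this card's metadata, and the Polish question/context pair is invented for illustration.
```
from transformers import pipeline

# Extractive QA with the fine-tuned HerBERT checkpoint.
qa = pipeline("question-answering", model="Aleksandra/herbert-base-cased-finetuned-squad")

result = qa(
    question="Gdzie urodził się Fryderyk Chopin?",
    context="Fryderyk Chopin urodził się w Żelazowej Woli w 1810 roku.",
)
print(result["answer"], result["score"])
```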
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 233 | 1.2474 |
| No log | 2.0 | 466 | 1.1951 |
| 1.3459 | 3.0 | 699 | 1.2071 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "herbert-base-cased-finetuned-squad", "results": []}]} | Aleksandra/herbert-base-cased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-cc-by-4.0 #endpoints_compatible #region-us
| herbert-base-cased-finetuned-squad
==================================
This model is a fine-tuned version of allegro/herbert-base-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2071
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] | [
42,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-cc-by-4.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
# xlm-roberta-en-ru-emoji
- Problem type: Multi-class Classification | {"language": ["en", "ru"], "datasets": ["tweet_eval"], "model_index": [{"name": "xlm-roberta-en-ru-emoji", "results": [{"task": {"name": "Sentiment Analysis", "type": "sentiment-analysis"}, "dataset": {"name": "Tweet Eval", "type": "tweet_eval", "args": "emoji"}}]}], "widget": [{"text": "\u041e\u0442\u043b\u0438\u0447\u043d\u043e!"}, {"text": "Awesome!"}, {"text": "lol"}]} | adorkin/xlm-roberta-en-ru-emoji | null | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"en",
"ru",
"dataset:tweet_eval",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ru"
] | TAGS
#transformers #pytorch #safetensors #xlm-roberta #text-classification #en #ru #dataset-tweet_eval #autotrain_compatible #endpoints_compatible #region-us
|
# xlm-roberta-en-ru-emoji
- Problem type: Multi-class Classification | [
"# xlm-roberta-en-ru-emoji \n- Problem type: Multi-class Classification"
] | [
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #text-classification #en #ru #dataset-tweet_eval #autotrain_compatible #endpoints_compatible #region-us \n",
"# xlm-roberta-en-ru-emoji \n- Problem type: Multi-class Classification"
] | [
49,
21
] | [
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #text-classification #en #ru #dataset-tweet_eval #autotrain_compatible #endpoints_compatible #region-us \n# xlm-roberta-en-ru-emoji \n- Problem type: Multi-class Classification"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5316
- Accuracy: 0.2936
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the effective batch sizes are unpacked in the sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
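The `total_train_batch_size` above is simply the per-device batch size multiplied by the number of devices; the sketch below spells that out. The launch command in the comment is an assumption, since the card does not say how the job was started.
```
# Effective batch sizes reported above: 16 per device across 4 GPUs.
per_device_train_batch_size = 16
num_devices = 4
total_train_batch_size = per_device_train_batch_size * num_devices  # 64
total_eval_batch_size = 16 * num_devices                            # 64

# A typical launch for such a run (hypothetical script name):
#   torchrun --nproc_per_node=4 train.py --per_device_train_batch_size 16 ...
```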
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.5355 | 1.0 | 6195 | 1.5339 | 0.2923 |
| 1.5248 | 2.0 | 12390 | 1.5316 | 0.2936 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "bert", "results": []}]} | AlekseyKorshuk/bert | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| bert
====
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5316
* Accuracy: 0.2936
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.20.1
* Pytorch 1.10.1+cu113
* Datasets 2.3.2
* Tokenizers 0.12.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.20.1\n* Pytorch 1.10.1+cu113\n* Datasets 2.3.2\n* Tokenizers 0.12.1"
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.20.1\n* Pytorch 1.10.1+cu113\n* Datasets 2.3.2\n* Tokenizers 0.12.1"
] | [
44,
146,
5,
44
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.20.1\n* Pytorch 1.10.1+cu113\n* Datasets 2.3.2\n* Tokenizers 0.12.1"
] |
text2text-generation | transformers | **Usage: Hugging Face Transformers for the header-generation task**
```
from transformers import PegasusTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned header-generation checkpoint and the base Pegasus tokenizer.
model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-HeaderGeneration")
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')

input_text = "..."  # your text

# Tokenize with truncation to the model's 1024-token limit.
input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True,
                                     padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']

# Beam search over 32 beams, returning the 10 best candidate headers.
headers = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         num_beams=32,
                         no_repeat_ngram_size=2,
                         early_stopping=True,
                         num_return_sequences=10)
headers = tokenizer.batch_decode(headers, skip_special_tokens=True)
```
**Decoder configuration examples:**
[**Input text you can see here**](https://www.bbc.com/news/science-environment-59775105)
```
headers = model.generate(input_ids=input_ids,
attention_mask=input_mask,
num_beams=32,
no_repeat_ngram_size=2,
early_stopping=True,
num_return_sequences=20)
tokenizer.batch_decode(headers, skip_special_tokens=True)
```
output:
1. *the impact of climate change on tropical cyclones*
2. *the impact of human induced climate change on tropical cyclones*
3. *the impact of climate change on tropical cyclone formation in the midlatitudes*
4. *how climate change will expand the range of tropical cyclones?*
5. *the impact of climate change on tropical cyclones in the midlatitudes*
6. *global warming will expand the range of tropical cyclones*
7. *climate change will expand the range of tropical cyclones*
8. *the impact of climate change on tropical cyclone formation*
9. *the impact of human induced climate change on tropical cyclone formation*
10. *tropical cyclones in the mid-latitudes*
11. *climate change will expand the range of tropical cyclones in the middle latitudes*
12. *global warming will expand the range of tropical cyclones, a new study says*
13. *the impacts of climate change on tropical cyclones*
14. *the impact of global warming on tropical cyclones*
15. *climate change will expand the range of tropical cyclones, a new study says*
16. *global warming will expand the range of tropical cyclones in the middle latitudes*
17. *the effects of climate change on tropical cyclones*
18. *how climate change will expand the range of tropical cyclones*
19. *climate change will expand the range of tropical cyclones over the equator*
20. *the impact of human induced climate change on tropical cyclones.*
You can also experiment with the following sampling parameters of the `generate` method (a sketch follows):
- `top_k`
- `top_p`
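For example, switching from pure beam search to sampling could look like this; the specific `top_k`/`top_p` values are illustrative, not the settings used to produce the headers above, and the snippet reuses `input_ids`/`input_mask` from the first code block.
```
# Sampling-based decoding (illustrative values).
sampled = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         do_sample=True,
                         top_k=50,
                         top_p=0.95,
                         no_repeat_ngram_size=2,
                         num_return_sequences=5)
print(tokenizer.batch_decode(sampled, skip_special_tokens=True))
```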
[**The meaning of these generation parameters is explained here**](https://huggingface.co/blog/how-to-generate) | {} | AlekseyKulnevich/Pegasus-HeaderGeneration | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
| Usage HuggingFace Transformers for header generation task
Decoder configuration examples:
Input text you can see here
output:
1. *the impact of climate change on tropical cyclones*
2. *the impact of human induced climate change on tropical cyclones*
3. *the impact of climate change on tropical cyclone formation in the midlatitudes*
4. *how climate change will expand the range of tropical cyclones?*
5. *the impact of climate change on tropical cyclones in the midlatitudes*
6. *global warming will expand the range of tropical cyclones*
7. *climate change will expand the range of tropical cyclones*
8. *the impact of climate change on tropical cyclone formation*
9. *the impact of human induced climate change on tropical cyclone formation*
10. *tropical cyclones in the mid-latitudes*
11. *climate change will expand the range of tropical cyclones in the middle latitudes*
12. *global warming will expand the range of tropical cyclones, a new study says*
13. *the impacts of climate change on tropical cyclones*
14. *the impact of global warming on tropical cyclones*
15. *climate change will expand the range of tropical cyclones, a new study says*
16. *global warming will expand the range of tropical cyclones in the middle latitudes*
17. *the effects of climate change on tropical cyclones*
18. *how climate change will expand the range of tropical cyclones*
19. *climate change will expand the range of tropical cyclones over the equator*
20. *the impact of human induced climate change on tropical cyclones.*
Also you can play with the following parameters in generate method:
-top_k
-top_p
Meaning of parameters to generate text you can see here | [] | [
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
30
] | [
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |