asier-gutierrez committed
Commit: 927a509
Parent(s): 74b3b18
Update README.md
README.md CHANGED
@@ -21,7 +21,7 @@ widget:
 
 ---
 
-# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset
+# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset
 RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
 
 Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne
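The pre-trained checkpoint linked in the card above can be tried with the Hugging Face `transformers` library. A minimal sketch, assuming `transformers` is installed, the `BSC-TeMU/roberta-base-bne` repo is still reachable on the Hub, and network access is available for the checkpoint download (the example sentence is illustrative, not from the card):

```python
# Sketch: masked-token prediction with the pre-trained RoBERTa-base-bne
# checkpoint referenced in the card (not the CAPITEL POS fine-tune).
# RoBERTa models use "<mask>" as their mask token.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="BSC-TeMU/roberta-base-bne")

# Returns a list of candidate fills, highest score first.
for pred in fill_mask("Madrid es la <mask> de España."):
    print(pred["token_str"], round(pred["score"], 3))
```

The fine-tuned POS model this card describes would instead be loaded as a `token-classification` pipeline with its own checkpoint id.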