Asier Gutiérrez Fandiño
committed on
Commit 97db817
Parent(s): 03e974c
Initial commit
Browse files:
- README.md +165 -0
- args.json +24 -0
- config.json +25 -0
- dict.txt +0 -0
- merges.txt +0 -0
- process.log +8 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +1 -0
- tokenizer_config.json +1 -0
- vocab.json +0 -0
README.md
ADDED
@@ -0,0 +1,165 @@
---
language:
- es
tags:
- biomedical
- clinical
- spanish
license: apache-2.0
metrics:
- ppl
widget:
- text: "El único antecedente personal a reseñar era la <mask> arterial."
- text: "Las radiologías óseas de cuerpo entero no detectan alteraciones <mask>, ni alteraciones vertebrales."
- text: "En el <mask> toraco-abdómino-pélvico no se encontraron hallazgos patológicos de interés."
---

# Biomedical-clinical language model for Spanish

Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official [repository](https://github.com/PlanTL-SANIDAD/lm-biomedical-clinical-es) and read our [preprint](https://arxiv.org/abs/2109.03570) "_Carrino, C. P., Armengol-Estapé, J., Gutiérrez-Fandiño, A., Llop-Palao, J., Pàmies, M., Gonzalez-Agirre, A., & Villegas, M. (2021). Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario._".

## Tokenization and model pretraining

This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a **biomedical-clinical** corpus in Spanish collected from several sources (see the next section). The training corpus has been tokenized using the byte-level version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model, with a vocabulary size of 52,000 tokens. The pretraining consists of masked language model training at the subword level, following the approach of the RoBERTa base model and using the same hyperparameters as in the original work. Training lasted a total of 48 hours on 16 NVIDIA V100 GPUs with 16GB of RAM each, using the Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. A minimal sketch of the tokenizer-training step is shown below.
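As an illustration (not the actual training script), the following sketch shows how a byte-level BPE tokenizer with these settings could be trained using the Hugging Face `tokenizers` library. The corpus filename and output directory are hypothetical; the vocabulary size (52,000), minimum frequency (10) and `lowercase: false` match the `args.json` shipped with this model:

```python
# Hypothetical sketch of the tokenizer-training step, based on the settings
# in args.json ("tokenizer": "bbpe", "vocab_size": 52000, "min_frequency": 10,
# "lowercase": false). The corpus file name is an assumption.
import os
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer(lowercase=False)
tokenizer.train(
    files=["biomedical-clinical.txt"],  # hypothetical corpus file
    vocab_size=52000,
    min_frequency=10,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)

# Writes vocab.json and merges.txt, analogous to the files in this repository
os.makedirs("tokenizer_output", exist_ok=True)
tokenizer.save_model("tokenizer_output")
```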

## Training corpora and preprocessing

The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers, and a real-world clinical corpus collected from more than 278K clinical documents and notes. To obtain a high-quality training corpus while retaining the idiosyncrasies of clinical language, a cleaning pipeline has been applied only to the biomedical corpora, keeping the clinical corpus uncleaned. Essentially, the cleaning operations are:

- data parsing in different formats
- sentence splitting
- language detection
- filtering of ill-formed sentences
- deduplication of repetitive contents
- keeping the original document boundaries

Then, the biomedical corpora are concatenated and a further global deduplication among the biomedical corpora has been applied (see the sketch below). Finally, the clinical corpus is concatenated to the cleaned biomedical corpus, resulting in a medium-sized biomedical-clinical corpus for Spanish composed of more than 1B tokens.
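For illustration only, here is a minimal sketch of what an exact, hash-based global deduplication could look like at the document level. The file names are hypothetical and the real pipeline is more elaborate:

```python
# Minimal sketch of exact, hash-based deduplication (one document per line).
# File names are hypothetical; the actual cleaning pipeline is more elaborate.
import hashlib

def unique_lines(lines):
    """Yield each line the first time its normalized hash is seen."""
    seen = set()
    for line in lines:
        digest = hashlib.sha1(line.strip().encode("utf-8")).digest()
        if digest not in seen:
            seen.add(digest)
            yield line

with open("biomedical_concatenated.txt", encoding="utf-8") as src, \
     open("biomedical_dedup.txt", "w", encoding="utf-8") as dst:
    dst.writelines(unique_lines(src))
```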

The table below shows some basic statistics of the individual cleaned corpora:

| Name | No. tokens | Description |
|------|-----------:|-------------|
| [Medical crawler](https://zenodo.org/record/4561970) | 745,705,946 | Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. |
| Clinical cases misc. | 102,855,267 | A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases; it is different from a clinical note or document. |
| Clinical notes/documents | 91,250,080 | Collection of more than 278K clinical documents, including discharge reports, clinical course notes and X-ray reports, for a total of 91M tokens. |
| [Scielo](https://github.com/PlanTL-SANIDAD/SciELO-Spain-Crawler) | 60,007,289 | Publications written in Spanish crawled from the Spanish SciELO server in 2017. |
| [BARR2_background](https://temu.bsc.es/BARR2/downloads/background_set.raw_text.tar.bz2) | 24,516,442 | Biomedical Abbreviation Recognition and Resolution (BARR2) background set, containing Spanish clinical case study sections from a variety of clinical disciplines. |
| Wikipedia_life_sciences | 13,890,501 | Wikipedia articles crawled on 04/01/2021 with the [Wikipedia API python library](https://pypi.org/project/Wikipedia-API/), starting from the "Ciencias\_de\_la\_vida" category and descending up to a maximum of 5 subcategories. Multiple links to the same article were discarded to avoid repeated content. |
| Patents | 13,463,387 | Google Patents in the medical domain for Spain (in Spanish). The accepted (medical-domain) codes for the patents' JSON files are: "A61B", "A61C", "A61F", "A61H", "A61K", "A61L", "A61M", "A61P". |
| [EMEA](http://opus.nlpl.eu/download.php?f=EMEA/v3/moses/en-es.txt.zip) | 5,377,448 | Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. |
| [mespen_Medline](https://zenodo.org/record/3562536#.YTt1fH2xXbR) | 4,166,077 | Spanish-side articles extracted from a collection of Spanish-English parallel corpora of biomedical scientific literature, aggregated from the MedlinePlus source. |
| PubMed | 1,858,966 | Open-access articles from the PubMed repository, crawled in 2017. |

## Evaluation and results

The model has been evaluated on Named Entity Recognition (NER) using the following datasets:

- [PharmaCoNER](https://zenodo.org/record/4270158): a track on chemical and drug mention recognition from Spanish medical texts (for more information see https://temu.bsc.es/pharmaconer/).

- [CANTEMIST](https://zenodo.org/record/3978041#.YTt5qH2xXbQ): a shared task specifically focused on named entity recognition of tumor morphology in Spanish (for more information see https://zenodo.org/record/3978041#.YTt5qH2xXbQ).

- ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables.

The evaluation results are compared against the [mBERT](https://huggingface.co/bert-base-multilingual-cased) and [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) models:

| F1 - Precision - Recall | roberta-base-biomedical-clinical-es | mBERT | BETO |
|---|---|---|---|
| PharmaCoNER | **90.04** - **88.92** - **91.18** | 87.46 - 86.50 - 88.46 | 88.18 - 87.12 - 89.28 |
| CANTEMIST | **83.34** - **81.48** - **85.30** | 82.61 - 81.12 - 84.15 | 82.42 - 80.91 - 84.00 |
| ICTUSnet | **88.08** - **84.92** - **91.50** | 86.75 - 83.53 - 90.23 | 85.95 - 83.10 - 89.02 |
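For reference, the figures above are entity-level NER metrics. As a hedged illustration (not the evaluation code used here), scores of this kind can be computed with the `seqeval` package; the label sequences below are invented:

```python
# Illustrative only: entity-level precision/recall/F1 as commonly used for
# NER evaluation, computed with the seqeval package on invented labels.
from seqeval.metrics import precision_score, recall_score, f1_score

y_true = [["B-DRUG", "I-DRUG", "O", "B-DISEASE"]]
y_pred = [["B-DRUG", "I-DRUG", "O", "O"]]

print(precision_score(y_true, y_pred))  # 1.0   (1 of 1 predicted entities correct)
print(recall_score(y_true, y_pred))     # 0.5   (1 of 2 gold entities found)
print(f1_score(y_true, y_pred))         # 0.666...
```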
## Intended uses & limitations

The model is ready to use only for masked language modelling, i.e. the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification; a minimal fine-tuning sketch follows.
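For example, a hedged sketch of loading this checkpoint for NER fine-tuning; the label count and training data are placeholders, not part of this repository:

```python
# Hypothetical sketch: loading the checkpoint for token-classification (NER)
# fine-tuning. num_labels and the training data are placeholders.
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("BSC-TeMU/roberta-base-biomedical-es")
model = AutoModelForTokenClassification.from_pretrained(
    "BSC-TeMU/roberta-base-biomedical-es",
    num_labels=5,  # placeholder: size of your NER tag set
)
# From here, train on your own labelled data, e.g. with transformers.Trainer.
```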
## Cite

If you use our models, please cite our latest preprint:

```bibtex
@misc{carrino2021biomedical,
      title={Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario},
      author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Asier Gutiérrez-Fandiño and Joan Llop-Palao and Marc Pàmies and Aitor Gonzalez-Agirre and Marta Villegas},
      year={2021},
      eprint={2109.03570},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

If you use our Medical Crawler corpus, please cite the preprint:

```bibtex
@misc{carrino2021spanish,
      title={Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish Biomedical Language Models},
      author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Ona de Gibert Bonet and Asier Gutiérrez-Fandiño and Aitor Gonzalez-Agirre and Martin Krallinger and Marta Villegas},
      year={2021},
      eprint={2109.07765},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

---

## How to use

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Load the tokenizer and the masked-language model
tokenizer = AutoTokenizer.from_pretrained("BSC-TeMU/roberta-base-biomedical-es")
model = AutoModelForMaskedLM.from_pretrained("BSC-TeMU/roberta-base-biomedical-es")

# Build a fill-mask pipeline reusing the objects above and query it
unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
unmasker("El único antecedente personal a reseñar era la <mask> arterial.")
```
```
# Output
[
  {
    "sequence": " El único antecedente personal a reseñar era la hipertensión arterial.",
    "score": 0.9855039715766907,
    "token": 3529,
    "token_str": " hipertensión"
  },
  {
    "sequence": " El único antecedente personal a reseñar era la diabetes arterial.",
    "score": 0.0039140828885138035,
    "token": 1945,
    "token_str": " diabetes"
  },
  {
    "sequence": " El único antecedente personal a reseñar era la hipotensión arterial.",
    "score": 0.002484665485098958,
    "token": 11483,
    "token_str": " hipotensión"
  },
  {
    "sequence": " El único antecedente personal a reseñar era la Hipertensión arterial.",
    "score": 0.0023484621196985245,
    "token": 12238,
    "token_str": " Hipertensión"
  },
  {
    "sequence": " El único antecedente personal a reseñar era la presión arterial.",
    "score": 0.0008009297889657319,
    "token": 2267,
    "token_str": " presión"
  }
]
```
args.json
ADDED
@@ -0,0 +1,24 @@
{
  "custom_vocab_files": [
    "/home/usuaris/veu/casimiro.pio.carrino/projects/corpus-utils-lm/corpora/bio/biomedical-clinical.txt"
  ],
  "vocab_name": "bio-biomedical-clinical-vocab-52k",
  "tokenizer": "bbpe",
  "lowercase": false,
  "vocab_size": 52000,
  "min_frequency": 10,
  "extra_tokens": [],
  "limit_alphabet": 1000,
  "no_show_progress": false,
  "strip_accents": false,
  "no_handle_chinese_chars": false,
  "no_clean_text": false,
  "reserve_tokens": 0,
  "use_tokenizers": false,
  "no_fairseq": false,
  "files": [
    "/home/usuaris/veu/casimiro.pio.carrino/projects/corpus-utils-lm/corpora/bio/biomedical-clinical.txt"
  ],
  "output_root_path": "/home/usuaris/veu/casimiro.pio.carrino/projects/corpus-utils-lm/output/model-ready_output/bio-biomedical-clinical-vocab-52k-2021-04-26-0955-3a71-240f",
  "commit_hash": "3a7116cf776527c411869becbe6fad8b9e3f5e56"
}
config.json
ADDED
@@ -0,0 +1,25 @@
{
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.4.0",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 52000
}
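As a quick sanity check (not part of the original commit), this configuration can be loaded with `transformers`; the local path stands for a hypothetical checkout of this repository containing the files above:

```python
# Sketch, assuming `transformers` is installed; "./" is a placeholder for a
# local directory containing the config.json shown above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("./")
print(config.model_type)         # roberta
print(config.vocab_size)         # 52000
print(config.num_hidden_layers)  # 12
```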
dict.txt
ADDED
The diff for this file is too large to render.
See raw diff
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
process.log
ADDED
@@ -0,0 +1,8 @@
Executing train_tokenizer.py
------------------------------
training bbpe tokenizer
Initialize an empty tokenizer
training
saving model tokenizer to /home/usuaris/veu/casimiro.pio.carrino/projects/corpus-utils-lm/output/model-ready_output/bio-biomedical-clinical-vocab-52k-2021-04-26-0955-3a71-240f/train_tokenizer_output/train-tokenizer-2021-04-26-1009-3a71-e9ca
saving pretrained to /home/usuaris/veu/casimiro.pio.carrino/projects/corpus-utils-lm/output/model-ready_output/bio-biomedical-clinical-vocab-52k-2021-04-26-0955-3a71-240f/train_tokenizer_output/train-tokenizer-2021-04-26-1009-3a71-e9ca
saving config to /home/usuaris/veu/casimiro.pio.carrino/projects/corpus-utils-lm/output/model-ready_output/bio-biomedical-clinical-vocab-52k-2021-04-26-0955-3a71-240f/train_tokenizer_output/train-tokenizer-2021-04-26-1009-3a71-e9ca
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2e0c8d7bb348e40327b7c38eb6995c74d5c64345bc4ab9b3deff58e7359f15f7
size 504420627
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true}}
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": true, "errors": "replace", "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "max_len": 512, "special_tokens_map_file": null, "name_or_path": "/home/usuaris/veu/casimiro.pio.carrino/projects/corpus-utils-lm/output/model-ready_output/bio-biomedical-clinical-vocab-52k-2021-04-26-0955-3a71-240f/train_tokenizer_output/train-tokenizer-2021-04-26-1009-3a71-e9ca"}
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff