{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:14.634975Z"
},
"title": "Applying and Sharing pre-trained BERT-models for Named Entity Recognition and Classification in Swedish Electronic Patient Records",
"authors": [
{
"first": "Mila",
"middle": [],
"last": "Grancharova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stockholm University Kista",
"location": {
"country": "Sweden"
}
},
"email": ""
},
{
"first": "Hercules",
"middle": [],
"last": "Dalianis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stockholm University",
"location": {
"settlement": "Kista",
"country": "Sweden"
}
},
"email": "hercules@dsv.su.se"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "To be able to share the valuable information in electronic patient records (EPR) they first need to be de-identified in order to protect the privacy of their subjects. Named entity recognition and classification (NERC) is an important part of this process. In recent years, general-purpose language models pre-trained on large amounts of data, in particular BERT, have achieved state-of-the-art results in NERC, among other NLP tasks. So far, however, no attempts have been made at applying BERT for NERC on Swedish EPR data. This study attempts to fine-tune one Swedish BERT-model and one multilingual BERT-model for NERC on a Swedish EPR corpus. The aim is to assess the applicability of BERT-models for this task as well as to compare the two models in a domain-specific Swedish language task. With the Swedish model, recall of 0.9220 and precision of 0.9226 is achieved. This is an improvement over previous results on the same corpus since the high recall does not sacrifice precision. As the models also perform relatively well when fine-tuned with pseudonymised data, it is concluded that there is good potential in using this method in a shareable de-identification system for Swedish clinical text.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "To be able to share the valuable information in electronic patient records (EPR) they first need to be de-identified in order to protect the privacy of their subjects. Named entity recognition and classification (NERC) is an important part of this process. In recent years, general-purpose language models pre-trained on large amounts of data, in particular BERT, have achieved state-of-the-art results in NERC, among other NLP tasks. So far, however, no attempts have been made at applying BERT for NERC on Swedish EPR data. This study attempts to fine-tune one Swedish BERT-model and one multilingual BERT-model for NERC on a Swedish EPR corpus. The aim is to assess the applicability of BERT-models for this task as well as to compare the two models in a domain-specific Swedish language task. With the Swedish model, recall of 0.9220 and precision of 0.9226 is achieved. This is an improvement over previous results on the same corpus since the high recall does not sacrifice precision. As the models also perform relatively well when fine-tuned with pseudonymised data, it is concluded that there is good potential in using this method in a shareable de-identification system for Swedish clinical text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Electronic patient records (EPR), also called clinical text, contain valuable information about patients' symptoms, physicians' assessments, diagnoses, treatments and treatment outcomes. Advancements in natural language processing (NLP) and machine learning have made it possible to use large amounts of clinical text to assist physicians and medical researchers in detecting early symptoms of disorders, predicting adverse effects of treatments, etc., see Chapter 10 in (Dalianis, 2018) . However, clinical text contains information that can reveal the identity of patients and other mentioned individuals, so-called Protected Health Information (PHI). Methods have been developed to detect this information and obscure it in order to protect people's identities (Meystre et al., 2010; Stubbs et al., 2015) . One important note to make is that de-identified text cannot be guaranteed to be safe to release and must still be handled with great care. A good de-identification system can, however, help facilitate an efficient anonymisation process.",
"cite_spans": [
{
"start": 231,
"end": 236,
"text": "(NLP)",
"ref_id": null
},
{
"start": 470,
"end": 486,
"text": "(Dalianis, 2018)",
"ref_id": "BIBREF5"
},
{
"start": 763,
"end": 785,
"text": "(Meystre et al., 2010;",
"ref_id": "BIBREF15"
},
{
"start": 786,
"end": 806,
"text": "Stubbs et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study PHI refers only to the named entities which may reveal a person's identity, such as name, age and location. In this sense, detecting and identifying the PHI before obscuring it is a Named Entity Recognition and Classification (NERC) problem. When it comes to data-driven NERC, models based on recurrent neural networks (RNNs) and long short-term memory (LSTM) networks have been successfully used for several languages (L\u00ea et al., 2020; Lange et al., 2019) . In the last two years, however, transformer-based language models such as BERT have achieved state-of-the-art results in several NLP tasks on commonly used data sets (Devlin et al., 2019) .",
"cite_spans": [
{
"start": 433,
"end": 450,
"text": "(L\u00ea et al., 2020;",
"ref_id": "BIBREF12"
},
{
"start": 451,
"end": 470,
"text": "Lange et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 637,
"end": 658,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "BERT is a general-purpose language model developed by Devlin et al. (2019) . In essence, BERT is a neural network based on transformers. Transformers are a type of deep learning model designed to handle sequential data, such as natural language text. Since their introduction in 2017 (Vaswani et al., 2017) , transformers have been widely used across a variety of NLP tasks, not least on clinical text (Lewis et al., 2020) . The benefit of transformer-based models over previous architectures is that they do not require the sequential data to be processed in order, allowing for parallelization of the training process. This has made it possible to develop large pre-trained models such as BERT, which have been fitted on larger amounts of data than was previously feasible.",
"cite_spans": [
{
"start": 54,
"end": 74,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF8"
},
{
"start": 284,
"end": 306,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 402,
"end": 422,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the first BERT-model was released in 2018, several models with modified architecture and different data used in pre-training have been released, including the multilingual M-BERT 1 . M-BERT is pre-trained on texts in 104 languages, including Swedish. In 2019, the National Library of Sweden released a Swedish BERT model, KB-BERT 2 , pre-trained exclusively on Swedish texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To use a pre-trained BERT-model for a downstream task, it needs to be fine-tuned for that task. Both KB-BERT and M-BERT have shown success in the NERC task for Swedish when fine-tuned with the publicly available Stockholm-Ume\u00e5 Corpus consisting of Swedish texts from the 1990s (Malmsten et al., 2020) . To our knowledge, however, no previous attempt has been made at using these models for NERC in Swedish EPR data.",
"cite_spans": [
{
"start": 278,
"end": 301,
"text": "(Malmsten et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study, we attempt to improve NERC performance on Swedish electronic patient records by fine-tuning KB-BERT and M-BERT with domain-specific data. More specifically, our aim is to achieve high recall, which is a priority in the de-identification task, without sacrificing precision. A risk with de-identification methods based on machine learning is that a model trained on sensitive data could be re-engineered, revealing the data. In a BERT-model, there are no links between words in the vocabulary, making it infeasible to retrieve the patient records used for fine-tuning. However, due to names and other personal identifiers appearing in the model's vocabulary, there may be legal issues with releasing a model fine-tuned on patient records. Therefore, in an additional experiment, the models are fine-tuned using pseudonymised patient records to see how NERC performance on authentic records is affected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The outline of this paper is as follows. First, Section 2 presents some previous studies on NERC in clinical text and specifically previous results on the data set at hand. Then, Section 3 describes the data used in this study, gives some more detail on the two BERT models and goes through how the fine-tuning and evaluation are performed. Section 4 presents the results for both models. Finally, the results are discussed in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 M-BERT, https://github.com/google-research/bert 2 KB-BERT, https://github.com/Kungbib/swedish-bert-models",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are several publicly available BERT-models pre-trained specifically for the biomedical and clinical domains. In 2019, Lewis et al. (2020) released BioBERT 3 , a BERT-model pre-trained on PubMed articles as well as Wikipedia articles and books. The authors present an F1-score of approximately 0.87 on the commonly used i2b2 2010 data set for clinical text NERC. In a different 2019 project, (Peng et al., 2019) continued to pre-train the pre-trained BERT-model released by (Devlin et al., 2019) on PubMed abstracts and clinical notes. This model, named BlueBERT 4 , reaches an F1-score of approximately 0.77 on the i2b2 data set. The same year, (Alsentzer et al., 2019) released clinicalBERT 5 pre-trained on clinical texts but also specifically on discharge summaries. The combined Bio+Discharge Summary model reaches an F1-score of 0.88 on the i2b2 2010 data set. All of these models are only pre-trained on English texts.",
"cite_spans": [
{
"start": 124,
"end": 143,
"text": "Lewis et al. (2020)",
"ref_id": "BIBREF11"
},
{
"start": 399,
"end": 418,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 480,
"end": 501,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 654,
"end": 678,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Research",
"sec_num": "2"
},
{
"text": "For non-English clinical text NERC, some advancements were made in connection with the 2019 shared task MEDDOCAN, which consisted of performing NERC on Spanish electronic patient records with annotated PHI. In a submission to the contest, Mao and Liu (2019) used M-BERT, which is also pre-trained on Spanish text (Mao and Liu, 2019) , with a decoding CRF layer for token classification. They also applied some post-processing techniques, achieving an F1-score and recall of approximately 0.93.",
"cite_spans": [
{
"start": 235,
"end": 253,
"text": "Mao and Liu (2019)",
"ref_id": "BIBREF14"
},
{
"start": 309,
"end": 327,
"text": "(Mao and Liu, 2019",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Research",
"sec_num": "2"
},
{
"text": "When it comes to Swedish, several attempts have been made at performing NERC on the annotated data set of electronic patient records, the Stockholm EPR PHI Corpus. In one study by Berg and Dalianis (2020) the authors extended the annotated data set with data generated using a semi-supervised learning method, with the aim of increasing recall without sacrificing precision. The highest recall reported was 0.8920, at which point the precision was 0.9420. These results were achieved using a Conditional Random Field (CRF) model. Grancharova et al. (2020) managed to increase the recall to 0.9209 using the same model by under-sampling negative tokens, i.e. tokens not belonging to a PHI. However, this came at the cost of a significant decrease in precision, to 0.8819. Regarding the application of models trained on pseudonymised clinical data for NERC on authentic data, there is a study by Berg et al. (2019) where the authors achieved, at best, a recall of 0.5510 using an LSTM network. The experiment was repeated with a classic CRF and the recall decreased to 0.4983.",
"cite_spans": [
{
"start": 885,
"end": 903,
"text": "Berg et al. (2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Research",
"sec_num": "2"
},
{
"text": "This section describes the data, tools and methods used in this study. First, the EPR data set is described in Section 3.1. Then, Section 3.2 describes the BERT-models used and how they were fine-tuned. Lastly, Section 3.3 describes how the models were evaluated in a number of experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Methods",
"sec_num": "3"
},
{
"text": "The data used in this study is the Stockholm EPR PHI Corpus 6 . The Stockholm EPR PHI Corpus is part of the research infrastructure Health Bank -The Swedish Health Records Research Bank 7 . The Stockholm EPR PHI Corpus consists of 200,000 tokens with nine annotated PHI classes. See Table 1 for the classes and their distribution.",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 276,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "The annotation of the Stockholm EPR PHI Corpus is described in more detail in (Velupillai et al., 2009) . The data was refined in the first de-identification experiment described in (Dalianis and Velupillai, 2010) and has since been used in several studies. Figure 1 shows an example of a pseudonymised annotated record from the data set, followed by an English translation of the same record.",
"cite_spans": [
{
"start": 74,
"end": 99,
"text": "(Velupillai et al., 2009)",
"ref_id": "BIBREF20"
},
{
"start": 178,
"end": 209,
"text": "(Dalianis and Velupillai, 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 254,
"end": 262,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "When formatting the data for fine-tuning, tagged entities consisting of multiple words were split into separate tokens and tagged according to the BIOES standard. This means marking whether a positive token is at the beginning ('B'), at the end ('E') or inside ('I') of a named entity, or whether the token itself makes up a named entity ('S') (Reimers and Gurevych, 2017) . Negative tokens, i.e. tokens which are not part of a named entity, were marked 'O'.",
"cite_spans": [
{
"start": 331,
"end": 359,
"text": "(Reimers and Gurevych, 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "This section describes the methods used in this study. First, Section 3.2.1 gives more details on the two pre-trained BERT models used. Then, Section 3.2.2 describes how the models were fine-tuned. 6 This research has been approved by the Swedish Ethical Review Authority under permission no. 2019-05679. ",
"cite_spans": [
{
"start": 198,
"end": 199,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.2"
},
{
"text": "The BERT-models used in this study are the Swedish KB-BERT and the multilingual M-BERT. Both models implement the BERT-Base architecture, consisting of twelve layers with a hidden size of 768 and approximately 110 million parameters. KB-BERT was released by the National Library of Sweden in 2019 (Malmsten et al., 2020) . It is pre-trained on approximately 20 GB of digitized Swedish texts written between the years 1940 and 2019. The resources include news articles, legal text, social media posts and Swedish Wikipedia articles. This results in a vocabulary size of around 50,000 tokens. The model is cased, meaning that there are separate entries for tokens beginning with an upper case letter and tokens beginning with a lower case letter. Devlin et al. (2019) released a multilingual BERT model alongside the original English BERT model. The multilingual model used in this study, M-BERT, is the cased version of this model. It has been pre-trained on 104 languages, including Swedish. For each language, the training data consisted of Wikipedia articles written in that language. To balance the data, high-resource languages were under-sampled while low-resource languages were over-sampled using exponentially smoothed weighting of the data. M-BERT has a vocabulary size of around 120,000 tokens.",
"cite_spans": [
{
"start": 280,
"end": 303,
"text": "(Malmsten et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 728,
"end": 748,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT models",
"sec_num": "3.2.1"
},
{
"text": "The pre-trained BERT models provide a general representation, or encoding, of input data. To use the models for prediction or inference they need to be fine-tuned for a specific down-stream task. This involves adding an additional output layer and fitting the model with task-specific data. In this case, the down-stream task is NERC and the data used for fine-tuning is that described in Section 3.1. The pre-trained models were loaded and fine-tuned using the HuggingFace Transformers library (Wolf et al., 2020) . Both models were loaded with the library's BertForTokenClassification structure, which provides a linear output layer on top of the hidden-states output.",
"cite_spans": [
{
"start": 499,
"end": 518,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "3.2.2"
},
{
"text": "A challenge with fine-tuning BERT is hyper-parameter optimization. The model is sensitive to several parameters such as the number of epochs, batch size and learning rate. Devlin et al. (2019) found that for large data sets the hyper-parameters do not have a great impact on performance. On smaller data sets, the authors recommend performing some hyper-parameter optimization for the task at hand. Due to the size of the models, the time it takes to fine-tune them limits how many resources can be devoted to hyper-parameter optimization. In this study, the optimization is limited to a simple parameter search with its starting point at the values recommended by Devlin et al. (2019) .",
"cite_spans": [
{
"start": 167,
"end": 187,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF8"
},
{
"start": 669,
"end": 689,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "3.2.2"
},
{
"text": "This section presents the different experiments performed to generate the results presented in this paper. First, 20% of the original data, selected at random, was held out for testing. Out of the remaining data, 20% was reserved for development. The purpose of the development set was to evaluate different hyper-parameter settings. The remaining data, which we call the training set, was used for fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application of methods: Experiments",
"sec_num": "3.3"
},
{
"text": "In addition to the original training set, Stockholm EPR PHI Corpus, we created a version of the training set, Stockholm EPR PHI Pseudo Corpus, where the PHIs have been replaced by surrogates. We call this the pseudonymised training set 8 , or pseudo for short.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application of methods: Experiments",
"sec_num": "3.3"
},
{
"text": "The surrogate generation is lexical, based on the collection of Swedish named entity lists used in (Dalianis, 2019) . In this study, however, the variation of surrogate names is much larger, containing 123,000 female first names, 121,000 male first names and 35,000 last names, rather than only the 100 most common first and last names used in (Dalianis, 2019) .",
"cite_spans": [
{
"start": 99,
"end": 115,
"text": "(Dalianis, 2019)",
"ref_id": "BIBREF6"
},
{
"start": 344,
"end": 360,
"text": "(Dalianis, 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Application of methods: Experiments",
"sec_num": "3.3"
},
{
"text": "After fine-tuning on the pseudonymised training set, the models were evaluated on the original test set. The motivation behind these tests is that models trained on pseudonymised data are safer to release for further development by other researchers, without risking that the PHI is revealed. Therefore, it is of interest to see how well such models perform on authentic, not de-identified, patient records.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application of methods: Experiments",
"sec_num": "3.3"
},
{
"text": "For both KB-BERT and M-BERT, a search over hyper-parameters was performed. The batch size was set to 16 and the learning rate to 5 \u2022 10 \u22125 . When it comes to the number of epochs, the results differed slightly between the models. Figures 2 and 3 show the precision and recall for different numbers of epochs over the training set when fine-tuning KB-BERT and M-BERT, respectively. When choosing the number of epochs, most attention was paid to recall, as that is of highest priority in a de-identification system. For all models except one, recall either decreased or did not improve significantly after three epochs. Thus, the models were fine-tuned for three epochs. The exception was M-BERT fitted with the pseudonymised data, which was fine-tuned for four epochs. The precision was also monitored and it was observed, as the figures show, that precision continued to increase longer than recall. Since recall was prioritized and resources were limited, no experiments were made with training the models further.",
"cite_spans": [],
"ref_spans": [
{
"start": 230,
"end": 246,
"text": "Figures 2 and 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Application of methods: Experiments",
"sec_num": "3.3"
},
{
"text": "After the models were fine-tuned, they were evaluated on the original test set, namely the held-out data set, 20% of the Stockholm EPR PHI Corpus. We call this set test set A. In order to test how well the models perform on a broader range of EPR data, they were also evaluated on other medical specialities of the Swedish EPR Corpora from Health Bank. For the purpose of this report, this second test set is called test set B. 8 Generally, most research on clinical text is carried out on pseudonymised data, while most studies on Health Bank data have used real data. For the most part, test set B is annotated according to the same standard as the Stockholm EPR PHI Corpus but lacks the Organisation class, which is thus excluded from the evaluation on this test set. Further, test set B contains ages and dates but their annotation differs from those in the Stockholm EPR PHI Corpus. In order to minimize the error caused by different annotation standards, the classes Age, Date Part and Full Date are also excluded from evaluation.",
"cite_spans": [
{
"start": 401,
"end": 402,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Application of methods: Experiments",
"sec_num": "3.3"
},
{
"text": "The results presented in this section were achieved with the best hyper-parameter values found, see Section 3.3. Note that the hyper-parameter optimization was not exhaustive and this may have significant effects on the results. Table 2 shows the precision (P), recall (R) and F1-score for the two models fine-tuned with the original training set, as well as those fine-tuned on the pseudonymised training set, when evaluated on test set A. Table 3 : Precision, recall and F1-score of KB-BERT and M-BERT fine-tuned with the original training set and the pseudonymised training set respectively, and evaluated on test set B. Table 4 shows the recall per class for the models fine-tuned with the original training set and evaluated on test set A. Table 5 shows the corresponding results for test set B. In the same manner, Tables 6 and 7 show the recall per class for all the models fine-tuned with the pseudonymised training set and evaluated on test set A and test set B, respectively. Note that the averages in all tables are weighted based on the number of instances from each class present in the test set at hand. The number of instances per class is given in parentheses in the tables' first column.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 236,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 443,
"end": 450,
"text": "Table 3",
"ref_id": null
},
{
"start": 628,
"end": 635,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 749,
"end": 756,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 824,
"end": 841,
"text": ", Tables 6 and 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The results show that the fine-tuned KB-BERT achieves recall on the same level as that reported in (Grancharova et al., 2020) on the same data set, see Table 2 . In this study, however, the relatively high recall does not come at the price of low precision. The precision achieved using KB-BERT is on par with the highest recorded precision on Stockholm EPR PHI Corpus which was documented in . There, again, recall was below 0.9. Thus, the BERT-model seems to offer a good balance between precision and recall. From a pure de-identification perspective, high precision is not a priority. However, for the de-identified data to be of use to physicians and researchers, precision remains important. In this sense, the results presented in this paper can be considered an overall improvement of NERC on this data. Regarding the comparison between KB-BERT and M-BERT, the first achieves higher precision and recall on both test sets, see Tables 2 and 3 . The difference is more prevalent in some PHI classes than in others. For instance, the recall on Location drops significantly when using M-BERT compared to using KB-BERT. This suggests that pre-training specialized toward one language is more beneficial than broader pre-training. This is only a speculation since there are other differences between the two models that could affect performance on the task at hand, such as the nature and amount of Swedish texts used in pre-training.",
"cite_spans": [
{
"start": 99,
"end": 125,
"text": "(Grancharova et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 152,
"end": 159,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 935,
"end": 949,
"text": "Tables 2 and 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "5"
},
{
"text": "It is also worth mentioning that the difference in recall between the two models is small, averaging approximately 0.5 percentage points when fine-tuning on the original data and 1 percentage point when fine-tuning on the pseudonymised data. Since only a limited amount of time was spent on optimisation, it is possible that M-BERT could achieve results similar to KB-BERT if fine-tuned with better settings or more data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "5"
},
{
"text": "Tables 2 and 3 also show that the models finetuned on the original records perform better than those fine-tuned on the pseudonymised records. This is not surprising, as the surrogates have limited range compared to the authentic named entities. Tables 4 -7 show that, for instance, the recall on Age is more negatively affected by fine-tuning on pseudonymised records than the recall on First name and Last name. An explanation could be that the formats in which surrogate ages are given do not cover all formats present in the authentic records, resulting in greater discrepancies between the training set and the test set when fine-tuning with the pseudonymised records. The formats of names, on the other hand, are less varied in this domain.",
"cite_spans": [],
"ref_spans": [
{
"start": 245,
"end": 256,
"text": "Tables 4 -7",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "5"
},
{
"text": "Although the models fine-tuned with pseudonymised data perform worse overall, the differences between them and the same models fine-tuned with the original data are not huge. In some cases, such as Phone number in M-BERT, the pseudonymised model actually performs better, see Tables 4 and 6 . It is clear that the BERT-models are less sensitive to the discrepancies between the original and pseudonymised data than the CRF and LSTM models used on this data set previously, see Section 2 (Related Research) and (Berg et al., 2019) . This suggests that this method should be explored further for the purpose of being able to share models trained on electronic patient records while reducing the risks of breaching the privacy of patients or other individuals mentioned in the text.",
"cite_spans": [
{
"start": 508,
"end": 527,
"text": "(Berg et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 276,
"end": 290,
"text": "Tables 4 and 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "5"
},
{
"text": "A comparison between Table 2 and Table 3 demonstrates that there is a loss in recall and an even greater loss in precision when applying the models to data in a slightly broader domain. Differences in the annotation of the two test sets make a direct comparison difficult, but it is clear that the models have learned enough to generalize relatively well to a broader range of electronic patient records. Future work includes creating more annotated data for evaluation as well as training on a broader range of records in order to improve generalization.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 40,
"text": "Table 2 and Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "5"
},
{
"text": "In summary, this paper presents an improvement on previous results on the Stockholm EPR PHI Corpus in the sense that the same high recall is achieved without sacrificing precision. It is also demonstrated that performance is somewhat negatively affected by fine-tuning on pseudonymised electronic patient records, but the models still achieve relatively high recall. Due to the benefit of being able to share non-sensitive models while preserving the privacy of patients, this approach should be studied and developed further. The results also show that KB-BERT outperforms M-BERT overall, but both models perform relatively well. We cannot draw any firm conclusions about the limitations of the models due to the limited resources devoted to optimisation and the limited data used for fine-tuning. Future work includes optimising the models further and fine-tuning on a larger data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "5"
},
{
"text": "On a final note, even with a de-identification system with high recall, the de-identified data could be re-identified using external sources. Therefore, the de-identified data must be be handled with care. To improve the privacy where there could be some false negatives, thus missed PHI, one could remove the tags of the true positive so the false negatives are not distinguishable, performing what is known as HIPS (Hide In Plain Sight) (Carrell et al., 2013) .",
"cite_spans": [
{
"start": 439,
"end": 461,
"text": "(Carrell et al., 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "5"
},
{
"text": "BioBERT, https://github.com/dmis-lab/ biobert 4 BlueBERT, https://github.com/ncbi-nlp/ bluebert 5 clinicalBERT, https://github.com/ EmilyAlsentzer/clinicalBERT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to the DataLEASH project for funding this research work. Great thanks also to John Valik Karlsson, M.D., for the assistance with the translation of the patient record to English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Publicly Available Clinical BERT Embeddings. NAACL HLT 2019",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [
"R"
],
"last": "Murphy",
"suffix": ""
},
{
"first": "Willie",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "WA",
"middle": [],
"last": "Redmond",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"BA"
],
"last": "McDermott",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John R Murphy, Willie Boag, Wei- Hung Weng, Di Jin, Tristan Naumann, WA Red- mond, and Matthew BA McDermott. 2019. Publicly Available Clinical BERT Embeddings. NAACL HLT 2019, page 72.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Building a De-identification System for Real Swedish Clinical Text Using Pseudonymised Clinical Text",
"authors": [
{
"first": "Hanna",
"middle": [],
"last": "Berg",
"suffix": ""
},
{
"first": "Taridzo",
"middle": [],
"last": "Chomutare",
"suffix": ""
},
{
"first": "Hercules",
"middle": [],
"last": "Dalianis",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)",
"volume": "",
"issue": "",
"pages": "118--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanna Berg, Taridzo Chomutare, and Hercules Dalia- nis. 2019. Building a De-identification System for Real Swedish Clinical Text Using Pseudonymised Clinical Text. In Proceedings of the Tenth Interna- tional Workshop on Health Text Mining and Infor- mation Analysis (LOUHI 2019), pages 118-125.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semi-supervised Approach for De-identification of Swedish Clinical Text",
"authors": [],
"year": null,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "2020",
"issue": "",
"pages": "4444--4450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Semi-supervised Approach for De-identification of Swedish Clinical Text. In Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, pages 4444-4450.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Hiding in plain sight: use of realistic surrogates to reduce exposure of protected health information in clinical text",
"authors": [
{
"first": "David",
"middle": [],
"last": "Carrell",
"suffix": ""
},
{
"first": "Bradley",
"middle": [],
"last": "Malin",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bayer",
"suffix": ""
},
{
"first": "Cheryl",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Wellner",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of the American Medical Informatics Association",
"volume": "20",
"issue": "2",
"pages": "342--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Carrell, Bradley Malin, John Aberdeen, Samuel Bayer, Cheryl Clark, Ben Wellner, and Lynette Hirschman. 2013. Hiding in plain sight: use of realistic surrogates to reduce exposure of pro- tected health information in clinical text. Journal of the American Medical Informatics Association, 20(2):342-348.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Clinical text mining: Secondary use of electronic patient records",
"authors": [
{
"first": "Hercules",
"middle": [],
"last": "Dalianis",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hercules Dalianis. 2018. Clinical text mining: Sec- ondary use of electronic patient records. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Pseudonymisation of Swedish Electronic Patient Records Using a Rule-Based Approach",
"authors": [
{
"first": "Hercules",
"middle": [],
"last": "Dalianis",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on NLP and Pseudonymisation",
"volume": "",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hercules Dalianis. 2019. Pseudonymisation of Swedish Electronic Patient Records Using a Rule- Based Approach. In Proceedings of the Workshop on NLP and Pseudonymisation, pages 16-23, Turku, Finland. Link\u00f6ping Electronic Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deidentifying Swedish Clinical Text -Refinement of a Gold Standard and Experiments with Conditional Random Fields",
"authors": [
{
"first": "Hercules",
"middle": [],
"last": "Dalianis",
"suffix": ""
},
{
"first": "Sumithra",
"middle": [],
"last": "Velupillai",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Biomedical Semantics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hercules Dalianis and Sumithra Velupillai. 2010. De- identifying Swedish Clinical Text -Refinement of a Gold Standard and Experiments with Conditional Random Fields. Journal of Biomedical Semantics, 1:6.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BERT: Pretraining of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Ken- ton Lee, and Kristina Toutanova. 2019. http://arxiv.org/abs/1810.04805 BERT: Pre- training of Deep Bidirectional Transform- ers for Language Understanding. In arXiv, https://arxiv.org/abs/1810.04805.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improving Named Entity Recognition and Classification in Class Imbalanced Swedish Electronic Patient Records through Resampling",
"authors": [
{
"first": "Mila",
"middle": [],
"last": "Grancharova",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Berg",
"suffix": ""
},
{
"first": "Hercules",
"middle": [],
"last": "Dalianis",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of Eighth Swedish Language Technology Conference (SLTC) 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mila Grancharova, Hanna Berg, and Hercules Dalianis. 2020. Improving Named Entity Recognition and Classification in Class Imbalanced Swedish Elec- tronic Patient Records through Resampling. In Pro- ceedings of Eighth Swedish Language Technology Conference (SLTC) 2020, G\u00f6teborg, Sweden.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "NLNDE: The Neither-Language-Nor-Domain-Experts' Way of Spanish Medical Document De-Identification",
"authors": [
{
"first": "Lukas",
"middle": [],
"last": "Lange",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Jannik",
"middle": [],
"last": "Str\u00f6tgen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Iberian Languages Evaluation Forum",
"volume": "",
"issue": "",
"pages": "671--678",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lukas Lange, Heike Adel, and Jannik Str\u00f6tgen. 2019. NLNDE: The Neither-Language-Nor- Domain-Experts' Way of Spanish Medical Doc- ument De-Identification. In Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019)), pages 671-678.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Pretrained Language Models for Biomedical and Clinical Tasks: Understanding and Extending the State-of-the-Art",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "146--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Myle Ott, Jingfei Du, and Veselin Stoyanov. 2020. Pretrained Language Models for Biomedical and Clinical Tasks: Understanding and Extending the State-of-the-Art. In Proceedings of the 3rd Clinical Natural Language Processing Work- shop, pages 146-157.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "On the Vietnamese Name Entity Recognition: A Deep Learning Method Approach",
"authors": [
{
"first": "Ngoc",
"middle": [
"C"
],
"last": "L\u00ea",
"suffix": ""
},
{
"first": "Ngoc-Ye",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Anh-Duong",
"middle": [],
"last": "Trinh",
"suffix": ""
},
{
"first": "Hue",
"middle": [],
"last": "Vu",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Access",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ngoc C. L\u00ea, Ngoc-Ye Nguyen, Anh-Duong Trinh, and Hue Vu. 2020. On the Vietnamese Name Entity Recognition: A Deep Learning Method Approach. In IEEE Access.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Playing with Words at the National Library of Sweden -Making a Swedish BERT",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Malmsten",
"suffix": ""
},
{
"first": "Love",
"middle": [],
"last": "B\u00f6rjeson",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Haffenden",
"suffix": ""
}
],
"year": 2020,
"venue": "arXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Malmsten, Love B\u00f6rjeson, and Chris Haffenden. 2020. http://arxiv.org/abs/2007.01658 Playing with Words at the National Library of Sweden -Mak- ing a Swedish BERT. In arXiv, https://arxiv. org/abs/2007.01658.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hadoken: a BERT-CRF Model for Medical Document Anonymization",
"authors": [
{
"first": "Jihang",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Wanli",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Iberian Languages Evaluation Forum",
"volume": "",
"issue": "",
"pages": "720--726",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jihang Mao and Wanli Liu. 2019. Hadoken: a BERT- CRF Model for Medical Document Anonymization. In Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019)), pages 720-726.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic de-identification of textual documents in the electronic health record: a review of recent research",
"authors": [
{
"first": "Stephane",
"middle": [
"M"
],
"last": "Meystre",
"suffix": ""
},
{
"first": "F",
"middle": [
"Jeffrey"
],
"last": "Friedlin",
"suffix": ""
},
{
"first": "Brett",
"middle": [
"R"
],
"last": "South",
"suffix": ""
},
{
"first": "Shuying",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"H"
],
"last": "Samore",
"suffix": ""
}
],
"year": 2010,
"venue": "BMC medical research methodology",
"volume": "10",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephane M Meystre, F Jeffrey Friedlin, Brett R South, Shuying Shen, and Matthew H Samore. 2010. Au- tomatic de-identification of textual documents in the electronic health record: a review of recent research. BMC medical research methodology, 10(1):70.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Shankai",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. http://arxiv.org/abs/1906.05474 Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmark- ing Datasets. In arXiv, https://arxiv.org/ abs/1906.05474.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.06799"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2017. Opti- mal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks. arXiv preprint arXiv:1707.06799.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1",
"authors": [
{
"first": "Amber",
"middle": [],
"last": "Stubbs",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Kotfila",
"suffix": ""
},
{
"first": "\u00d6zlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Biomedical Informatics",
"volume": "58",
"issue": "",
"pages": "11--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amber Stubbs, Christopher Kotfila, and \u00d6zlem Uzuner. 2015. Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1. Journal of Biomedical Informatics, 58:S11-S19.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Developing a standard for de-identifying electronic patient records written in Swedish: precision, recall and F-measure in a manual and computerized annotation trial",
"authors": [
{
"first": "Sumithra",
"middle": [],
"last": "Velupillai",
"suffix": ""
},
{
"first": "Hercules",
"middle": [],
"last": "Dalianis",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Hassel",
"suffix": ""
},
{
"first": "Gunnar H",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2009,
"venue": "International Journal of Medical Informatics",
"volume": "78",
"issue": "12",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumithra Velupillai, Hercules Dalianis, Martin Hassel, and Gunnar H Nilsson. 2009. Developing a standard for de-identifying electronic patient records written in Swedish: precision, recall and F-measure in a manual and computerized annotation trial. Interna- tional Journal of Medical Informatics, 78(12):e19- e26.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Hugging-Face's Transformers: State-of-the-art Natural Language Processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Fun- towicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Can- wen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. http://arxiv.org/abs/2007.01658 Hugging- Face's Transformers: State-of-the-art Natural Lan- guage Processing. In arXiv, https://arxiv. org/abs/1910.03771.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Example of a pseudonymised electronic patient record in Swedish from Stockholm EPR PHI Corpus and its translation to English."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Precision on the development set after different number of epochs for all four models."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Recall on the development set after different number of epochs for all four models."
},
"TABREF1": {
"html": null,
"num": null,
"text": "The class distribution of Stockholm EPR PHI Corpus.",
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"html": null,
"num": null,
"text": "Table 3shows the corresponding scores for test set B.",
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Model Data</td><td>P</td><td>R</td><td>F 1</td></tr><tr><td>KB</td><td colspan=\"4\">Original 0.9226 0.9220 0.9223</td></tr><tr><td/><td>Pseudo</td><td colspan=\"3\">0.8827 0.8822 0.8824</td></tr><tr><td>M</td><td colspan=\"4\">Original 0.9051 0.8899 0.8974</td></tr><tr><td/><td>Pseudo</td><td colspan=\"3\">0.8602 0.8357 0.8478</td></tr></table>"
},
"TABREF4": {
"html": null,
"num": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"4\">: Precision, recall and F 1 -score of KB-</td></tr><tr><td colspan=\"5\">BERT and M-BERT fine-tuned with the original</td></tr><tr><td colspan=\"5\">training set and the pseudonymised training set re-</td></tr><tr><td colspan=\"4\">spectively, and evaluated on test set A.</td></tr><tr><td colspan=\"2\">Model Data</td><td>P</td><td>R</td><td>F 1</td></tr><tr><td>KB</td><td colspan=\"4\">Original 0.6923 0.7272 0.7093</td></tr><tr><td/><td>Pseudo</td><td colspan=\"3\">0.6427 0.7439 0.6896</td></tr><tr><td>M</td><td colspan=\"4\">Original 0.6494 0.6847 0.6666</td></tr><tr><td/><td>Pseudo</td><td colspan=\"3\">0.6398 0.6963 0.6669</td></tr></table>"
},
"TABREF6": {
"html": null,
"num": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">: Recall per class of the models fine-tuned</td></tr><tr><td colspan=\"3\">with the original training set and evaluated on test</td></tr><tr><td>set A.</td><td/><td/></tr><tr><td>Class (instances)</td><td colspan=\"2\">KB-BERT M-BERT</td></tr><tr><td>First Name(208)</td><td>0.7212</td><td>0.7596</td></tr><tr><td>Last Name (282)</td><td>0.7270</td><td>0.6915</td></tr><tr><td>Phone Number (22)</td><td>0.8636</td><td>0.7727</td></tr><tr><td>Health Care Unit (208)</td><td>0.7163</td><td>0.6394</td></tr><tr><td>Location (57)</td><td>0.7368</td><td>0.5088</td></tr><tr><td>Weighted average</td><td>0.7272</td><td>0.6847</td></tr></table>"
},
"TABREF7": {
"html": null,
"num": null,
"text": "Recall per class of the models fine-tuned with the original training set and evaluated on test set B.",
"type_str": "table",
"content": "<table><tr><td>Class (instances)</td><td colspan=\"2\">KB-BERT M-BERT</td></tr><tr><td>First Name (195)</td><td>0.9128</td><td>0.8564</td></tr><tr><td>Last Name (213)</td><td>0.9202</td><td>0.8638</td></tr><tr><td>Phone Number (21)</td><td>0.8095</td><td>0.9048</td></tr><tr><td>Age (9)</td><td>1.0000</td><td>0.8889</td></tr><tr><td>Full Date (83)</td><td>0.9398</td><td>0.8554</td></tr><tr><td>Date Part (131)</td><td>0.9695</td><td>0.9847</td></tr><tr><td>Health Care Unit (293)</td><td>0.8029</td><td>0.7577</td></tr><tr><td>Location (19)</td><td>0.6842</td><td>0.4737</td></tr><tr><td>Organisation (10)</td><td>0.6000</td><td>0.5000</td></tr><tr><td>Weighted average</td><td>0.8822</td><td>0.8357</td></tr></table>"
},
"TABREF8": {
"html": null,
"num": null,
"text": "Recall per class of the models fine-tuned with the pseudonymised version of the training set and evaluated on test set A.",
"type_str": "table",
"content": "<table/>"
},
"TABREF10": {
"html": null,
"num": null,
"text": "Recall per class of the models fine-tuned with the pseudonymised version of the training set and evaluated on test set B.",
"type_str": "table",
"content": "<table/>"
}
}
}
}