{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:46:58.396480Z"
},
"title": "Named Entity Recognition in Historic Legal Text: A Transformer and State Machine Ensemble Method",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Trias",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hongming",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Jaume",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Stratos",
"middle": [],
"last": "Idreos",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Older legal texts are often scanned and digitized via Optical Character Recognition (OCR), which results in numerous errors. Although spelling and grammar checkers can correct much of the scanned text automatically, Named Entity Recognition (NER) is challenging, making correction of names difficult. To solve this, we developed an ensemble language model using a transformer neural network architecture combined with a finite state machine to extract names from English-language legal text. We use the USbased English language Harvard Caselaw Access Project for training and testing. Then, the extracted names are subjected to heuristic textual analysis to identify errors, make corrections, and quantify the extent of problems. With this system, we are able to extract most names, automatically correct numerous errors and identify potential mistakes that can later be reviewed for manual correction.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Older legal texts are often scanned and digitized via Optical Character Recognition (OCR), which results in numerous errors. Although spelling and grammar checkers can correct much of the scanned text automatically, Named Entity Recognition (NER) is challenging, making correction of names difficult. To solve this, we developed an ensemble language model using a transformer neural network architecture combined with a finite state machine to extract names from English-language legal text. We use the USbased English language Harvard Caselaw Access Project for training and testing. Then, the extracted names are subjected to heuristic textual analysis to identify errors, make corrections, and quantify the extent of problems. With this system, we are able to extract most names, automatically correct numerous errors and identify potential mistakes that can later be reviewed for manual correction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Examining historical legal texts offers insight into the development of legal thinking and the practice of law. In order to facilitate computer processing, older legal texts are typically scanned from paper, microfilm or other physical media and then converted to text using Optical Character Recognition (OCR), which introduces numerous errors. Many of these errors can be corrected automatically using spelling and grammar correcting systems. However, the names of people, places and other proper names cannot be corrected easily, making the study of lawyers, judges and other people unreliable (Hamdi et al., 2020) . One use of reliable names is inferring personal biases and connections that may affect outcomes (Clarke, 2018) . In order to address this problem, names need to be accurately identified in the text and then corrected and standardized in a process often called Named Entity Disambiguation or NED (Yamada et al., 2016) . Nonetheless, extracting accurate names is only part of the solution. In the future, organization names must also be extracted, and the respective roles must be identified.",
"cite_spans": [
{
"start": 597,
"end": 617,
"text": "(Hamdi et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 716,
"end": 730,
"text": "(Clarke, 2018)",
"ref_id": "BIBREF4"
},
{
"start": 915,
"end": 936,
"text": "(Yamada et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The process of computationally extracting names from text is more formally called Named Entity Recognition (NER) and has been a difficult problem for many decades (Nadeau and Sekine, 2007) . Furthermore, extracting names in legal text provides many domain-specific challenges (Bikel et al., 1999) .",
"cite_spans": [
{
"start": 163,
"end": 188,
"text": "(Nadeau and Sekine, 2007)",
"ref_id": "BIBREF14"
},
{
"start": 276,
"end": 296,
"text": "(Bikel et al., 1999)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes a two-pronged approach for extracting the names of lawyers arguing cases: (i) Extract the lawyer names from text using our ensemble model based on a neural network and a state machine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(ii) Identify and correct transcription errors to uniquely identify lawyers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our system for extracting the names of lawyers in legal text uses a transformer-based neural network (Vaswani et al., 2017) feeding a finite state machine. After extraction, the identified names are subjected to several heuristic rules to identify errors, misspelled names and name variations in order to attempt to uniquely identify the lawyers named. When errors cannot be corrected automatically, such as in names with alternative spellings, the extent of the errors is quantified.",
"cite_spans": [
{
"start": 101,
"end": 123,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to develop, train and test this system, we used legal cases from the Harvard Caselaw Access Project (Harvard University, 2018) . This project includes the complete text of decisions from United States courts dating back to the 1700s, with over 40 million pages of text spanning over 360 years. In our analysis, we only focused on cases from 1900 to 2010 in jurisdictions that were states as of 1900. Thus, Alaska and Hawaii were not considered. Because states and courts often have different reporting styles that have varied substantially over the years, we segmented most of our analysis by state and then by decade.",
"cite_spans": [
{
"start": 109,
"end": 135,
"text": "(Harvard University, 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In a typical case text in the United States, lawyers are only identified in the header section on the first page using a relatively standardized format, usually called the \"party names\". They are rarely mentioned by name in the decision text. A typical party names text would read as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "David P. Sutton, Asst. Corp. Counsel, with whom Charles T. Duncan, Corp. Counsel, Hubert B. Pair, Principal Asst. Corp. Counsel, and Richard W. Barton, Asst. Corp. Counsel, were on the brief, for appellant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "The text is usually a single complex sentence where all principal people, firms and their roles are identified in a mostly standardized and stylized format. Because of the sentence's complexity, the text is sometimes difficult for non-lawyers and automated systems to decipher. The parsing problem is compounded because the style standards and norms vary by location and over time. In addition to containing spelling and transcription errors, words and names are sometimes given as initials, nicknames or abbreviations. All these types of things confound automated systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "Thus, a solution to the problem can be divided into two parts: (i) extract the names from the text and (ii) standardize the name to identify individuals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "One solution to the the first part of extracting names is documented by Dozier et al. (2010) , who describe their work at Westlaw (now part of Thomson Reuters) in 2000 identifying entities in US case law using Bayesian modeling (Dozier and Haschart, 2000) . Their process extracts more than just names and involves parsing words in part by using a finite state machine that is specially tailored for each jurisdiction. More recently, Wang et al. (2020) propose a solution based on a neural network architecture that performs well across various domains, including legal text. In addition, Leitner et al. (2019) have developed a very promising system to perform NER in German legal texts that was built and trained on their own dataset (Leitner et al., 2020) . This dataset was also used by Bourgonje et al. (2020) in their NER work based on BERT.",
"cite_spans": [
{
"start": 72,
"end": 92,
"text": "Dozier et al. (2010)",
"ref_id": "BIBREF8"
},
{
"start": 228,
"end": 255,
"text": "(Dozier and Haschart, 2000)",
"ref_id": "BIBREF7"
},
{
"start": 434,
"end": 452,
"text": "Wang et al. (2020)",
"ref_id": "BIBREF24"
},
{
"start": 589,
"end": 610,
"text": "Leitner et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 735,
"end": 757,
"text": "(Leitner et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 790,
"end": 813,
"text": "Bourgonje et al. (2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "These approaches apply generically to the entire legal text and are not focused on the grammatically challenging party names text. In any case, lack of a similar dataset for English prevents us from trying these approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "Our system differs from previous attempts in that it is an ensemble composed of a transformerbased neural network and a state machine rather than a single architecture. The state machine allows the inclusion of pre-established knowledge of the syntax and style of the named parties text, thereby increasing accuracy. This increased the accuracy by 10% compared with the state-of-the-art transformer-based FLERT model (Schweter and Akbik, 2021) .",
"cite_spans": [
{
"start": 417,
"end": 443,
"text": "(Schweter and Akbik, 2021)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "Contemporary texts are usually digitally encoded at creation and thus do not suffer from OCR-related errors. In addition, many anachronisms such as the practice of using just initials have been supplanted over time so that currently almost all lawyers use their full name and middle initial. Abbreviations of names, such as \"Geo.\" for \"George\", have also fallen out of favor. This simplifies the identification of names in contemporary texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Historical vs. Contemporary Names",
"sec_num": "3"
},
{
"text": "In addition, contemporary names can be cross referenced with a standardized list of names such as Westlaw, Bloomberg, Martindale-Hubbell and similar directories that contain an almost comprehensive list of lawyers that can be used to uniquely identify individuals in the United States. However, such lists are subject to licensing restrictions and they have limited data for lawyers from the distant past.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Historical vs. Contemporary Names",
"sec_num": "3"
},
{
"text": "Another consideration is that the same name can be used at different times and in different places to refer to different people. Thus, a name is generally only unique for a specific time and place. A related problem is when the same name is used by a parent and child and only differentiated by the use of \"Sr.\", \"Jr.\", \"II\", or similar suffix, if at all. Yet another related problem is the use of an initial instead of a first name. This problem will be discussed in more detail in section 5.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Historical vs. Contemporary Names",
"sec_num": "3"
},
{
"text": "Our source code is available at https:// harvard-almit.github.io/legal-nlp. To summarize, we developed a new system that combines a transformer neural network with a finite state machine. In addition we trained an existing architecture with a subset of the Harvard Caselaw Access Project. These two were compared with the benchmark FLERT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture & Experiments",
"sec_num": "4"
},
{
"text": "\u2022 FLERT: Pre-trained general-purpose Englishlanguage NER model trained on CoNLL03.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture & Experiments",
"sec_num": "4"
},
{
"text": "\u2022 HCL-NER: Architecture based on FLERT but trained on Harvard Caselaw Access Project instead of CoNLL03.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture & Experiments",
"sec_num": "4"
},
{
"text": "\u2022 Ensemble: An ensemble model based on Flair's pre-trained PoS model and a custom state machine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture & Experiments",
"sec_num": "4"
},
{
"text": "In recent years, numerous neural network models have been developed that can be used to perfrom NER, including Stanford NER, CMU, Flair, ELMo, BERT and many others. When we chose our benchmark to compare with our work, BERT was one candidate because it has been used extensively in a legal context, particularly for text classification by Chalkidis et al. (2020) in LEGAL-BERT. On the other hand, Flair (Akbik et al., 2018) is an easy-to-use Python framework that includes many pre-trained models, and provides more flexibility to extend our work at a later time. In addition to a BERT model, it includes it's own transformerbased model specifically trained for NER, which will serve as our benchmark, and another model for Parts of Speech (POS), which will be used by the ensemble model. Flair's NER model is called FLERT (Schweter and Akbik, 2021) and uses GLoVe (Pennington et al., 2014) global vectors for word embedding along with a Tranformer architecture based on XLM-RoBERTa (Conneau et al., 2020) . Although FLERT performs well in ordinary text including the decision text, it does get tripped up by the party names text in the headers of legal decisions that name lawyers. To see why, consider the following example:",
"cite_spans": [
{
"start": 339,
"end": 362,
"text": "Chalkidis et al. (2020)",
"ref_id": "BIBREF3"
},
{
"start": 403,
"end": 423,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 823,
"end": 849,
"text": "(Schweter and Akbik, 2021)",
"ref_id": "BIBREF19"
},
{
"start": 865,
"end": 890,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 983,
"end": 1005,
"text": "(Conneau et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FLERT & Transformers",
"sec_num": "4.1"
},
{
"text": "FLERT will return \"Thomas A\" and \"Charles Roach\". It misses McHarg's last name, \"Victor E. Keyes\" and \"Bently M. McMullin\". Thus, even ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FLERT & Transformers",
"sec_num": "4.1"
},
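{
"text": "As a minimal illustration (a sketch only, not our evaluation harness; the model name 'ner-large' and the shortened party-names sentence are assumptions for illustration), Flair's pre-trained NER tagger can be queried as follows:\n\nfrom flair.data import Sentence\nfrom flair.models import SequenceTagger\n\n# Load a pre-trained Flair English NER model (a FLERT-style transformer tagger).\ntagger = SequenceTagger.load('ner-large')\n\n# A shortened party-names sentence of the kind found in case headers.\nsentence = Sentence('David P. Sutton, Asst. Corp. Counsel, with whom Charles T. Duncan, Corp. Counsel, were on the brief, for appellant.')\n\ntagger.predict(sentence)\n# Print every entity span (PER, ORG, LOC) that the tagger found.\nfor span in sentence.get_spans('ner'):\n    print(span)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FLERT & Transformers",
"sec_num": "4.1"
},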
{
"text": "We hypothesized that one reason that FLERT does not perform well is that it is trained using the CoNLL03 dataset (Tjong Kim Sang and De Meulder, 2003) , which consists of Reuters news stories from 1996 to 1997. Because the style of this training text is different from the style of text we aim to parse, we decided to train a new model with an identical architecture to FLERT, but using the Harvard Caselaw dataset instead of CoNLL03. We call this model HCL-NER. Because it duplicates the FLERT architecture, HCL-NER employs a multilingual XLM-RoBERTa (XLM-R) transformerbased model and GLoVe embeddings. The training, test and validation data comprised of 1000 cases selected randomly from the Harvard Caselaw Access Project. Of the 1000 cases, 100 were be reserved for development, 300 for training, 300 for testing and 300 for final validation of the model. These cases were parsed using an early version of the ensemble model described below and then manually reviewed and corrected. Three tags were used for tagging:",
"cite_spans": [
{
"start": 113,
"end": 150,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HCL-NER Model",
"sec_num": "4.2"
},
{
"text": "\u2022 LOC: A location, such as a city or state",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HCL-NER Model",
"sec_num": "4.2"
},
{
"text": "\u2022 ORG: An organization, company or law firm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HCL-NER Model",
"sec_num": "4.2"
},
{
"text": "\u2022 PER: A person",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HCL-NER Model",
"sec_num": "4.2"
},
{
"text": "Like FLERT, the model was trained using Stochastic Gradient Descent (SGD) for 150 epochs with the objective of maximizing the F 1 score (Sasaki, 2007) . As will be shown later, this model performed slightly better than FLERT but did not perform as well as the ensemble model described below. We suspect that using a larger number of test cases in the future may improve performance.",
"cite_spans": [
{
"start": 136,
"end": 150,
"text": "(Sasaki, 2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HCL-NER Model",
"sec_num": "4.2"
},
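{
"text": "A minimal training sketch under these settings, assuming the annotated cases have been exported to CoNLL-style column files; the file paths, hidden size and optimizer settings other than the 150 epochs are illustrative, and accessor names vary slightly across Flair versions:\n\nfrom flair.datasets import ColumnCorpus\nfrom flair.embeddings import TransformerWordEmbeddings\nfrom flair.models import SequenceTagger\nfrom flair.trainers import ModelTrainer\n\n# Annotated Caselaw paragraphs in two-column (token, tag) format with LOC/ORG/PER labels.\ncorpus = ColumnCorpus('data/hcl', {0: 'text', 1: 'ner'}, train_file='train.txt', dev_file='dev.txt', test_file='test.txt')\ntag_dict = corpus.make_label_dictionary(label_type='ner')\n\n# XLM-RoBERTa word embeddings, mirroring the FLERT configuration.\nembeddings = TransformerWordEmbeddings('xlm-roberta-large', fine_tune=True)\ntagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dict, tag_type='ner')\n\n# Flair's ModelTrainer uses SGD by default; the best epoch is chosen by dev-set F1.\ntrainer = ModelTrainer(tagger, corpus)\ntrainer.train('models/hcl-ner', learning_rate=0.1, mini_batch_size=32, max_epochs=150)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HCL-NER Model",
"sec_num": "4.2"
},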
{
"text": "The results of the HCL-NER model validation for each tag for are summarized in table 1. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HCL-NER Model",
"sec_num": "4.2"
},
{
"text": "In addition to the HCL-NER model, we created an ensemble model that uses a pre-trained transformer to parse a sentence in order to identify Parts of Speech (PoS). That output is fed to a custom finite state machine that will represent knowledge of the writing style in order to identify the people, firms and roles of the individuals. Figure 1 shows a high level diagram of the ensemble. Although this paper focuses only on the names of people, the framework for identifying the firms and roles is also included, but not tested. This model performed the best of the various models tested.",
"cite_spans": [],
"ref_spans": [
{
"start": 335,
"end": 343,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.3"
},
{
"text": "The ensemble model consists of three distinct stages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.3"
},
{
"text": "(i) Tokenize the text into distinct symbols and words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.3"
},
{
"text": "(ii) Tag the tokens with the Part of Speech (PoS).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.3"
},
{
"text": "(iii) Pass the PoS and tokens to the state machine to extract items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.3"
},
{
"text": "Tokenization is performed using Flair's default tokenizer, which is based on the 'segtok' library. PoS tagging is performed with Flair's standard PoS model, which is based on Long Short-Term Memory (LSTM) for sequence tagging along with a Conditional Random Field (CRF) layer (LSTM-CRF) (Huang et al., 2015) . It identifies a number of different parts of speech. Of these, we are interested in:",
"cite_spans": [
{
"start": 287,
"end": 307,
"text": "(Huang et al., 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.3"
},
{
"text": "\u2022 NN: Noun, singular or mass; in addition, Flair provides finer grain identification, such as NNP for a proper noun, NNPS for a plural proper noun, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.3"
},
{
"text": "\u2022 ',' and '.': Punctuation symbols.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.3"
},
{
"text": "\u2022 IN: Preposition or subordinating conjunction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.3"
},
{
"text": "\u2022 CC: Coordinating conjunction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.3"
},
{
"text": "\u2022 Other: Other PoS elements are ignored for now, but they may become useful if the state machine is extended to identify the person's role and other data included in the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.3"
},
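{
"text": "A minimal sketch of the first two stages (the model name 'pos' and the example sentence are illustrative, and the label accessor differs slightly between Flair versions); the resulting (token, tag) pairs are what the state machine consumes:\n\nfrom flair.data import Sentence\nfrom flair.models import SequenceTagger\n\n# Flair's standard PoS tagger; segtok-based tokenization is applied by default.\npos_tagger = SequenceTagger.load('pos')\n\nsentence = Sentence('Richard W. Barton, Asst. Corp. Counsel, for appellant.')\npos_tagger.predict(sentence)\n\n# Emit (token text, PoS tag) pairs for the state machine.\npairs = [(token.text, token.get_label('pos').value) for token in sentence]\nprint(pairs)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.3"
},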
{
"text": "In it's simplest form, the state machine transitions from one state to another based on the current state, the next token PoS, the token text (in the case of \"Mr.'', \"Hon.\", etc.). However, there are a few cases where transitions will also look ahead several tokens to disambiguate particular esoteric cases. When the state transitions, it returns the text up to the transition marked with the state before the transition. The table 2 summarizes the transitions (including look-ahead exceptions in the notes). In the table, transitions are processed from top to bottom.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Machine",
"sec_num": "4.4"
},
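{
"text": "The toy sketch below illustrates the mechanism only (it is not the transition table of table 2; the title-word set and the two-token minimum are invented for the example): it accumulates proper-noun tokens into a name and emits the name when the PoS stream leaves it.\n\n# Tokens tagged NNP that never form part of a person's name (illustrative subset).\nTITLE_WORDS = {'Mr.', 'Hon.', 'Asst.', 'Atty.', 'Gen.', 'Corp.', 'Counsel'}\n\ndef extract_names(pairs):\n    names, current = [], []\n    for token, pos in pairs:\n        if pos == 'NNP' and token not in TITLE_WORDS:\n            current.append(token)  # stay in the 'inside a name' state\n        else:\n            if len(current) >= 2:  # require at least two name tokens\n                names.append(' '.join(current))\n            current = []  # fall back to the scanning state\n    if len(current) >= 2:\n        names.append(' '.join(current))\n    return names\n\n# Feed the (token, PoS) pairs produced by the tagger above; prints ['Richard W. Barton'].\nprint(extract_names([('Richard', 'NNP'), ('W.', 'NNP'), ('Barton', 'NNP'), (',', ','), ('Asst.', 'NNP'), ('Corp.', 'NNP'), ('Counsel', 'NNP'), (',', ','), ('for', 'IN'), ('appellant', 'NN'), ('.', '.')]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Machine",
"sec_num": "4.4"
},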
{
"text": "Listed below are the three types of abbreviations that concern us. Each of these will be handled separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abbreviations & Nicknames",
"sec_num": "4.5"
},
{
"text": "\u2022 Use of an initial instead of first name",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abbreviations & Nicknames",
"sec_num": "4.5"
},
{
"text": "\u2022 Abbreviations of words and titles (such as \"Asst.\", \"Atty.\", etc.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abbreviations & Nicknames",
"sec_num": "4.5"
},
{
"text": "\u2022 Abbreviations of names (such as \"Geo.\", \"Thos.\". etc.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abbreviations & Nicknames",
"sec_num": "4.5"
},
{
"text": "The simplest of these are the last two: the abbreviations of names, words and titles. These are identified by a period. When encountering a period, the previous word is then looked up in a table of known and typical abbreviations. If it is found, then the period is ignored since it does not mark the end of a sentence and the abbreviation is substituted with the corresponding word or name. Many databases of abbreviations exist for this purpose. In our example implementation, we used the list from Wiktionary (Wikitionary, 2021) .",
"cite_spans": [
{
"start": 512,
"end": 531,
"text": "(Wikitionary, 2021)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Abbreviations & Nicknames",
"sec_num": "4.5"
},
{
"text": "Nicknames are identified similarly to abbreviations. Once a name has been identified, the first name is looked up in a database of known basic nicknames and the formal name is substituted. In our implementation, the list from Northern and Nelson (2011) is used. However, care is required because in some cases, a nickname can be the actual name or can be used to differentiate two people with the same name. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abbreviations & Nicknames",
"sec_num": "4.5"
},
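{
"text": "A minimal sketch of both substitutions, using tiny illustrative lookup tables in place of the full Wiktionary and Northern and Nelson lists:\n\n# Tiny illustrative subsets; the real lookup tables are far larger.\nNAME_ABBREVIATIONS = {'Geo.': 'George', 'Thos.': 'Thomas', 'Chas.': 'Charles', 'Wm.': 'William'}\nNICKNAMES = {'Bill': 'William', 'Bob': 'Robert', 'Dick': 'Richard'}\n\ndef normalize_name(name):\n    tokens = name.split()\n    # Expand abbreviated given names; unknown tokens pass through unchanged.\n    tokens = [NAME_ABBREVIATIONS.get(t, t) for t in tokens]\n    # Replace a known nickname in the first position with the formal name.\n    if tokens and tokens[0] in NICKNAMES:\n        tokens[0] = NICKNAMES[tokens[0]]\n    return ' '.join(tokens)\n\nprint(normalize_name('Geo. W. Barton'))  # George W. Barton\nprint(normalize_name('Bill Sutton'))  # William Sutton",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abbreviations & Nicknames",
"sec_num": "4.5"
},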
{
"text": "In this section, we evaluate the performance of our system regarding misspelled names and the use of initials instead of names and finally present a comparison analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Analysis",
"sec_num": "5"
},
{
"text": "For all models, after a name has been extracted, it must be evaluated for possible misspellings due to OCR, transcription or other errors. These accidental misspellings are difficult to distinguish from deliberate alternate spellings. For example,\"Clair\", \"Claire\", and \"Clare\" are all possible misspellings of the same name, or they could be deliberate alternate spellings referring to different names. In the past, it was not uncommon for people to use various spellings for the same name depending on local conventions. For this reason, correcting for spelling mistakes in names is tricky and may produce more errors than it fixes if not done carefully. The problem of automatically adjusting misspelled names has been researched since at least the 1920s, beginning with Soundex, a system for encoding words phonetically. Famously, in the 1970s, the New York State Identification and Intelligence System (NYSIIS) (Silbert, 1970) attempted to solve this problem for criminal databases. Snae (2007) summarizes the results of many other name spelling matching systems, including NYSIIS. According to results from that paper, the Double Metaphone algorithm developed by Philips (Philips, 2000) for English seems to perform well in a variety of situations and especially with names.",
"cite_spans": [
{
"start": 916,
"end": 931,
"text": "(Silbert, 1970)",
"ref_id": "BIBREF20"
},
{
"start": 988,
"end": 999,
"text": "Snae (2007)",
"ref_id": "BIBREF21"
},
{
"start": 1177,
"end": 1192,
"text": "(Philips, 2000)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Misspelled Names",
"sec_num": "5.1"
},
{
"text": "The Double Metaphone algorithm works by creating a phonetic encoding of a word. To do so, it has a simplified alphabet of 16 consonants plus vowels. The algorithm is a sequence of steps that involve dropping and converting letters until the final encoding is achieved. For example, it begins by dropping duplicates and silent first letters using a few heuristic rules. Thus, for the first step, \"written\" becomes \"riten\". A number of subsequent steps convert letter combinations with similar sounds to the same code. In this way, \"sack\" becomes \"sak\" and \"enough\" becomes \"enouf\". Finally, all vowels except the first are removed. Thus \"enough\" is finally encoded as \"enf\". In the example for \"Clair\", \"Claire\" and \"Clare\" in the paragraph above, all encodings would resolve to \"clr\" and the algorithm would determine that they are all potentially the same name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Misspelled Names",
"sec_num": "5.1"
},
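{
"text": "A minimal sketch of the encoding step, assuming the doublemetaphone function from the third-party 'metaphone' package on PyPI (any Double Metaphone implementation would serve, and the exact codes it returns may differ from the simplified ones above):\n\nfrom metaphone import doublemetaphone  # assumed third-party Double Metaphone implementation\n\nnames = ['Clair', 'Claire', 'Clare']\n# doublemetaphone returns a (primary, alternate) code pair; the primary code is used for matching.\ncodes = {name: doublemetaphone(name)[0] for name in names}\nprint(codes)\n\n# Differently spelled names that share a primary code are flagged as potentially the same name.\nprint('potential duplicates:', len(set(codes.values())) == 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Misspelled Names",
"sec_num": "5.1"
},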
{
"text": "We used the Double Metaphone to match names in the database in order to count the number of potential misspellings and yield an estimate of the 1900 1910 1920 1930 1940 1950 1960 1970 1980 1990 2000 2010 Year worst-case scenario. For each state and decade, we created a list of all unique names in the dataset. Then we evaluated the Double Metaphone encoding for all names in each list and counted the number of differently spelled names that evaluate to the same encoding, using that number to calculate a percent of total names that are potentially duplicates. The boxplot in Figure 2 shows the distribution of this percent for each state and decade, which shows that the percent of potential misspellings is declining over time. This is expected due to the adoption of entirely digital creation and storage of data and the degradation of older original paper sources that result in more errors for older material.",
"cite_spans": [
{
"start": 144,
"end": 203,
"text": "1900 1910 1920 1930 1940 1950 1960 1970 1980 1990 2000 2010",
"ref_id": null
}
],
"ref_spans": [
{
"start": 578,
"end": 586,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Misspelled Names",
"sec_num": "5.1"
},
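{
"text": "A minimal sketch of the counting step for a single state/decade bucket (the helper below is illustrative; the full analysis repeats this for every state and decade and plots the distribution shown in Figure 2):\n\nfrom collections import defaultdict\nfrom metaphone import doublemetaphone  # same assumed implementation as above\n\ndef percent_potential_duplicates(names):\n    unique_names = set(names)\n    # Group the unique names of one state/decade bucket by their primary phonetic code.\n    groups = defaultdict(set)\n    for name in unique_names:\n        groups[doublemetaphone(name)[0]].add(name)\n    # Count names whose code is shared with at least one differently spelled name.\n    duplicates = sum(len(group) for group in groups.values() if len(group) > 1)\n    return 100.0 * duplicates / max(len(unique_names), 1)\n\nprint(percent_potential_duplicates(['Clair Barton', 'Clare Barton', 'John Smith']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Misspelled Names",
"sec_num": "5.1"
},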
{
"text": "Another difficult problem is resolving the use of an initial as the first name. Although this was very common in some jurisdictions decades ago, the problem is almost non-existent today as shown by Table 3 . The table compares the use of initials in 1900 to the use of initials in 2010. Massachusetts, the worst offender in 1900 at 79%, now has approximately only 0.1% use of initials. The highest level of current usage of initials is Kansas at 0.7%. The problem with initials arises when they are ambiguous and could refer to one or more lawyers. Unfortunately, this problem is impossible to quantify without having a comprehensive list of lawyer names. However, a rough estimate of the extent of the problem can be calculated by looking at the names and seeing if the use of initials could refer to several lawyers whose full names are used elsewhere. In other words, we load up all the names and see if the names with only initials can match full names. Over time, different lawyers could in Even though this is not the complete picture, it is possible to estimate that the problem of using an initial for the first name may not be too serious when identifying unique lawyers.",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 205,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Use of Initials Instead of Names",
"sec_num": "5.2"
},
{
"text": "Model performance is compared using 10 random cases from each state, for a total of 490 cases. Each of these texts is reviewed manually to extract the names. First, in order to simplify evaluation and make it more explainable, the accuracy is measured as the percent of the test cases where the T P T P +F P ; Rec., Recall or sensitivity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.3"
},
{
"text": "model correctly identified all names. Next, recall and precision are calculated for all test cases. Table 5 summarizes the accuracy results. The ensemble model outperformed both FLERT trained on CoNLL03 and HCL-NER trained on a subset of the Harvard Caselaw Access Project. Table 6 summarizes the confusion matrices for all test cases, with the following measures:",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 275,
"end": 282,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.3"
},
{
"text": "True positives (TP): Count of names in the original text correctly identified by the method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.3"
},
{
"text": "False positives (FP): Count of identified names that were not valid names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.3"
},
{
"text": "False negatives (FN): Count of names in the original text that were not identified by the method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.3"
},
{
"text": "Recall (Rec.) or sensitivity: True positives divided by all real positives (true positives plus false negatives). This is the portion of actual names that are correctly identified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.3"
},
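{
"text": "For reference, these counts combine into the measures reported in table 6 in the standard way: $\\mathrm{Prec.} = \\frac{TP}{TP+FP}$ and $\\mathrm{Rec.} = \\frac{TP}{TP+FN}$. The $F_1$ score maximized when training HCL-NER (section 4.2) is their harmonic mean, $F_1 = \\frac{2 \\cdot \\mathrm{Prec.} \\cdot \\mathrm{Rec.}}{\\mathrm{Prec.} + \\mathrm{Rec.}}$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.3"
},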
{
"text": "Both recall and precision for the ensemble models are improved relative to the FLERT benchmark. Thus, the ensemble model is able to more accurately identify all the names in the text and is also less likely to misidentify names. Although more true positives were identified, the biggest gains were in substantially fewer false negatives and positives. In addition, even though the HCL-NER model was not trained with a large dataset, it also is an improvement over the FLERT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.3"
},
{
"text": "We propose an ensemble model based on a transformer neural network architecture and a state machine for extracting names from party names in US case law text. The ensemble model improved the accuracy by approximately 10% from the FLERT model. However, once names were extracted, the problem of correcting for errors, misspellings and ambiguities could be fully automated. A number of techniques were discussed that help to quantify the extent of the problem and identify potential data quality issues. Our analysis showed that the number of errors in some jurisdictions could exceed 15% in the decades before 1950, but that this number has been declining significantly over time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "We would like to thank the Harvard Caselaw Access Project for providing us with support and access to their database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Contextual string embeddings for sequence labeling",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2018,
"venue": "COLING 2018, 27th international conference on computational linguistics",
"volume": "",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In COLING 2018, 27th international con- ference on computational linguistics, pages 1638- 1649.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An Algorithm that Learns What's in a Name",
"authors": [
{
"first": "M",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Bikel",
"suffix": ""
},
{
"first": "Ralph",
"middle": [
"M"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Learning",
"volume": "34",
"issue": "",
"pages": "211--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel M Bikel, Richard Schwartz, and Ralph M Weischedel. 1999. An Algorithm that Learns What's in a Name. Machine Learning, 34:211-231.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic induction of named entity classes from legal text corpora",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Bourgonje",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Breit",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Khvalchik",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Mireles",
"suffix": ""
},
{
"first": "Julian",
"middle": [
"Moreno"
],
"last": "Schneider",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Revenko",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Rehm",
"suffix": ""
}
],
"year": 2003,
"venue": "ASLD 2020 -advances in semantics and linked data: Joint workshop proceedings from ISWC 2020. International workshop on artificial intelligence for legal documents (AI4LEGAL-2020)",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Bourgonje, Anna Breit, Maria Khvalchik, Victor Mireles, Julian Moreno Schneider, Artem Revenko, and Georg Rehm. 2020. Automatic induction of named entity classes from legal text corpora. In ASLD 2020 -advances in semantics and linked data: Joint workshop proceedings from ISWC 2020. In- ternational workshop on artificial intelligence for legal documents (AI4LEGAL-2020), november 2-3, Athens/Virtual, greece, pages 1-11. CEUR Work- shop Proceedings.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos",
"authors": [
{
"first": "Ilias",
"middle": [],
"last": "Chalkidis",
"suffix": ""
},
{
"first": "Manos",
"middle": [],
"last": "Fergadiotis",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "2898--2904",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.261"
]
},
"num": null,
"urls": [],
"raw_text": "Ilias Chalkidis, Manos Fergadiotis, Prodromos Malaka- siotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: The Muppets straight out of Law School. In Findings of the Association for Com- putational Linguistics: EMNLP 2020, pages 2898- 2904, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Explicit Bias",
"authors": [
{
"first": "Jessica",
"middle": [
"A"
],
"last": "Clarke",
"suffix": ""
}
],
"year": 2018,
"venue": "Northwestern University Law Review",
"volume": "113",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jessica A. Clarke. 2018. Explicit Bias. Northwestern University Law Review, 113:505.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unsupervised Cross-lingual Representation Learning at Scale",
"authors": [
{
"first": "",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised Cross-lingual Representation Learning at Scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic Extraction and Linking of Person. Recherche dInformation Assistee parOrdinateur",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Dozier",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Haschart",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Dozier and Robert Haschart. 2000. Auto- matic Extraction and Linking of Person. Recherche dInformation Assistee parOrdinateur, page 18.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Named entity recognition and resolution in legal text",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Dozier",
"suffix": ""
},
{
"first": "Ravikumar",
"middle": [],
"last": "Kondadadi",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Light",
"suffix": ""
},
{
"first": "Arun",
"middle": [],
"last": "Vachher",
"suffix": ""
},
{
"first": "Sriharsha",
"middle": [],
"last": "Veeramachaneni",
"suffix": ""
},
{
"first": "Ramdev",
"middle": [],
"last": "Wudali",
"suffix": ""
}
],
"year": 2010,
"venue": "Semantic processing of legal texts",
"volume": "",
"issue": "",
"pages": "27--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Dozier, Ravikumar Kondadadi, Marc Light, Arun Vachher, Sriharsha Veeramachaneni, and Ramdev Wudali. 2010. Named entity recogni- tion and resolution in legal text. In Semantic pro- cessing of legal texts, pages 27-43. Springer.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Assessing and Minimizing the Impact of OCR Quality on Named Entity Recognition",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Hamdi",
"suffix": ""
},
{
"first": "Axel",
"middle": [],
"last": "Jean-Caurant",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Sid\u00e8re",
"suffix": ""
},
{
"first": "Micka\u00ebl",
"middle": [],
"last": "Coustaty",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Doucet",
"suffix": ""
}
],
"year": 2020,
"venue": "Digital Libraries for Open Knowledge",
"volume": "",
"issue": "",
"pages": "87--101",
"other_ids": {
"DOI": [
"10.1007/978-3-030-54956-5_7"
]
},
"num": null,
"urls": [],
"raw_text": "Ahmed Hamdi, Axel Jean-Caurant, Nicolas Sid\u00e8re, Micka\u00ebl Coustaty, and Antoine Doucet. 2020. As- sessing and Minimizing the Impact of OCR Qual- ity on Named Entity Recognition. In Digital Li- braries for Open Knowledge, Lecture Notes in Com- puter Science, pages 87-101, Cham. Springer Inter- national Publishing.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bidirectional LSTM-CRF Models for Sequence Tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01991[cs].ArXiv:1508.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidi- rectional LSTM-CRF Models for Sequence Tagging. arXiv:1508.01991 [cs]. ArXiv: 1508.01991.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Fine-Grained Named Entity Recognition in Legal Documents",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Leitner",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Rehm",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Moreno-Schneider",
"suffix": ""
}
],
"year": 2019,
"venue": "Semantic Systems. The Power of AI and Knowledge Graphs",
"volume": "11702",
"issue": "",
"pages": "272--287",
"other_ids": {
"DOI": [
"10.1007/978-3-030-33220-4_20"
]
},
"num": null,
"urls": [],
"raw_text": "Elena Leitner, Georg Rehm, and Julian Moreno- Schneider. 2019. Fine-Grained Named En- tity Recognition in Legal Documents. In Maribel Acosta, Philippe Cudr\u00e9-Mauroux, Maria Maleshkova, Tassilo Pellegrini, Harald Sack, and York Sure-Vetter, editors, Semantic Systems. The Power of AI and Knowledge Graphs, volume 11702, pages 272-287. Springer International Publishing, Cham. Series Title: Lecture Notes in Computer Sci- ence.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Dataset of German Legal Documents for Named Entity Recognition",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Leitner",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Rehm",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Moreno-Schneider",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4478--4485",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Leitner, Georg Rehm, and Julian Moreno- Schneider. 2020. A Dataset of German Legal Doc- uments for Named Entity Recognition. In Proceed- ings of the 12th Language Resources and Evaluation Conference, pages 4478-4485, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Survey of Named Entity Recognition and Classification",
"authors": [
{
"first": "David",
"middle": [],
"last": "Nadeau",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2007,
"venue": "Lingvisticae Investigationes",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1075/li.30.1.03nad"
]
},
"num": null,
"urls": [],
"raw_text": "David Nadeau and Satoshi Sekine. 2007. A Sur- vey of Named Entity Recognition and Classification. Lingvisticae Investigationes, 30.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An unsupervised approach to discovering and disambiguating social media profiles",
"authors": [
{
"first": "T",
"middle": [],
"last": "Carlton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Northern",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Michael L Nelson",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of mining data semantics workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlton T Northern and Michael L Nelson. 2011. An unsupervised approach to discovering and disam- biguating social media profiles. In Proceedings of mining data semantics workshop.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Glove: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The double metaphone search algorithm. C/C++ Users Journal",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Philips",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "18",
"issue": "",
"pages": "38--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Philips. 2000. The double metaphone search algorithm. C/C++ Users Journal, 18(6):38-43.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The truth of the F-measure",
"authors": [
{
"first": "Yutaka",
"middle": [],
"last": "Sasaki",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yutaka Sasaki. 2007. The truth of the F-measure. Teach Tutor Mater.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "FLERT: Document-Level Features for Named Entity Recognition",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Schweter",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2011.06993[cs].ArXiv:2011.06993"
]
},
"num": null,
"urls": [],
"raw_text": "Stefan Schweter and Alan Akbik. 2021. FLERT: Document-Level Features for Named Entity Recog- nition. arXiv:2011.06993 [cs]. ArXiv: 2011.06993.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The World's First Computerized Criminal-Justice Information-Sharing System -The New York State Identification and Intelligence System (NYSIIS)",
"authors": [
{
"first": "Jeffrey",
"middle": [
"M"
],
"last": "Silbert",
"suffix": ""
}
],
"year": 1970,
"venue": "Criminology",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey M. Silbert. 1970. The World's First Computer- ized Criminal-Justice Information-Sharing System - The New York State Identification and Intelligence System (NYSIIS). Criminology, 8:107.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A Comparison and Analysis of Name Matching Algorithms",
"authors": [
{
"first": "Chakkrit",
"middle": [],
"last": "Snae",
"suffix": ""
}
],
"year": 2007,
"venue": "World Academy of Science, Engineering and Technology",
"volume": "25",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chakkrit Snae. 2007. A Comparison and Analysis of Name Matching Algorithms. World Academy of Sci- ence, Engineering and Technology, 25:6.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Introduction to the CoNLL-2003 shared task: language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003",
"volume": "4",
"issue": "",
"pages": "142--147",
"other_ids": {
"DOI": [
"10.3115/1119176.1119195"
]
},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: language-independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003 -Volume 4, CONLL '03, pages 142-147, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attention Is All You Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. Proceedings of the International Con- ference on Neural Information Processing Systems (NeurIPS).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Multi-Domain Named Entity Recognition with Genre-Aware and Agnostic Inference",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Preotiuc-Pietro",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8476--8488",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.750"
]
},
"num": null,
"urls": [],
"raw_text": "Jing Wang, Mayank Kulkarni, and Daniel Preotiuc- Pietro. 2020. Multi-Domain Named Entity Recogni- tion with Genre-Aware and Agnostic Inference. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8476- 8488, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Abbreviations for English given names. Retrieved on 30",
"authors": [
{
"first": "",
"middle": [],
"last": "Wikitionary",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wikitionary. 2021. Abbreviations for English given names. Retrieved on 30 June 2021 from https://en.wiktionary.org/wiki.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation",
"authors": [
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Hideaki",
"middle": [],
"last": "Takeda",
"suffix": ""
},
{
"first": "Yoshiyasu",
"middle": [],
"last": "Takefuji",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1601.01343[cs].ArXiv:1601.01343"
]
},
"num": null,
"urls": [],
"raw_text": "Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint Learning of the Em- bedding of Words and Entities for Named Entity Disambiguation. arXiv:1601.01343 [cs]. ArXiv: 1601.01343.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Figure 1: Data pipeline for ensemble model",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Distribution of worst-case potentially misspelled names over time",
"num": null
},
"TABREF0": {
"text": "Validation results for HCL-NER model. though FLERT is tantalizing close, it is not sufficient if it cannot parse this relatively simple example.",
"content": "<table><tr><td>Tag</td><td colspan=\"2\">Precision Recall</td><td>F 1</td></tr><tr><td>LOC</td><td>0.9399</td><td colspan=\"2\">0.9434 0.9416</td></tr><tr><td>ORG</td><td>0.9014</td><td colspan=\"2\">0.8983 0.8998</td></tr><tr><td>PER</td><td>0.9600</td><td colspan=\"2\">0.9667 0.9634</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF2": {
"text": "State transitions for ensemble model, one token at a time.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"text": "Top 10 states by percent of lawyers using initials in 1900 vs. 2010",
"content": "<table><tr><td/><td colspan=\"2\">% Names</td></tr><tr><td>State</td><td colspan=\"2\">1900 2010</td></tr><tr><td colspan=\"2\">Massachusetts 79.0</td><td>0.1</td></tr><tr><td>Indiana</td><td>56.3</td><td>0.3</td></tr><tr><td>Maine</td><td>54.5</td><td>0.2</td></tr><tr><td>Georgia</td><td>48.1</td><td>0.4</td></tr><tr><td>Kansas</td><td>46.0</td><td>0.7</td></tr><tr><td>Arkansas</td><td>45.0</td><td>0.7</td></tr><tr><td>New Mexico</td><td>44.4</td><td>0.3</td></tr><tr><td>Vermont</td><td>44.0</td><td>0.3</td></tr><tr><td>Arizona</td><td>43.5</td><td>0.4</td></tr><tr><td>Wyoming</td><td>43.3</td><td>0.2</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"text": "",
"content": "<table><tr><td colspan=\"5\">: Worst 10 states from 1900 to 2010 for using</td></tr><tr><td colspan=\"5\">ambiguous initials shown as a percent of total initials</td></tr><tr><td>used.</td><td/><td/><td/><td/></tr><tr><td/><td/><td>Ambg.</td><td/><td>%</td></tr><tr><td>State</td><td>Year</td><td colspan=\"3\">init. Total Total</td></tr><tr><td>Oregon</td><td>1940</td><td>2</td><td>1992</td><td>0.1</td></tr><tr><td>Indiana</td><td>1900</td><td>8</td><td>8763</td><td>0.1</td></tr><tr><td>Louisiana</td><td>1930</td><td>5</td><td>5612</td><td>0.1</td></tr><tr><td>Miss.</td><td>1940</td><td>2</td><td>2496</td><td>0.1</td></tr><tr><td colspan=\"2\">N. Carolina 1910</td><td>3</td><td>3882</td><td>0.1</td></tr><tr><td>Conn.</td><td>1900</td><td>1</td><td>1374</td><td>0.1</td></tr><tr><td>Oregon</td><td>1930</td><td>2</td><td>2797</td><td>0.1</td></tr><tr><td>Montana</td><td>1940</td><td>1</td><td>1399</td><td>0.1</td></tr><tr><td>Utah</td><td>1920</td><td>1</td><td>1542</td><td>0.1</td></tr><tr><td>Texas</td><td>1930</td><td colspan=\"2\">9 14979</td><td>0.1</td></tr><tr><td colspan=\"5\">fact use the same initials as lawyers who practiced</td></tr><tr><td colspan=\"5\">before, despite being different people. To mitigate</td></tr><tr><td colspan=\"5\">this problem, the data was divided into states and</td></tr><tr><td colspan=\"5\">then decades. Thus the comparison was made for</td></tr><tr><td colspan=\"5\">every state and every decade. The results for the</td></tr><tr><td colspan=\"5\">10 worst state/decade combinations are shown in</td></tr><tr><td>table 4.</td><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF5": {
"text": "Simple accuracy comparison for the various model tested",
"content": "<table><tr><td>Method</td><td colspan=\"3\">Errors Total Accuracy</td></tr><tr><td>FLERT</td><td>154</td><td>490</td><td>68%</td></tr><tr><td>HCL-NER</td><td>132</td><td>490</td><td>73%</td></tr><tr><td>Ensemble</td><td/><td>490</td><td>78%</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF6": {
"text": "Confusion matrices for various models tested",
"content": "<table><tr><td>Method</td><td colspan=\"3\">TP FP FN Prec. Rec.</td></tr><tr><td>FLERT</td><td colspan=\"2\">622 81 159</td><td>0.88 0.80</td></tr><tr><td colspan=\"3\">HCL-NER 661 64 120</td><td>0.91 0.84</td></tr><tr><td colspan=\"2\">Ensemble 692 49</td><td>89</td><td>0.93 0.88</td></tr><tr><td colspan=\"4\">Columns are calculated as: TP, True positives; FP,</td></tr><tr><td colspan=\"4\">False positives; FN, False negatives; Prec., Precision:</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}