{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:47:08.081364Z"
},
"title": "Learning from Limited Labels for Long Legal Dialogue",
"authors": [
{
"first": "Jenny",
"middle": [],
"last": "Hong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "jennyhong@cs.stanford.edu"
},
{
"first": "Derek",
"middle": [],
"last": "Chong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "derekch@stanford.edu"
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "manning@cs.stanford.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We study attempting to achieve high accuracy information extraction of case factors from a challenging dataset of parole hearings, which, compared to other legal NLP datasets, has longer texts, with fewer labels. On this corpus, existing work directly applying pretrained neural models has failed to extract all but a few relatively basic items with little improvement over rule-based extraction. We address two challenges posed by existing work: training on long documents and reasoning over complex speech patterns. We use a similar approach to the two-step open-domain question answering approach by using a Reducer to extract relevant text segments and a Producer to generate both extractive answers and non-extractive classifications. In a context like ours, with limited labeled data, we show that a superior approach for strong performance within limited development time is to use a combination of a rule-based Reducer and a neural Producer. We study four representative tasks from the parole dataset. On all four, we improve extraction from the previous benchmark of 0.41-0.63 to 0.83-0.89 F1.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We study attempting to achieve high accuracy information extraction of case factors from a challenging dataset of parole hearings, which, compared to other legal NLP datasets, has longer texts, with fewer labels. On this corpus, existing work directly applying pretrained neural models has failed to extract all but a few relatively basic items with little improvement over rule-based extraction. We address two challenges posed by existing work: training on long documents and reasoning over complex speech patterns. We use a similar approach to the two-step open-domain question answering approach by using a Reducer to extract relevant text segments and a Producer to generate both extractive answers and non-extractive classifications. In a context like ours, with limited labeled data, we show that a superior approach for strong performance within limited development time is to use a combination of a rule-based Reducer and a neural Producer. We study four representative tasks from the parole dataset. On all four, we improve extraction from the previous benchmark of 0.41-0.63 to 0.83-0.89 F1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In many judicial processes such as legal hearings and criminal trials, decisions are made as a result of lengthy dialogues, in which case factors are discussed in great detail. To study such dialogues, scholars typically invest immense effort to hand label a small number of transcripts with some case factors; the factors are then used in downstream analysis. In most cases, the sheer length of transcribed conversational text all but prohibits any large-scale analysis of the process. Information extraction over dialogues can assist in identifying the underlying factors of a case from transcripts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The benefits of information extraction are twofold. Automating the extraction of case factors means that a historical legal analysis can now be comprehensive, containing all available transcripts, rather than being limited to the several dozen or hundred transcripts that a single researcher can label by hand. The second advantage is to open the door to counterdata applications in law (D'ignazio and Klein, 2020) . To date, most machine learning applications in the law have been predictive: given case factors up front, make a prediction of an outcome. In domains where case factors cannot or should not be known prior to the hearing, information extraction can produce case factors after a hearing, which enables machine learning to play an alternative role to the role of prediction, the role of oversight (Bell et al., 2021) . In our application, information extraction allows the public to audit the parole process, whose case records are otherwise locked away in a filing cabinet.",
"cite_spans": [
{
"start": 402,
"end": 414,
"text": "Klein, 2020)",
"ref_id": null
},
{
"start": 811,
"end": 830,
"text": "(Bell et al., 2021)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To be useful for such downstream research, the consensus in legal domain NLP is that information extraction should produce labels that achieve an F1 of at least 0.80 (Hendrycks et al., 2021; . Our corpus, a set of historical California parole hearings, is a particularly difficult application, but also representative of many challenges in criminal law: (1) Parole hearings are longer than documents in existing benchmarks. (2) Existing benchmarks source from written text; parole documents are loosely-structured dialogue. (3) Existing benchmarks contain at least an order of magnitude more labels. (4) Information extraction from formal written documents centers around named entities and relation extraction. By contrast, much of the text in the criminal context serves the purpose of surfacing, discussing, and correcting case factors, which are not necessarily relational. This means parole hearings pose both extractive and abstractive tasks, often across multiple sentences, which is known to be challenging even in more structured settings (Wang et al., 2021) .",
"cite_spans": [
{
"start": 166,
"end": 190,
"text": "(Hendrycks et al., 2021;",
"ref_id": "BIBREF11"
},
{
"start": 1048,
"end": 1067,
"text": "(Wang et al., 2021)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The scarcity of labels and specificity of the domain suggest that subject matter experts (SMEs) can be helpful. On the parole corpus, weak supervision-based data programming approaches (Ratner et al., 2016; Zheng et al., 2019) achieve F1 scores of only 0.41-0.63 . We propose an alternative way to involve SMEs, in which we split the problem into two components: a Reducer model which extracts relevant text segments from a hearing, and a Producer model which generates answers from the text segments selected by the Reducer. Our methods effectively achieve extraction at 0.83-0.89 F1.",
"cite_spans": [
{
"start": 185,
"end": 206,
"text": "(Ratner et al., 2016;",
"ref_id": "BIBREF27"
},
{
"start": 207,
"end": 226,
"text": "Zheng et al., 2019)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We show that using an approach with a rulebased Reducer and neural Producer outperforms other commonly-used approaches. Focusing SME effort on developing rules for the Reducer is thus more time-efficient than requiring SMEs to provide additional target labels, whether manually or via data programming. With quality text segments, a neural Producer model can be effectively fine-tuned on just one thousand labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A review of data programming literature suggests that semi-supervised techniques might be a good fit for our problem space. Several existing pipelines combine a limited amount of training data, rulebased systems and neural models to achieve strong results on benchmark datasets (Maheshwari et al., 2020) and in various medical fields (Ling et al., 2019; Smit et al., 2020; Dai et al., 2021) . By comparison, weak supervision-based data programming methods tend to focus on bootstrapping in the absence of data (Ratner et al., 2017 (Ratner et al., , 2018 , which is a nontrivial performance constraint.",
"cite_spans": [
{
"start": 278,
"end": 303,
"text": "(Maheshwari et al., 2020)",
"ref_id": null
},
{
"start": 334,
"end": 353,
"text": "(Ling et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 354,
"end": 372,
"text": "Smit et al., 2020;",
"ref_id": "BIBREF31"
},
{
"start": 373,
"end": 390,
"text": "Dai et al., 2021)",
"ref_id": "BIBREF6"
},
{
"start": 510,
"end": 530,
"text": "(Ratner et al., 2017",
"ref_id": "BIBREF26"
},
{
"start": 531,
"end": 553,
"text": "(Ratner et al., , 2018",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Regardless of supervision strength, an architecture based on rule-based systems may be useful for generating \"candidates\" as input to downstream neural models; Zhang et al. (2019) explores the time efficiency of manual labeling compared with rule-writing (via regular expressions) for named entity recognition (NER), where results are compared over a bidirectional LSTM-based classifier, finding that in most circumstances, a combination of rulebased and machine-learning classifiers optimizes human time investment.",
"cite_spans": [
{
"start": 160,
"end": 179,
"text": "Zhang et al. (2019)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We therefore adopt the approach of using a rulebased system for candidate generation. One new challenges with our corpus is that parole hearings generally center around one individual, so the candidates for downstream models are not named entities, but more loosely defined segments of the hear-ing. Compared to NER, there is less prior work exploring rule-based methods for more general retrieval and segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our goal of achieving 0.80 F1 in an abstractive format is currently beyond the capabilities of stateof-the-art (SOTA) neural models on comparable tasks, only one of which is in the legal domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "On Natural Questions (NQ; , SOTA models achieve F1 scores of 0.79 and 0.64 on its long and short answer tasks, respectively. However, NQ is purely extractive and averages only 7,300 words per input. On the Doc2EDAG financial statements dataset (Zheng et al., 2019) , the Graph-based Interaction model with a Tracker (Xu et al., 2021) surpasses 0.80 F1 when extracting events from documents averaging 912 tokens in length, but this SOTA result drops to 0.76 F1 in the longest quartile. On Open-Domain Question Answering, the SOTA Dense Passage Retrieval (Karpukhin et al., 2020) has an extractive top-5 accuracy of just 0.66. For downstream applications, a model must have a robust top-1 accuracy.",
"cite_spans": [
{
"start": 244,
"end": 264,
"text": "(Zheng et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 316,
"end": 333,
"text": "(Xu et al., 2021)",
"ref_id": "BIBREF35"
},
{
"start": 553,
"end": 577,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The closest comparable legal dataset is the Contract Understanding Atticus Dataset (CUAD) (Hendrycks et al., 2021) . Over CUAD, a SOTA model like RoBERTa (Liu et al., 2019 ) achieves a lower, and extractive, question answering performance of 0.80 recall at 0.31 precision, representing an F1 score of only 0.45, with documents still averaging one-quarter the length of parole transcripts.",
"cite_spans": [
{
"start": 90,
"end": 114,
"text": "(Hendrycks et al., 2021)",
"ref_id": "BIBREF11"
},
{
"start": 154,
"end": 171,
"text": "(Liu et al., 2019",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We have obtained a corpus of 35,105 parole hearing transcripts, averaging 18,499 words each from 2007-2019. 1 Each hearing is a dialogue, primarily between one or more commissioners and the parole candidate. Most case factors are embellished with history and context, which is important for the procedure of a parole hearing, but challenging for information extraction. identified eleven fields for information extraction. We study the four fields that the previous study failed to extract with near 0.80 F1: job_offer (whether the parole candidate has a job offer upon release), edu_level (the candidate's educational level), risk_assess (a psychological assessment score), and last_writeup (the date of the candidate's last disciplinary writeup in prison). Figure 1 shows examples of how these four features arise in dialogue. On average, each annotator takes forty minutes to label a transcript. Only 3% of the dataset is labeled: job_offer, edu_level, and risk_assess each have 1,173 training examples and 106 validation examples, whereas last_writeup has 563 and 48, respectively. The corpus also includes 218 transcripts with labeled spans, i.e. the sentences from which the correct label was determined.",
"cite_spans": [],
"ref_spans": [
{
"start": 759,
"end": 767,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We use a Reducer-Producer paradigm ( Figure 2 ) in the spirit of the Document Retriever-Reader model used in open-domain question answering (ODQA; Chen et al., 2017; Das et al., 2019) , with two differences: (1) The Reducer selects one or more relevant passages from within a single document (Clark and Gardner, 2017; Krishna et al., 2021) , and (2) the Producer model is not necessarily a QA model. We use separate Reducers and Producers for each field. Prior applications of data programming to this corpus used SMEs to write noisy labels for training a neural model; it does not significantly reduce the input text into shorter segments and instead relies on an end-to-end neural approach . By contrast, our approach uses SMEs to focus on the smaller task of reducing input text and relies on only gold labels, however few, for training the neural model. One subproblem is designed to be tractable for an SME (the Reducer), and the other for a pretrained language model (the Producer).",
"cite_spans": [
{
"start": 147,
"end": 165,
"text": "Chen et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 166,
"end": 183,
"text": "Das et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 292,
"end": 317,
"text": "(Clark and Gardner, 2017;",
"ref_id": "BIBREF4"
},
{
"start": 318,
"end": 339,
"text": "Krishna et al., 2021)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 37,
"end": 45,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
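{
"text": "To make the division of labor concrete, the following is a minimal sketch (not the authors' released code) of how a rule-based Reducer and a neural Producer compose into a single extraction function; the function names are illustrative assumptions:\n\ndef extract_field(transcript, reducer, producer):\n    # Step 1: the rule-based Reducer selects the relevant passage(s).\n    passage = reducer(transcript)\n    if not passage:\n        # No rule matched; downstream analysis records a missing value.\n        return None\n    # Step 2: the fine-tuned neural Producer turns the passage into a label.\n    return producer(passage)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},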
{
"text": "The SME (1) encodes keywords and patterns into programmatic rules (Zhang et al., 2019) , and (2) evaluates the rules against silver-standard metrics. The SME examines any errors and repeats the process until the development subset is covered to >95% recall on silver metrics.",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reducer",
"sec_num": "4.1"
},
{
"text": "Rules. The SME uses keywords to generate candidate segments and candidate substrings (e.g., for risk assessments, \"low\" is interesting, but only if it occurs in the proximity of \"risk\"), sequenced in order of increasing breadth and decreasing precision (Zhang et al., 2019) . The framework provides highlevel functions that enable SMEs to easily operate on pipelines of candidate segments, filtering in or out, splitting, deoverlapping, and limiting results to create a high-quality reduced output passage.",
"cite_spans": [
{
"start": 253,
"end": 273,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reducer",
"sec_num": "4.1"
},
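{
"text": "As a concrete illustration of the keyword-and-proximity style of rule described above, the following is a small sketch in plain Python; the field, window size, and regular expressions are illustrative assumptions rather than the actual rules used:\n\nimport re\n\n# Illustrative rules for a risk_assess-style field: a value keyword such as\n# 'low' is only kept when it occurs near an anchor keyword such as 'risk'.\nANCHOR = re.compile('risk', re.IGNORECASE)\nVALUE = re.compile('low|moderate|high', re.IGNORECASE)\n\ndef reduce_transcript(turns, window=40, max_chars=1500):\n    # Keep turns where a value keyword appears within `window` characters of\n    # an anchor keyword, then join and trim to a maximum passage length.\n    keep = []\n    for turn in turns:\n        for m in VALUE.finditer(turn):\n            start = max(0, m.start() - window)\n            end = m.end() + window\n            if ANCHOR.search(turn[start:end]):\n                keep.append(turn)\n                break\n    return ' ... '.join(keep)[:max_chars]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducer",
"sec_num": "4.1"
},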
{
"text": "Evaluation. We reserve the 218 transcripts with labeled spans to serve as a held-out evaluation set. For intermediate SME evaluation and iterations, we use three silver-standard evaluations as a proxy for true Reducer performance: (a) the percentage of results with empty outputs, (b) whether true labels (and common synonyms) appear within reduced passages, and (c) performing interim Producer fine-tuning runs, and evaluating end-to-end performance across a set of hyperparameter sweeps. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducer",
"sec_num": "4.1"
},
{
"text": "We write several simple rule-based Producers to build an understanding of the problem space, and then fine-tune pretrained language models on the passages returned by the Reducer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Producer",
"sec_num": "4.2"
},
{
"text": "Choice of language model. To ensure high training efficacy, we identify the smallest language model that meets the required benchmark in the general case. We evaluate a range of models' capabilities on a small task: For each of the four fields, we identify ten transcripts with particularly challenging dialogue (see Appendix C for examples). We manually extract passages from each transcript and benchmark each language model on its average zero-shot classification accuracy on all 40 passages, across 25 random seeds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Producer",
"sec_num": "4.2"
},
{
"text": "Choice of prediction heads. Fields with a small, fixed set of values are a good fit for a classification head (CLS), such as edu_level which is grouped into four categories, and risk_assess, for which a psychologist ascribes one of five possible risk levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Producer",
"sec_num": "4.2"
},
{
"text": "Fields with an open-ended set of values may be more suited to the masked language model (MLM) (Hermann et al., 2015; Hill et al., 2016; Chen, 2018; or question answering (QA) approach, e.g., last_writeup can be any year from 1960-2020. The MLM and QA heads require a user-defined prompt, which are not always natural for all fields. For example, for job_offer, we prompt MLM with token choices, e.g. \"Commissioner: As to whether you have a job offer lined up: You have [one / none].\"). For last_writeup, where the correct year exists within each passage, we try various prompts, such as \"Your last writeup was in [MASK]\". We chose prompts with good fine-tuning performance on training data, e.g. for last_writeup, we use the prompt \"Ignoring chronos and 128s, your most recent 115, RVR (rule violation report) occurred in: [MASK]\"). QA requires a question formulation, for some fields, we augment QA heads with a prefix sentence containing tokens representing all of the current field's possible classes, a technique used in QA benchmarks such as CoQA (Reddy et al., 2019) and BoolQ (Clark et al., 2019) , which enables extractive models to always return values from desired classes.",
"cite_spans": [
{
"start": 94,
"end": 116,
"text": "(Hermann et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 117,
"end": 135,
"text": "Hill et al., 2016;",
"ref_id": "BIBREF13"
},
{
"start": 136,
"end": 147,
"text": "Chen, 2018;",
"ref_id": "BIBREF2"
},
{
"start": 1052,
"end": 1072,
"text": "(Reddy et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 1083,
"end": 1103,
"text": "(Clark et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Producer",
"sec_num": "4.2"
},
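{
"text": "As a concrete illustration of the MLM prompting described above, the following hedged sketch uses the HuggingFace fill-mask pipeline with a restricted set of target tokens; the model name, passage, and exact prompt wording are assumptions for illustration, not the configuration used in our experiments:\n\nfrom transformers import pipeline\n\n# Restrict the masked-token fill to the two job_offer token choices.\nfill = pipeline('fill-mask', model='roberta-base')\nmask = fill.tokenizer.mask_token\npassage = 'INMATE: My cousin said I can start at his shop the day I get out.'\nprompt = f'Commissioner: As to whether you have a job offer lined up: You have {mask}.'\nfor result in fill(passage + ' ' + prompt, targets=['one', 'none']):\n    print(result['token_str'], result['score'])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Producer",
"sec_num": "4.2"
},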
{
"text": "We tried using a multiple choice reading comprehension (MRC) head (Richardson et al., 2013; Lai et al., 2017; Chen, 2018) , which proved to be an an elegant way of grounding the model, with similarities to contrastive learning, and able to generate dynamic classification options, e.g. unlike year classification, MRC choices are only the year that appear in the passage. However, MRC requires a full backpropagation across the entire model for each option of every question, which is memoryintensive for passages where over a dozen options might exist per question, and unnecessarily slow even with gradient accumulation. We do not include MRC in our results.",
"cite_spans": [
{
"start": 66,
"end": 91,
"text": "(Richardson et al., 2013;",
"ref_id": "BIBREF30"
},
{
"start": 92,
"end": 109,
"text": "Lai et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 110,
"end": 121,
"text": "Chen, 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Producer",
"sec_num": "4.2"
},
{
"text": "Training details. We use base models from the HuggingFace Transformers library (Wolf et al., 2020) , applying standard hyperparameter ranges (Sun et al., 2019) and techniques for training BERTbased models, such as the use of a slanted triangular learning rate. However, we set batch size to 1 and use gradient accumulation to simulate a larger batch size, in order to allow Reducer outputs to be as large as possible (approximately 1,500 tokens for RoBERTA + BigBird Base on a 16GB GPU) without affecting training performance. We ran hyperparameter sweeps for approximately six hours per field on a NVIDIA Tesla V100 GPU. ",
"cite_spans": [
{
"start": 79,
"end": 98,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF34"
},
{
"start": 141,
"end": 159,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Producer",
"sec_num": "4.2"
},
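{
"text": "A hedged sketch of the fine-tuning configuration described above (batch size 1 with gradient accumulation to simulate a larger batch, and a warmup-then-decay learning-rate schedule); the specific values below are illustrative defaults, not the swept hyperparameters:\n\nfrom transformers import TrainingArguments\n\n# Batch size 1 plus gradient accumulation keeps long Reducer outputs within\n# GPU memory while still training with a reasonable effective batch size.\nargs = TrainingArguments(\n    output_dir='producer-checkpoints',\n    per_device_train_batch_size=1,\n    gradient_accumulation_steps=16,  # effective batch size of 16\n    learning_rate=2e-5,\n    warmup_ratio=0.1,  # linear warmup, then decay\n    num_train_epochs=3,\n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Producer",
"sec_num": "4.2"
},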
{
"text": "Our methods achieve the 0.80 F1 benchmark 23 for all four fields, as shown in Table 1 . One rule-based Producer achieved an F1 of 0.83 for risk_assess, which narrowly outperformed RoBERTa + BigBird model performance of 0.81 F1. However, all other rule-based Producer attempts fell near or below the \"Previous F1\" mark on their tasks. The risk_assess task lends itself to rule-writing, because its values are restricted to combinations of \"low,\" \"moderate,\" and \"high\", and there are a few phrasings that are commonly used (e.g., \"Overall, your risk was low to moderate\"). By comparison, neural models may have been confused by the multiple other types of psychological assessments that occur in the text (e.g., PCL-R, HCR-20, LS/CMI), which are all assessed on the same \"low,\" \"moderate,\" and \"high\" scale. Table 2 shows the Reducer's performance on three different measures of recall on the 218 labeled spans. We focus on recall because a Producer can still perform well on a short input even if there are occasional spurious phrases. Also, correct answers are not necessarily unique; labeled spans often point to a single sentence, whereas a fact may 2 F1 scores are calculated on exact match for all prediction heads, instead of the relatively easier bag-of-words metric used in the extractive setting, or precision at 0.80 recall (Hendrycks et al., 2021) . This is a more accurate measurement of abstractive performance, which is essential to downstream results.",
"cite_spans": [
{
"start": 1334,
"end": 1358,
"text": "(Hendrycks et al., 2021)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 78,
"end": 85,
"text": "Table 1",
"ref_id": "TABREF3"
},
{
"start": 807,
"end": 814,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "3 Related existing work reports F1, but F1 is an imperfect proxy for the impact of errors for downstream analyses. Any application that seeks to use extracted data should perform its own analysis to understand the relative costs of, for example, false positives versus false negatives for a given field. be repeated multiple times during the course of a hearing. The Reducer may select a correct span, but not the exact sentence selected by the annotator. Recall sidesteps the former issue and slightly mitigates the incorrect penalty imposed by the latter, as similar words may be used in both spans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standalone Reducer Performance",
"sec_num": "5.1"
},
{
"text": "The Rouge-L recall ranges from 0.85-0.92: the Reducer frequently finds the exact set of sentences annotated by a human labeler. The Rouge-2 recall is lower, from 0.72-0.82: when the Reducer fails to find the exact sentences, the phrasing of its result is different. However, the bag-of-words recall is still high: 0.88-0.95, which means that the Reducer tends to finds sentences that use almost the same words, if not in the exact same order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Given the span labeling issue described above, Table 2 is almost certainly an underestimate of Reducer performance. This is supported by other assessments of Reducer performance: end-to-end F1 scores of 0.83-0.89 are effectively a guarantee on the lower bound of Reducer performance, and based on the error analysis in Section 5.4, only a small fraction of errors were due to the Reducer. This implies significantly higher true recall scores. This is also in line with our silver-standard Reducer evaluations, which are consistently above 0.95.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Benchmark performance for each language model is provided in Table 3 . Figure 3 plots model performance against size and shows power-law scaling characteristics, a known feature of neural language models (Kaplan et al., 2020) .",
"cite_spans": [
{
"start": 204,
"end": 225,
"text": "(Kaplan et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 71,
"end": 79,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Language Model Benchmarks",
"sec_num": "5.2"
},
{
"text": "Given the relatively small range in performance between the models in our evaluation set (7.5% across all model families and variants), we also run some supplementary tests, finding that (a) models pretrained on question answering datasets performed 10-15% better in this setting, but a comprehensive evaluation was not feasible as QA outputs are extractive and require manual assessment, and (b) large GPT models performed dramatically better in the few-shot setting, with GPT3 performing at 90-100% accuracy on some problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Benchmarks",
"sec_num": "5.2"
},
{
"text": "We ultimately use RoBERTa + BigBird Base (RoB + BB; Zaheer et al., 2020) as our default model due to its balance of long input length, low computation requirements, and performance. This model supports inputs of up to 4,096 tokens, allowing the Reducer to provide multiple candidate passages without having to split input into multiple model calls and integrate a la Clark and Gardner (2017) . It is in the smallest size class of the models tested, facilitating the fine-tuning of large input passages within GPU memory limits. Within its size class, RoB + BB is the second-best performer, performing within 2-3% of models 2-3x its size. Compared to the top performer (GPT2), BERT is known to have better versatility on downstream tasks (Klein and Nabi, 2019) and well-explored fine-tuning characteristics (Sun et al., 2019) .",
"cite_spans": [
{
"start": 367,
"end": 391,
"text": "Clark and Gardner (2017)",
"ref_id": "BIBREF4"
},
{
"start": 737,
"end": 759,
"text": "(Klein and Nabi, 2019)",
"ref_id": "BIBREF17"
},
{
"start": 806,
"end": 824,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Benchmarks",
"sec_num": "5.2"
},
{
"text": "We evaluate the gains from prediction head choice by performing 25 fine-tuning runs for each combination of field and head and reporting the highest validation F1 score achieved for each. To ensure test fairness within a reasonable amount of computation, each run uses a random configuration from Table 4 . F1 scores are recorded at the point where validation loss is at a minimum. Table 5 shows the performance of each prediction head on each field. edu_level and job_offer performed comparably to the main runs in Table 1 . last_writeup performed best under a question answering head during this exercise, but underperformed the masked language model F1 score of 0.84 in Table 1 , leaving this result ambiguous. Selecting a suitable prediction head dramatically affects model performance after fine-tuning: suboptimal head choices result in F1 scores of 52-93% of the scores achieved with the best prediction head.",
"cite_spans": [],
"ref_spans": [
{
"start": 297,
"end": 304,
"text": "Table 4",
"ref_id": "TABREF8"
},
{
"start": 382,
"end": 389,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 516,
"end": 523,
"text": "Table 1",
"ref_id": "TABREF3"
},
{
"start": 673,
"end": 680,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Prediction Heads",
"sec_num": "5.3"
},
{
"text": "The CLS prediction head performs well across all fields except last_writeup, where only 20% of all runs score above 0.25, and most score below 0.10. Classification is not a natural format for this field: in order to classify a passage, the model must learn 50 separate classes, one for each possible year from 1969-2019. CLS performs well when the number of classes is relatively low, especially when the answer is abstractive. However, it tends to fail to understand factual relationships. For example, when used for risk assessment its ratings correlate with the number of times the word \"gang\" or \"murder\" occurs in the passage (see Appendix D for more information).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Heads",
"sec_num": "5.3"
},
{
"text": "The MLM head has nearly the opposite performance characteristics: it performs best on last_writeup, at an average level on job_offer, and very poorly on edu_level. It is telling that last_writeup can be expressed as a sentence with a single masked token (which may hold many values), whereas the classes of the latter are all concepts which do not fit into a single token. The MLM head's F1 scores tend to be several points lower than its accuracy, a symptom of the model occasionally filling the mask with arbitrary freeform values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Heads",
"sec_num": "5.3"
},
{
"text": "The QA heads perform well on job_offer, fairly well on last_writeup, and at an average level on edu_level. The first field is easily expressed as in the form of a yes/no question, and the second field's value is extractable from within the passage as with a regular QA task. However, the third requires the model to parse the passage to locate the answer, classify this into into one of four fixed phrasings, and return this phrasing from the prefix sentence, a task which is somewhat foreign to a question answering-based model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Heads",
"sec_num": "5.3"
},
{
"text": "Errors fall into a few clear classes. Approximately 70% of all errors result from what appears to be the model learning spurious associations with cooccurring words. For example, in one conversational turn, a parole candidate describes both his own and the victim's level of education. The Producer incorrectly returns the victim's level of education, which uses the phrase \"college courses.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.4"
},
{
"text": "Around 10% of errors result from complex passages (comparable to the examples in Appendix C), which continue to challenge language models. Spoken narrative language can be arbitrarily complex, and grounding in real world knowledge and presuppositions remain hard to encode. In one transcript, the commissioner asks, \"Are you working towards a college degree?\" which presupposes that the parole candidate completed high school. However, the model classifies this candidate as not having completed high school or a GED, as the transcript does not explicitly mention either. Some passages require numerical abilities which smaller language models tend to find difficult (Dua et al., 2019) . Table 3 suggests that a larger language model may improve performance in many cases.",
"cite_spans": [
{
"start": 667,
"end": 685,
"text": "(Dua et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 688,
"end": 695,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.4"
},
{
"text": "In the remaining 20% of errors, the Reducer failed to find a match for a given transcript or returned an incorrect passage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.4"
},
{
"text": "Surprisingly, we found that in 15-50% of the total errors returned (varying by field), the model was actually correct, and had identified incorrectlylabeled or ambiguous data. To be conservative, we did not adjust F1 scores upwards and instead excluded the examples from this error analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.4"
},
{
"text": "A detailed breakdown of errors for edu_level is provided in Appendix D for illustrative purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.4"
},
{
"text": "Previous approaches to our problem use rulegenerated labels to supervise a model. We instead split the problem into two, where the Reducer is entirely rule-based, and the Producer trains only on the few, but high quality, human labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Rules and Neural Models",
"sec_num": "6.1"
},
{
"text": "Both rule-generated labels and a rule-based Reducer scale with the number of features to extract, but not the complexity of model or dataset. However, given a fixed development time, we find it more valuable for an SME to focus on only the Reducer. In contrast, end-to-end data programming requires rules for the Producer as well, which can be much more challenging to write. On our data, it takes about ten hours for an SME to write Reducer rules for a model that performs at the exceptional recall rates from Table 2 . report the same number of hours per feature for an endto-end data programming model, which performs much worse overall.",
"cite_spans": [],
"ref_spans": [
{
"start": 511,
"end": 518,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Combining Rules and Neural Models",
"sec_num": "6.1"
},
{
"text": "As future work, we hope to investigate whether a well-designed Reducer can improve human performance in creating gold-standard labels, saving time by reducing the need to read through entire transcripts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Rules and Neural Models",
"sec_num": "6.1"
},
{
"text": "We find that an hand-written rules can effectively isolate key segments of text in the overwhelming majority of situations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of Human-in-the-Loop",
"sec_num": "6.2"
},
{
"text": "The tradeoff of incurring the cost of writing rules per each additional feature proved to be very reasonable for our domain. We have few features, and our requirements demand accuracy over speed. In comparison, prior work suggests that for a neural model to achieve accuracy in the same ballpark, the model would require an order of magnitude more spans, which would be a prohibitive cost. In the general case, when applying our architecture, the per-feature cost of SME time should be considered against (a) the potential per-example savings from reducing labeling requirements, and (b) the performance requirements of the problem space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of Human-in-the-Loop",
"sec_num": "6.2"
},
{
"text": "The Human-in-the-Loop (HITL) approach enables SMEs to exert a positive influence on the quality of both the final model and the dataset. Given a probable baseline label error rate of a few percentage points (Alt et al., 2020; Reiss et al., 2020; Northcutt et al., 2021) , as the Reducer's recall increases towards the 0.9 level, many of the mismatches against silver-standard Reducer evaluations and fine-tuning errors will actually be labeling errors. For example, in a case study where we checked last_writeup Reducer outputs against a silver-standard evaluation, we found that over 80% of \"errors\" were actually errors in human labeling. This also provides opportunities for SMEs to apply domain knowledge to more subtle classes of data issues, such as where Reducer rules surface mislabelings caused by labeler confusion.",
"cite_spans": [
{
"start": 207,
"end": 225,
"text": "(Alt et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 226,
"end": 245,
"text": "Reiss et al., 2020;",
"ref_id": "BIBREF29"
},
{
"start": 246,
"end": 269,
"text": "Northcutt et al., 2021)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of Human-in-the-Loop",
"sec_num": "6.2"
},
{
"text": "As such, a unique advantage to HITL over a neural-only model is improving data quality during the training process. Purely neural models are forced to learn from mislabeled data points, which destabilizes benchmarks and damages model performance. (Northcutt et al., 2021) By comparison, we frequently detect label errors prior to finetuning, and as errors tend to occur in patches (such as under a particular labeler or a particular time period) we can quickly make corrections or exclude large bad patches from the training dataset. This can significantly increase training performance: excluding a patch of bad labels resulted in a 0.2 F1 improvement in one case. Appendix B elaborates on the data quality improvement process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of Human-in-the-Loop",
"sec_num": "6.2"
},
{
"text": "The Reducer-Producer architecture is useful for enabling iterative, componentwise development. Components may be improved in isolation as requirements arise, such as improving Reducer coverage or upgrading Producer language models, heads or prompts, and sometimes may be entirely replaced without any impact to their counterparts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modular Architecture",
"sec_num": "6.3"
},
{
"text": "In particular, we hope to leave the door open for a general neural Reducer and Producer, allowing downstream users to perform open-ended querying and exploration of the dataset. This architecture enables future work to continue to use our Producer models, which are already trained. The information bottleneck between its components allows for rigorous measurement of the quality of Reducer output, which enables each component to be trained separately. Additionally, using present models to generate silver-standard data labels may alleviate issues of label scarcity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modular Architecture",
"sec_num": "6.3"
},
{
"text": "Our corpus of parole hearings poses the challenge of information extraction with few gold labels: one thousand labels is not enough to locate and identify the answer in a long document. Parole, like many other applications, requires domain-specific knowledge, which raises the question of how best to incorporate the labor of subject matter experts to assist neural models in making optimal use of available labels, in order to achieve high performance on extraction tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We identified two problems with existing work on the parole dataset, which fell short of the 0.80 F1 on many tasks: (1) Text segments remained too long for many SOTA neural models to digest, and contained many spurious signals. (2) Question answering was a useful first approach to handle a wide range of different feature types. However, out-of-the-box, it was rarely the best way to handle each individual feature type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We present an approach that uses an SMEdesigned rule-based Reducer to identify relevant text segments, and a neural Producer to generate labels using those text segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We argue that it is time-efficient and performant for human SMEs to write mostly keyword-based rules for finding relevant parts of a parole transcript. In a parole transcript, a field of interest might be discussed in practically infinite different ways, but is usually somewhat well-defined by a limited set of words and patterns that are almost always used (for example, \"GED\", \"college courses\", \"did not graduate\" for a parole candidate's level of education). These keywords are relatively easy for a human to identify and write combinations of regular expressions to identify. However, training a neural model to recognize the phrases over the course of 20,000-word documents requires at least an order of magnitude more labels than are available (Hendrycks et al., 2021) . Therefore, we focus SME energy on the Reducer, and only the Reducer. For the Producer model, the role of human and machine are reversed. When the text is shortened to a sufficiently succinct context, neural models can be successfully fine-tuned to extract labels at an F1 of 0.80. It is practically impossible for a human to write rules to interpret every possible phrasing of, for example, someone's educational journey. However, pretrained language models excel at producing labels from small, targeted pieces of text. The 1,000 available labels are sufficient for good performance on this task (Zhang et al., 2020) . We use a base model that can handle relatively long tokens. We also explore a range of different fine-tuning heads.",
"cite_spans": [
{
"start": 752,
"end": 776,
"text": "(Hendrycks et al., 2021)",
"ref_id": "BIBREF11"
},
{
"start": 1376,
"end": 1396,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our architecture shows the effectiveness of a modular, two-step approach, where not every module needs to be a neural or machine learning model. Such efforts to involve subject matter experts are especially important in applications that require substantial domain expertise. We hope that this work encourages additional research to better understand other legal processes whose workings are yet opaque to the public.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "SMEs write Reducers for each field by composing pipelines of high-level operations, as described in Table 6 . Operations run on an input transcript or a list of text segments, and emit matches which are compiled into a final output passage.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "A Reducer Operations and Rules",
"sec_num": null
},
{
"text": "Extracts a list of segments from a raw transcript which match one or more regular expressions (regexes).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extract Segments",
"sec_num": null
},
{
"text": "Input Transcript text with any preprocessing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extract Segments",
"sec_num": null
},
{
"text": "Accepts a list of regexes and searches the transcript separately for each item, returning matches in the same left-toright order they are found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regex",
"sec_num": null
},
{
"text": "Length of segment returned around each match. Filter & Split Filters a list of segments against two lists of regexes, to return two lists of matching and non-matching segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limit",
"sec_num": null
},
{
"text": "Regex Accepts a \"filter in\" regex list which segments must match, and a \"filter out\" regex list which segments must not match.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limit",
"sec_num": null
},
{
"text": "Saves segments from a given list to a specified list for future compilation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emit Matches",
"sec_num": null
},
{
"text": "Limit Length of segment to store around each match, and maximum segments to store. Deduplicate Ensures a list of segments is free of duplicate or overlapping text ranges. Merges segments with partial overlaps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emit Matches",
"sec_num": null
},
{
"text": "Merges a list of segments into a single text passage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compile Passage",
"sec_num": null
},
{
"text": "Separator String inserted between each segment. Limit Trims passage to a maximum length. To illustrate these operations in use, the pipeline for job_offer is provided in ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compile Passage",
"sec_num": null
},
{
"text": "Mismatches on silver-standard Reducer evaluations were often a product of real label errors: the datasets examined in Northcutt et al. (2021) had a 3.4% error rate on average, which is a similar order of magnitude to label errors encountered in our dataset when performing detailed manual verification. The parole dataset includes records that span over more than a decade, and labeling has occurred in several waves over the years. As such, the semantic meanings of labels includes subtle shifts and inconsistencies. For example, a blank label might mean any one of the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Improving Data Quality using Silver-Standard Evaluations",
"sec_num": null
},
{
"text": "\u2022 the annotator was uncertain, \u2022 the transcript is unclear, \u2022 the transcript is clear but the situation itself is ambiguous, \u2022 \"none\" is a reasonable answer in this situation (such as last_writeup for a candidate with zero writeups), \u2022 the feature was not applicable in this situation (such as job_offer for a candidate who is not working age); or \u2022 the feature was simply not fully annotated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Improving Data Quality using Silver-Standard Evaluations",
"sec_num": null
},
{
"text": "To address these issues, we: (a) write code to correct issues where this is possible, (b) drop entire sections of low-quality train labels where patterns of errors exist, (c) hand-correct validation labels and keep track of all manual corrections, and (d) write small data transforms to simplify the job of the Producer (e.g., fixing common spelling and transcription errors). Table 8 provides examples of the complex, challenging passages selected to benchmark language models in section 5.2, trimmed for brevity and redacted as per the conventions described within Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 377,
"end": 384,
"text": "Table 8",
"ref_id": "TABREF15"
},
{
"start": 567,
"end": 575,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "B Improving Data Quality using Silver-Standard Evaluations",
"sec_num": null
},
{
"text": "This section provides a detailed breakdown of the error analysis for a single field and data split (edu_level, Validation), in order to illustrate typical patterns of errors encountered in our finetuned models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Supplemental Error Analysis: edu_level",
"sec_num": null
},
{
"text": "This field was fine-tuned with a classification (CLS) prediction head, and correctly classified 89/106 of its labeled examples. Its 17 incorrectlyclassified examples are examined in Table 9 Yes. No, no, no, no, no, no 1996, two in 1997, four in 1998, two in 1999, three in 2001, 2002, 2004, 2005 , there was a pair. And then 2008, disobeying a direct order was your final 115. What was the 2005, knowingly providing a false claim? ",
"cite_spans": [
{
"start": 190,
"end": 217,
"text": "Yes. No, no, no, no, no, no",
"ref_id": null
},
{
"start": 218,
"end": 295,
"text": "1996, two in 1997, four in 1998, two in 1999, three in 2001, 2002, 2004, 2005",
"ref_id": null
}
],
"ref_spans": [
{
"start": 182,
"end": 189,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "D Supplemental Error Analysis: edu_level",
"sec_num": null
},
{
"text": "Transcripts may be requested from the California Department of Corrections and Rehabilitation under the California Public Records Act.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Description is challenging to interpret: \"I started ditching school and hanging out when I was in high school. I think part of the reason for that was because we never had anything at home, everything was always, seemed like we're always struggling for everything, you know. Our electric bill, I didn't want to keep living like that, so I left, I left when I was 13 years old.\" Table 9 : Example-level error assessments: edu_level.",
"cite_spans": [],
"ref_spans": [
{
"start": 378,
"end": 385,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Tacred revisited: A thorough evaluation of the tacred relation extraction task",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Alt",
"suffix": ""
},
{
"first": "Aleksandra",
"middle": [],
"last": "Gabryszak",
"suffix": ""
},
{
"first": "Leonhard",
"middle": [],
"last": "Hennig",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.14855"
]
},
"num": null,
"urls": [],
"raw_text": "Christoph Alt, Aleksandra Gabryszak, and Leonhard Hennig. 2020. Tacred revisited: A thorough eval- uation of the tacred relation extraction task. arXiv preprint arXiv:2004.14855.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Recon Approach: A new direction for machine learning in criminal law",
"authors": [
{
"first": "Kristen",
"middle": [],
"last": "Bell",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Catalin",
"middle": [],
"last": "Voss",
"suffix": ""
}
],
"year": 2021,
"venue": "Berkeley Technology Law Journal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristen Bell, Jenny Hong, Nick McKeown, and Catalin Voss. 2021. The Recon Approach: A new direction for machine learning in criminal law. Berkeley Tech- nology Law Journal, 37.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural Reading Comprehension and Beyond",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen. 2018. Neural Reading Comprehension and Beyond. Ph.D. thesis, Stanford University.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Reading Wikipedia to answer opendomain questions",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1870--1879",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1171"
]
},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870- 1879, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Simple and effective multi-paragraph reading comprehension",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.10723"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher Clark and Matt Gardner. 2017. Simple and effective multi-paragraph reading comprehen- sion. arXiv:1710.10723.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BoolQ: Exploring the surprising difficulty of natural yes/no questions",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2924--2936",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1300"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924-2936, Min- neapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bonebert: A bert-based automated information extraction system of radiology reports for bone fracture detection and diagnosis",
"authors": [
{
"first": "Zhihao",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zhong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lianghao",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "263--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhihao Dai, Zhong Li, and Lianghao Han. 2021. Bonebert: A bert-based automated information ex- traction system of radiology reports for bone frac- ture detection and diagnosis. In IDA, pages 263- 274.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multi-step retrieverreader interaction for scalable open-domain question answering",
"authors": [
{
"first": "Rajarshi",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Shehzaad",
"middle": [],
"last": "Dhuliawala",
"suffix": ""
},
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.05733"
]
},
"num": null,
"urls": [],
"raw_text": "Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019. Multi-step retriever- reader interaction for scalable open-domain question answering. arXiv preprint arXiv:1905.05733.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "2020. Data feminism",
"authors": [
{
"first": "D",
"middle": [],
"last": "Catherine",
"suffix": ""
},
{
"first": "Lauren",
"middle": [
"F"
],
"last": "Klein",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catherine D'ignazio and Lauren F Klein. 2020. Data feminism. MIT press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs",
"authors": [
{
"first": "Dheeru",
"middle": [],
"last": "Dua",
"suffix": ""
},
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.00161"
]
},
"num": null,
"urls": [],
"raw_text": "Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Drop: A reading comprehension benchmark re- quiring discrete reasoning over paragraphs. arXiv preprint arXiv:1903.00161.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "CUAD: An expert-annotated NLP dataset for legal contract review",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Burns",
"suffix": ""
},
{
"first": "Anya",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Spencer",
"middle": [],
"last": "Ball",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.06268"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks, Collin Burns, Anya Chen, and Spencer Ball. 2021. CUAD: An expert-annotated NLP dataset for legal contract review. arXiv preprint arXiv:2103.06268.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Kocisky",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NIPS.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The goldilocks principle: Reading children's books with explicit memory representations",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2016,
"venue": "4th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle: Reading children's books with explicit memory representa- tions. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Challenges for information extraction from dialogue in criminal law",
"authors": [
{
"first": "Jenny",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Catalin",
"middle": [],
"last": "Voss",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 1st Workshop on NLP for Positive Impact",
"volume": "",
"issue": "",
"pages": "71--81",
"other_ids": {
"DOI": [
"10.18653/v1/2021.nlp4posimpact-1.8"
]
},
"num": null,
"urls": [],
"raw_text": "Jenny Hong, Catalin Voss, and Christopher Manning. 2021. Challenges for information extraction from dialogue in criminal law. In Proceedings of the 1st Workshop on NLP for Positive Impact, pages 71-81, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "and Dario Amodei. 2020. Scaling laws for neural language models",
"authors": [
{
"first": "Jared",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Mccandlish",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Henighan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Tom",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Chess",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "Alec",
"middle": [],
"last": "Gray",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.08361"
]
},
"num": null,
"urls": [],
"raw_text": "Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Dense passage retrieval for open-domain question answering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6769--6781",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.550"
]
},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 6769- 6781, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning to answer by learning to ask",
"authors": [
{
"first": "Tassilo",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Moin",
"middle": [],
"last": "Nabi",
"suffix": ""
}
],
"year": 2019,
"venue": "Getting the best of gpt-2 and bert worlds",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02365"
]
},
"num": null,
"urls": [],
"raw_text": "Tassilo Klein and Moin Nabi. 2019. Learning to an- swer by learning to ask: Getting the best of gpt-2 and bert worlds. arXiv:1911.02365.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Hurdles to progress in long-form question answering",
"authors": [
{
"first": "Kalpesh",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Aurko",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.06332"
]
},
"num": null,
"urls": [],
"raw_text": "Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. 2021. Hurdles to progress in long-form question answer- ing. arXiv:2103.06332.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Natural questions: a benchmark for question answering research",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "453--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a bench- mark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "RACE: Large-scale ReAding comprehension dataset from examinations",
"authors": [
{
"first": "Guokun",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1082"
]
},
"num": null,
"urls": [],
"raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAd- ing comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785-794, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A semi-supervised machine learning approach to detecting recurrent metastatic breast cancer cases using linked cancer registry and electronic medical record data",
"authors": [
{
"first": "Albee",
"middle": [
"Y"
],
"last": "Ling",
"suffix": ""
},
{
"first": "Allison",
"middle": [
"W"
],
"last": "Kurian",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [
"L"
],
"last": "Caswell-Jin",
"suffix": ""
},
{
"first": "George",
"middle": [
"W"
],
"last": "Sledge",
"suffix": "Jr"
},
{
"first": "Nigam",
"middle": [
"H"
],
"last": "Shah",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [
"R"
],
"last": "Tamang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.05958"
]
},
"num": null,
"urls": [],
"raw_text": "Albee Y Ling, Allison W Kurian, Jennifer L Caswell- Jin, George W Sledge Jr, Nigam H Shah, and Suzanne R Tamang. 2019. A semi-supervised machine learning approach to detecting recurrent metastatic breast cancer cases using linked cancer registry and electronic medical record data. arXiv preprint arXiv:1901.05958.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Rishabh Iyer, and Ganesh Ramakrishnan. 2020. Data programming using semisupervision and subset selection",
"authors": [
{
"first": "Ayush",
"middle": [],
"last": "Maheshwari",
"suffix": ""
},
{
"first": "Oishik",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Krishnateja",
"middle": [],
"last": "Killamsetty",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.09887"
]
},
"num": null,
"urls": [],
"raw_text": "Ayush Maheshwari, Oishik Chatterjee, KrishnaTeja Killamsetty, Rishabh Iyer, and Ganesh Ramakr- ishnan. 2020. Data programming using semi- supervision and subset selection. arXiv preprint arXiv:2008.09887.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Pervasive label errors in test sets destabilize machine learning benchmarks",
"authors": [
{
"first": "Curtis",
"middle": [
"G"
],
"last": "Northcutt",
"suffix": ""
},
{
"first": "Anish",
"middle": [],
"last": "Athalye",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Mueller",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.14749"
]
},
"num": null,
"urls": [],
"raw_text": "Curtis G Northcutt, Anish Athalye, and Jonas Mueller. 2021. Pervasive label errors in test sets destabi- lize machine learning benchmarks. arXiv preprint arXiv:2103.14749.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Snorkel metal: Weak supervision for multi-task learning",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Braden",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Dunnmon",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Goldman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Second Workshop on Data Management for End-To-End Machine Learning",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Ratner, Braden Hancock, Jared Dunnmon, Roger Goldman, and Christopher R\u00e9. 2018. Snorkel metal: Weak supervision for multi-task learning. In Pro- ceedings of the Second Workshop on Data Manage- ment for End-To-End Machine Learning, pages 1-4.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Snorkel: Rapid training data creation with weak supervision",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"H"
],
"last": "Bach",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Ehrenberg",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Fries",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the VLDB Endowment. International Conference on Very Large Data Bases",
"volume": "11",
"issue": "",
"pages": "269--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Stephen H Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher R\u00e9. 2017. Snorkel: Rapid training data creation with weak su- pervision. Proceedings of the VLDB Endowment. In- ternational Conference on Very Large Data Bases, 11(3):269-282.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Data programming: Creating large training sets, quickly. Advances in Neural Information Processing Systems",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"De"
],
"last": "Sa",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Selsam",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "29",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Christopher De Sa, Sen Wu, Daniel Selsam, and Christopher R\u00e9. 2016. Data program- ming: Creating large training sets, quickly. Ad- vances in Neural Information Processing Systems, 29:3567.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "CoQA: A conversational question answering challenge",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "249--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Com- putational Linguistics, 7:249-266.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Identifying incorrect labels in the conll-2003 corpus",
"authors": [
{
"first": "Frederick",
"middle": [],
"last": "Reiss",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Cutler",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Muthuraman",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Eichenberger",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 24th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "215--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederick Reiss, Hong Xu, Bryan Cutler, Karthik Muthuraman, and Zachary Eichenberger. 2020. Identifying incorrect labels in the conll-2003 corpus. In Proceedings of the 24th Conference on Computa- tional Natural Language Learning, pages 215-226.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "MCTest: A challenge dataset for the open-domain machine comprehension of text",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"J",
"C"
],
"last": "Burges",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Renshaw",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "193--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empiri- cal Methods in Natural Language Processing, pages 193-203, Seattle, Washington, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Chexbert: combining automatic labelers and expert annotations for accurate radiology report labeling using bert",
"authors": [
{
"first": "Akshay",
"middle": [],
"last": "Smit",
"suffix": ""
},
{
"first": "Saahil",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Anuj",
"middle": [],
"last": "Pareek",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"P"
],
"last": "Lungren",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.09167"
]
},
"num": null,
"urls": [],
"raw_text": "Akshay Smit, Saahil Jain, Pranav Rajpurkar, Anuj Pa- reek, Andrew Y Ng, and Matthew P Lungren. 2020. Chexbert: combining automatic labelers and expert annotations for accurate radiology report labeling us- ing bert. arXiv preprint arXiv:2004.09167.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "How to fine-tune bert for text classification?",
"authors": [
{
"first": "Chi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yige",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "China National Conference on Chinese Computational Linguistics",
"volume": "",
"issue": "",
"pages": "194--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? In China National Conference on Chinese Computa- tional Linguistics, pages 194-206. Springer.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Tdjee: A document-level joint model for financial event extraction",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhenkai",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Ruilong",
"middle": [],
"last": "Cui",
"suffix": ""
}
],
"year": 2021,
"venue": "Electronics",
"volume": "10",
"issue": "7",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Wang, Zhenkai Deng, and Ruilong Cui. 2021. Tdjee: A document-level joint model for financial event extraction. Electronics, 10(7):824.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Document-level event extraction via heterogeneous graph-based interaction model with a tracker",
"authors": [
{
"first": "Runxin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Tianyu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "3533--3546",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.274"
]
},
"num": null,
"urls": [],
"raw_text": "Runxin Xu, Tianyu Liu, Lei Li, and Baobao Chang. 2021. Document-level event extraction via heteroge- neous graph-based interaction model with a tracker. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 3533-3546, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Big bird: Transformers for longer sequences",
"authors": [
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Guru",
"middle": [],
"last": "Guruganesh",
"suffix": ""
},
{
"first": "Kumar",
"middle": [
"Avinava"
],
"last": "Dubey",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Ainslie",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Santiago",
"middle": [],
"last": "Ontanon",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Ravula",
"suffix": ""
},
{
"first": "Qifan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Amr",
"middle": [],
"last": "Ahmed",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "17283--17297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago On- tanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big bird: Trans- formers for longer sequences. In Advances in Neural Information Processing Systems, volume 33, pages 17283-17297. Curran Associates, Inc.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization",
"authors": [
{
"first": "Jingqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "11328--11339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In In- ternational Conference on Machine Learning, pages 11328-11339. PMLR.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "How to invest my time: Lessons from human-in-the-loop entity extraction",
"authors": [
{
"first": "Shanshan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Lihong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Dragut",
"suffix": ""
},
{
"first": "Slobodan",
"middle": [],
"last": "Vucetic",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining",
"volume": "",
"issue": "",
"pages": "2305--2313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shanshan Zhang, Lihong He, Eduard Dragut, and Slo- bodan Vucetic. 2019. How to invest my time: Lessons from human-in-the-loop entity extraction. In Proceedings of the 25th ACM SIGKDD Interna- tional Conference on Knowledge Discovery & Data Mining, pages 2305-2313.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Doc2EDAG: An end-to-end document-level framework for Chinese financial event extraction",
"authors": [
{
"first": "Shun",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Bian",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "337--346",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1032"
]
},
"num": null,
"urls": [],
"raw_text": "Shun Zheng, Wei Cao, Wei Xu, and Jiang Bian. 2019. Doc2EDAG: An end-to-end document-level frame- work for Chinese financial event extraction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 337- 346, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Example passages of the four features we study. The speaker COMM refers to the presiding commissioner, and the speaker CAND refers to the parole candidate."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Performance on benchmark fromTable 5.2 versus model size."
},
"TABREF1": {
"num": null,
"text": "Producer architecture sketch for the last_writeup field. The Reducer is entirely rule-based, with a few high-level operations over various regular expressions. The Producer is entirely neural and builds on a pretrained language model.",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>Reducer Component</td><td/><td/><td/><td>Producer Component</td><td/></tr><tr><td/><td>Extract Segments</td><td>Filter &amp; Split</td><td/><td/><td colspan=\"2\">Prediction Head: QA</td></tr><tr><td/><td>* 115 *</td><td>Contains year no. and \"most recent\"</td><td/><td/><td/><td/></tr><tr><td/><td>matches \u00b1 50 chars</td><td>limit 2 passages</td><td/><td/><td colspan=\"2\">Language Model</td></tr><tr><td>Hearing Transcript</td><td>(More Operations) ... ...</td><td>(More Operations) ... ...</td><td>Deduplicate limit 6,500 chars Merge overlaps</td><td>Passage Extracts</td><td>Tokenizer</td><td>Output Label</td></tr><tr><td/><td>Extract Segments</td><td>Filter &amp; Split</td><td/><td/><td/><td/></tr><tr><td/><td>* disciplinary * or * written up * matches \u00b1 250 chars</td><td>Contains \"recent\" or \"your last\" limit 3 passages</td><td/><td/><td>Framing Prompt</td><td>Custom Question</td></tr><tr><td colspan=\"2\">Figure 2: Reducer-</td><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF3": {
"num": null,
"text": "Overall results. Previous best results are from. RoB + BB = RoBERTa + BigBird.",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\">RL-R R2-R BoW-R</td></tr><tr><td>risk_assess</td><td>0.85</td><td>0.76</td><td>0.88</td></tr><tr><td>last_writeup</td><td>0.87</td><td>0.76</td><td>0.91</td></tr><tr><td>edu_level</td><td>0.92</td><td>0.82</td><td>0.95</td></tr><tr><td>job_offer</td><td>0.87</td><td>0.72</td><td>0.92</td></tr></table>"
},
"TABREF4": {
"num": null,
"text": "",
"html": null,
"type_str": "table",
"content": "<table><tr><td>: Evaluating Reducers on labeled spans: Rouge-</td></tr><tr><td>L and Rouge-2 Recall, Bag-of-Words Recall.</td></tr></table>"
},
"TABREF6": {
"num": null,
"text": "Zero-shot language model performance (average classification accuracy) on a benchmark of complex, challenging passages, over 25 random seeds.",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF8": {
"num": null,
"text": "Hyperparameter sweep configurations for prediction head selection exercise.",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\">CLS MLM QA</td></tr><tr><td colspan=\"2\">last_writeup 0.76</td><td>0.79</td><td>0.82</td></tr><tr><td>edu_level</td><td>0.82</td><td>0.43</td><td>0.70</td></tr><tr><td>job_offer</td><td>0.83</td><td>0.69</td><td>0.89</td></tr></table>"
},
"TABREF9": {
"num": null,
"text": "The effects of different prediction heads on Validation F1 scores (results in italics are not definitive, as MLM outperforms QA on end-to-end evaluation).",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF10": {
"num": null,
"text": "Overview of Reducer operations.",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF11": {
"num": null,
"text": "",
"html": null,
"type_str": "table",
"content": "<table><tr><td>#</td><td colspan=\"2\">Param. Value(s)</td></tr><tr><td colspan=\"3\">01 Extract Segments</td></tr><tr><td/><td>Input</td><td>Transcript (lowercase)</td></tr><tr><td/><td>Regex</td><td>job offer</td></tr><tr><td/><td>Limit</td><td>1,000 chars centered on each match</td></tr><tr><td colspan=\"3\">02 Filter &amp; Split</td></tr><tr><td/><td>Input</td><td>Operation 01</td></tr><tr><td/><td>Regex</td><td>letter</td></tr><tr><td colspan=\"3\">03 Emit Matches</td></tr><tr><td/><td>Input</td><td>Operation 02: Matches only</td></tr><tr><td/><td>Limit</td><td>2 segments</td></tr><tr><td/><td>Effect</td><td>Emits 2x1,000-char segments which</td></tr><tr><td/><td/><td>mention \"job offer\" in proximity to \"let-</td></tr><tr><td/><td/><td>ter\".</td></tr><tr><td colspan=\"3\">04 Emit Matches</td></tr><tr><td/><td>Input</td><td>Operation 02: Non-matches only</td></tr><tr><td/><td>Limit</td><td>2 segments, 500 chars centered on each</td></tr><tr><td/><td/><td>match</td></tr><tr><td/><td>Effect</td><td>Emits 2x500-char segments which men-</td></tr><tr><td/><td/><td>tion \"job offer\" but not \"letter\".</td></tr><tr><td/><td/><td>(Continued overleaf)</td></tr></table>"
},
"TABREF12": {
"num": null,
"text": "Reducer pipeline for job_offer.",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF13": {
"num": null,
"text": ". The four possible values this field may hold are: \u2022 NA: Did not finish high school \u2022 HS: Completed high school or GED \u2022 SC: Some college classes \u2022 GC: Graduated from college With respect to violence risk assessment conclusions [...] the doctor uses a number of measurements. One is the PCL, which is the psychopathy checklist, and states that, \"Overall score placed Mr. [REDACT] in the moderate range of psychopathy. [...]\" Historically, on the HCR checklist, HCR20, the doctor writes, \"[...] he has risk factors that place him in the low moderate risk range for future violence [...] The inmate's overall LS/CMI score indicates that he is in the medium category.\" And then the doctor goes on to discuss the historical domain and concludes, \"[...] the inmate presents a moderate risk for future violence. [...] In the clinical or more current and dynamic domain of risk assessment [...] the inmate presents a moderate risk of future violence. As for the management of future risk domain [...] the inmate presents as a low risk of future violence. Overall then, risk assessment estimates suggests that the inmate poses a low moderate likelihood to become involved in a violent offense if released to the free community.\" edu_level COMM: Okay. So, and at the last hearing, it was discussed and I don't want to get -Well, that's parole plans. We're not going to talk about that right now. But, so you've taken a number of courses. It looks like in 2013, 2014, General Studies. Are you working towards a college degree? CAND: No. We're not able to take a college degree where I'm at. COMM: You say you've taken World War II, Europe Civilization, Ecology. Are these television courses or -CAND: They're videotapes, CDs. job_offer COMM: Do you have any job offers if you were to get a parole date? CAND: Uh, I used to be a mechanic before in, uh, [REDACT], my not in a company, but uh, in uh, a little shop with my friends.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Field</td><td>Passages</td></tr><tr><td>risk_assess</td><td>COMM:</td></tr></table>"
},
"TABREF14": {
"num": null,
"text": ". Not as a plumber. But, uh, I got, uh, as a mechanic I got offer with my cousin. COMM: Okay. Yeah. But he's in the United States, right? CAND: No, he's in [REDACT]. last_writeup COMM: You've had 19 115s, starting in 1996, and most of these have been covered in prior hearings but, sort of running through them, couple in",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF15": {
"num": null,
"text": "Examples of complex, challenging passages.",
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}