{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:46:49.283252Z"
},
"title": "Effectively Leveraging BERT for Legal Document Classification",
"authors": [
{
"first": "Nut",
"middle": [],
"last": "Limsopatham",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation Redmond",
"location": {
"region": "WA"
}
},
"email": "nut.limsopatham@microsoft.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Bidirectional Encoder Representations from Transformers (BERT) has achieved state-ofthe-art performances on several text classification tasks, such as GLUE and sentiment analysis. Recent work in the legal domain started to use BERT on tasks, such as legal judgement prediction and violation prediction. A common practise in using BERT is to fine-tune a pre-trained model on a target task and truncate the input texts to the size of the BERT input (e.g. at most 512 tokens). However, due to the unique characteristics of legal documents, it is not clear how to effectively adapt BERT in the legal domain. In this work, we investigate how to deal with long documents, and how is the importance of pre-training on documents from the same domain as the target task. We conduct experiments on the two recent datasets: ECHR Violation Dataset and the Overruling Task Dataset, which are multi-label and binary classification tasks, respectively. Importantly, on average the number of tokens in a document from the ECHR Violation Dataset is more than 1,600. While the documents in the Overruling Task Dataset are shorter (the maximum number of tokens is 204). We thoroughly compare several techniques for adapting BERT on long documents and compare different models pretrained on the legal and other domains. Our experimental results show that we need to explicitly adapt BERT to handle long documents, as the truncation leads to less effective performance. We also found that pre-training on the documents that are similar to the target task would result in more effective performance on several scenario.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Bidirectional Encoder Representations from Transformers (BERT) has achieved state-ofthe-art performances on several text classification tasks, such as GLUE and sentiment analysis. Recent work in the legal domain started to use BERT on tasks, such as legal judgement prediction and violation prediction. A common practise in using BERT is to fine-tune a pre-trained model on a target task and truncate the input texts to the size of the BERT input (e.g. at most 512 tokens). However, due to the unique characteristics of legal documents, it is not clear how to effectively adapt BERT in the legal domain. In this work, we investigate how to deal with long documents, and how is the importance of pre-training on documents from the same domain as the target task. We conduct experiments on the two recent datasets: ECHR Violation Dataset and the Overruling Task Dataset, which are multi-label and binary classification tasks, respectively. Importantly, on average the number of tokens in a document from the ECHR Violation Dataset is more than 1,600. While the documents in the Overruling Task Dataset are shorter (the maximum number of tokens is 204). We thoroughly compare several techniques for adapting BERT on long documents and compare different models pretrained on the legal and other domains. Our experimental results show that we need to explicitly adapt BERT to handle long documents, as the truncation leads to less effective performance. We also found that pre-training on the documents that are similar to the target task would result in more effective performance on several scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent advances in deep learning contribute to effective performances of several natural language processing (NLP) tasks on legal text documents, such as, violation prediction (Chalkidis et al., 2020) , overruling prediction (Zheng et al., 2021) , legal judgement prediction (Chalkidis et al., 2019) , legal information extraction (Chalkidis et al., 2018) , and court opinion generation (Ye et al., 2018) .",
"cite_spans": [
{
"start": 195,
"end": 200,
"text": "2020)",
"ref_id": "BIBREF4"
},
{
"start": 225,
"end": 245,
"text": "(Zheng et al., 2021)",
"ref_id": "BIBREF21"
},
{
"start": 275,
"end": 299,
"text": "(Chalkidis et al., 2019)",
"ref_id": null
},
{
"start": 331,
"end": 355,
"text": "(Chalkidis et al., 2018)",
"ref_id": null
},
{
"start": 387,
"end": 404,
"text": "(Ye et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) has gained attentions from the NLP community due to its effectiveness on several NLP tasks (Chalkidis et al., 2019 (Chalkidis et al., , 2020 Zheng et al., 2021) . Importantly, the effectiveness of BERT is mainly due to the transfer learning ability that leverages semantic and syntactic knowledge from pre-training on a large non-labeled corpus (Devlin et al., 2019; Chalkidis et al., 2020; Zheng et al., 2021) . However, Chalkidis et al. (2019) reported that BERT could not effectively handle long documents in the European Court of Human Rights (ECHR) dataset. In addition, pre-training BERT is costly. We need access to a special type of machines to pre-train BERT on a large corpora (Devlin et al., 2019; Liu et al., 2019; Zheng et al., 2021) .",
"cite_spans": [
{
"start": 63,
"end": 84,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 176,
"end": 199,
"text": "(Chalkidis et al., 2019",
"ref_id": null
},
{
"start": 200,
"end": 225,
"text": "(Chalkidis et al., , 2020",
"ref_id": null
},
{
"start": 226,
"end": 245,
"text": "Zheng et al., 2021)",
"ref_id": "BIBREF21"
},
{
"start": 430,
"end": 451,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 452,
"end": 475,
"text": "Chalkidis et al., 2020;",
"ref_id": null
},
{
"start": 476,
"end": 495,
"text": "Zheng et al., 2021)",
"ref_id": "BIBREF21"
},
{
"start": 498,
"end": 530,
"text": "However, Chalkidis et al. (2019)",
"ref_id": null
},
{
"start": 772,
"end": 793,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 794,
"end": 811,
"text": "Liu et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 812,
"end": 831,
"text": "Zheng et al., 2021)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we investigate how to effectively adapt BERT to handle long documents, and how importance of pre-training on in-domain documents. Specifically, we will focus on two legal document prediction tasks, including ECHR Violation Dataset (Chalkidis et al., 2021) and Overruling Task Dataset (Zheng et al., 2021) . The ECHR Violation Dataset provides a multi-label classification task. Given a list of facts described in free-texts, the task is to identify which articles of the European Convention were violated. The Overruling Task is a binary classification task to predict whether a legal statement will be later overruled by the same or higher ranking court (Zheng et al., 2021) . We will discuss more about the two tasks in Section 4.2.",
"cite_spans": [
{
"start": 237,
"end": 269,
"text": "Dataset (Chalkidis et al., 2021)",
"ref_id": null
},
{
"start": 298,
"end": 318,
"text": "(Zheng et al., 2021)",
"ref_id": "BIBREF21"
},
{
"start": 669,
"end": 689,
"text": "(Zheng et al., 2021)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this paper are threefold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We investigate how to effectively adapt BERT to deal with long documents (i.e. documents containing more than 512 tokens). 2. We analyse the impacts of pre-training on different types of documents, especially in-domain documents, on the performance of a fine-tuned BERT model. 3. We thoroughly evaluate the approaches to adapt BERT on long documents and pretrained models to identify best practises for using BERT in legal document classification tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is organised as follows. Section 2 further discusses related work and positions our work in the literature. Section 3 describes the two research questions we aim to answer in this paper and how we will find the answers. Sections 4 and 5 discuss our experimental setup and results. Section 6 provides more insight from the experimental results and answers the two research questions. Finally, we provide concluding remarks in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Legal documents, such as EU & UK legislation, European Court of Human Rights (ECHR) cases, Case Holdings On Legal Decisions (CaseHOLD) are normally written in a descriptive language in a non-structured text format and have unique characteristics that are different from those of other domains. In order to advance Legal NLP research, several tasks and datasets have been developed, including violation prediction on the ECHR dataset (Chalkidis et al., 2020), court overruling (Zheng et al., 2021) , legal docket classification (Nallapati and Manning, 2008) and court view generation (Ye et al., 2018) . In this work, we focus on text classification, which is a main research area of legal NLP.",
"cite_spans": [
{
"start": 476,
"end": 496,
"text": "(Zheng et al., 2021)",
"ref_id": "BIBREF21"
},
{
"start": 527,
"end": 556,
"text": "(Nallapati and Manning, 2008)",
"ref_id": "BIBREF13"
},
{
"start": 583,
"end": 600,
"text": "(Ye et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Bidirectional Encoder Representations from Transformers (BERT) is a language representation model that is optimized during pre-training by selftraining using a masked language model prediction and a next sentence prediction as a joint training objective (Devlin et al., 2019) . As shown in Figure 1 , BERT model architecture is built upon a multilayer bidirectional Transformer encoder of Vaswani et al. (2017) , where the number of input tokens is limited to 512. Pre-training BERT enables effective transfer learning from a large dataset before finetuning the model on a specific task (Devlin et al., 2019; Vaswani et al., 2017) . Importantly, Devlin et al. (2019) used this transfer learning method to achieve the state-of-the-art performance on several NLP datasets, such as GLUE (Wang et al., 2018) , SQuAD (Rajpurkar et al., 2016) , Concept Nor-malisation Collier, 2015, 2016) and Novel Named Entity Recognition (Derczynski et al., 2017) . In particular, when fine-tuning BERT, we normally add a classification layer (either Soft-Max or Sigmoid) on the C (or CLS) representation in BERT output layer, in order to compute the prediction probabilities as in Figure 1 . In the legal domain, Zheng et al. (2021) found that pre-training BERT on legal documents before fine-tuning on particular tasks lead to a better performance than pre-training BERT on general documents. However, Chalkidis et al. 2019found that BERT did not perform well on the violation prediction task due to the length of the documents that are mostly longer than 512 tokens. They dealt with the long legal documents by using a hierarchical BERT technique (Chalkidis et al., 2019). Difference from the previous work, we investigate the effectiveness of variances of pre-trained BERT-based models and compare several methods to handle the long legal documents in legal text classification.",
"cite_spans": [
{
"start": 254,
"end": 275,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 389,
"end": 410,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF17"
},
{
"start": 587,
"end": 608,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 609,
"end": 630,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 646,
"end": 666,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF7"
},
{
"start": 784,
"end": 803,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 812,
"end": 836,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 862,
"end": 882,
"text": "Collier, 2015, 2016)",
"ref_id": null
},
{
"start": 918,
"end": 943,
"text": "(Derczynski et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 1194,
"end": 1213,
"text": "Zheng et al. (2021)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 290,
"end": 298,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1162,
"end": 1170,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
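To make the fine-tuning setup described above concrete, the following is a minimal PyTorch/transformers sketch (not the authors' code) that places a classification layer on the [CLS] representation of BERT's output; the checkpoint name, label count, and example input are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a classification layer on top of the
# [CLS] representation of a pre-trained BERT encoder, as described above.
# The checkpoint name and number of labels are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

class BertClassifier(torch.nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = torch.nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] token representation
        return self.classifier(cls)         # logits; apply softmax or sigmoid as needed

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertClassifier()
batch = tokenizer(["The applicant complained that ..."], padding=True,
                  truncation=True, max_length=512, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
```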
{
"text": "Several attempts (Beltagy et al., 2020; Zaheer et al., 2020; Pappagari et al., 2019) have been made to enable BERT-like models to work on documents with more than 512 tokens. For example, Beltagy et al. 2020and Zaheer et al. 2020used several different attention-mechanism techniques, such as global attentions and sliding window attentions to enable learning on a longer number of tokens. Pappagari et al. (2019) investigated different approaches to apply BERT on sequence chunks of texts in a document before aggregating the features using techniques, such as max pooling and mean pooling. In this work, we adapt these techniques to learn how to effectively use BERT on long legal documents.",
"cite_spans": [
{
"start": 17,
"end": 39,
"text": "(Beltagy et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 40,
"end": 60,
"text": "Zaheer et al., 2020;",
"ref_id": "BIBREF20"
},
{
"start": 61,
"end": 84,
"text": "Pappagari et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This section discusses research questions we aim to answer in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Questions",
"sec_num": "3"
},
{
"text": "For legal text classification, does pre-training on the in-domain documents lead to a more effective performance than pre-training on general documents?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ1",
"sec_num": null
},
{
"text": "To answer the first research question, we compare the performance of variances of BERT-based models that are pre-trained on general documents or different types of legal documents. Examples of the models are BERT (Devlin et al., 2019) and LEGAL-BERT (Chalkidis et al., 2020) . The complete list of models will be described in Section 4.3. We fine-tune the models on the violation prediction and court overruling prediction tasks. We provide detailed information about the tasks in Section 4.2.",
"cite_spans": [
{
"start": 213,
"end": 234,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 239,
"end": 274,
"text": "LEGAL-BERT (Chalkidis et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RQ1",
"sec_num": null
},
{
"text": "RQ2 How to adapt BERT-based models to effectively deal with long documents in legal text classification?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ1",
"sec_num": null
},
{
"text": "For RQ2, we discuss the performances of several BERT variances (including truncating long documents from the front or from the back), as well as hierarchical BERT models (Pappagari et al., 2019 ) that learn to combine output vectors of BERT using different strategies, such as, max pooling (Krizhevsky et al., 2012) , and mean pooling (Krizhevsky et al., 2012) before applying a classification layer.",
"cite_spans": [
{
"start": 170,
"end": 193,
"text": "(Pappagari et al., 2019",
"ref_id": "BIBREF14"
},
{
"start": 290,
"end": 315,
"text": "(Krizhevsky et al., 2012)",
"ref_id": "BIBREF8"
},
{
"start": 335,
"end": 360,
"text": "(Krizhevsky et al., 2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RQ1",
"sec_num": null
},
{
"text": "In Section 3, we have discussed the two main research questions to be investigated in this paper. In this section, we discuss the hyper-parameters of our models in Section 4.1. Then, we provide the details of the two legal text classification datasets (Section 4.2) and the variances of the BERT models (Section 4.3) used in the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We use the transformers library 1 to develop and train BERT models in our experiments. For all experiments, we fine-tune the models using AdamW optimizer (Loshchilov and Hutter, 2017) , learning rate of 5e-5 and a linear learning-rate scheduler. We 1 https://huggingface.co/transformers/ used a batch size of 16 and fine-tune the models on individual tasks for 5 epochs 2 .",
"cite_spans": [
{
"start": 154,
"end": 183,
"text": "(Loshchilov and Hutter, 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyper-parameters",
"sec_num": "4.1"
},
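A minimal training-loop sketch of the hyper-parameters reported above (AdamW, learning rate 5e-5, linear schedule, batch size 16, 5 epochs); the dataset handling and model head are assumptions, not the authors' implementation.

```python
# Sketch of the reported fine-tuning setup; not the authors' implementation.
# Assumes train_dataset yields dicts with input_ids, attention_mask and labels.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup

def fine_tune(train_dataset, model_name="bert-base-uncased", num_labels=2,
              epochs=5, batch_size=16, lr=5e-5):
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)
    loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=0, num_training_steps=epochs * len(loader))
    model.train()
    for _ in range(epochs):
        for batch in loader:
            loss = model(**batch).loss      # loss computed by the classification head
            loss.backward()
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()
    return model
```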
{
"text": "The dataset contains 11k cases from the European Convention of Human Rights public database (Chalkidis et al., 2021). Each case contains a list of paragraphs representing facts in the case. The task is to predict which of the human right articles of the Convention are violated (if any) in a given case. The number of target labels are 40 ECHR articles (Chalkidis et al., 2021). Table 1 provides statistical information of the ECHR Violation (Multi-Label) dataset. In particular, the dataset is separated into 3 folds: training, development and testing with the number of data points (cases) of 9,000, 1,000 and 1,000, respectively. On average, the number of tokens within a case is between 1,619 -1,926, which are more than 512 tokens supported by BERT.",
"cite_spans": [],
"ref_spans": [
{
"start": 379,
"end": 386,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "ECHR Violation (Multi-Label) Dataset",
"sec_num": "4.2.1"
},
{
"text": "This is a multi-label classification task where we follow Chalkidis et al. 2021and evaluate the classification performance in terms of micro-F1 score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ECHR Violation (Multi-Label) Dataset",
"sec_num": "4.2.1"
},
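For reference, micro-F1 over a multi-label prediction matrix can be computed with scikit-learn as below; the label matrices are toy values, not results from the dataset.

```python
# Toy illustration of micro-F1 for multi-label predictions (rows: cases,
# columns: ECHR articles); the values are made up, not dataset outputs.
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 1]])
print(f1_score(y_true, y_pred, average="micro"))  # pooled over all (case, article) decisions
```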
{
"text": "This dataset is composes of 2,400 data-points, which are legal statements that are either overruled or not overruled by the same or the higher ranked court (Sulea et al., 2017; Zheng et al., 2021) .",
"cite_spans": [
{
"start": 156,
"end": 176,
"text": "(Sulea et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 177,
"end": 196,
"text": "Zheng et al., 2021)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overruling Task Dataset",
"sec_num": "4.2.2"
},
{
"text": "We show the statistics of the Overruling Task Dataset in Table 2 . The average and the maximum number of tokens within a statement (i.e. case) is 21.94 and 204, respectively. Therefore, the BERT model should directly support this dataset without any alteration.",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 64,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Overruling Task Dataset",
"sec_num": "4.2.2"
},
{
"text": "Follow Zheng et al. (2021) , the task is modeled as a binary classification, where we conduct a 10 folds cross-validation on the dataset. Finally, we report the average of the F1-score across the 10 folds with a standard deviation value (Zheng et al., 2021) .",
"cite_spans": [
{
"start": 7,
"end": 26,
"text": "Zheng et al. (2021)",
"ref_id": "BIBREF21"
},
{
"start": 237,
"end": 257,
"text": "(Zheng et al., 2021)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overruling Task Dataset",
"sec_num": "4.2.2"
},
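The evaluation protocol (10-fold cross-validation, reporting the mean and standard deviation of F1) could be sketched as follows; train_and_predict is a hypothetical stand-in for fine-tuning a model on the training fold and predicting on the held-out fold.

```python
# Sketch of the evaluation protocol only; train_and_predict is a hypothetical
# callable that fine-tunes a model on the training fold and returns predictions
# for the held-out fold.
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

def cross_validate(texts, labels, train_and_predict, n_splits=10, seed=0):
    texts, labels = np.asarray(texts), np.asarray(labels)
    scores = []
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in folds.split(texts, labels):
        preds = train_and_predict(texts[train_idx], labels[train_idx], texts[test_idx])
        scores.append(f1_score(labels[test_idx], preds))
    return float(np.mean(scores)), float(np.std(scores))  # mean F1 and its standard deviation
```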
{
"text": "Next, we discuss the variances of adapting *Model, which is a pre-trained BERT-based model from \u2022 RR-*Model -Remove tokens in the rear of the input texts if the length is more than 512 and fine-tune the model on each classification task (similar to vanilla BERT (Devlin et al., 2019) ).",
"cite_spans": [
{
"start": 262,
"end": 283,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variances",
"sec_num": "4.3"
},
{
"text": "\u2022 RF-*Model -Remove tokens in the front of the input texts if the length is more than 512 and fine-tune the model on each classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variances",
"sec_num": "4.3"
},
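A minimal sketch of the two truncation variants above (RR-* keeps the front of the text, RF-* keeps the back), assuming a transformers version whose tokenizers expose the truncation_side attribute; the checkpoint name is illustrative, and the remaining variants continue below.

```python
# Sketch of the RR-* / RF-* truncation variants; the checkpoint name is illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def encode_rr(text):
    # RR-*: drop tokens from the rear (the tokenizer's default truncation side)
    tokenizer.truncation_side = "right"
    return tokenizer(text, truncation=True, max_length=512, return_tensors="pt")

def encode_rf(text):
    # RF-*: drop tokens from the front, keeping the end of the document
    tokenizer.truncation_side = "left"
    return tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
```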
{
"text": "\u2022 MeanPool-*Model -Apply the model on every chunk of n tokens before using a mean function to average the features from the same dimensions of the output vector representations of the chunks. Then, use a classification layer for each classification task. In this work, we set n = 200.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variances",
"sec_num": "4.3"
},
{
"text": "\u2022 MaxPool-*Model -Apply the model on every chunk of n tokens before using a max function to select features from each dimension, based on the highest scores among the same dimensions of the output vector representations of the chunks, as a final vector representation. Then, use a classification layer for each classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variances",
"sec_num": "4.3"
},
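A minimal sketch of the MeanPool-*/MaxPool-* variants above: chunk the token sequence, encode each chunk, pool the per-chunk [CLS] vectors, then classify. This is not the authors' implementation; the checkpoint name, label count, and batching details are assumptions.

```python
# Sketch of the chunk-and-pool variants described above; not the authors' code.
# Checkpoint name, label count and batching details are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

class ChunkPoolClassifier(torch.nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=40,
                 chunk_size=200, pooling="max"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = torch.nn.Linear(self.encoder.config.hidden_size, num_labels)
        self.chunk_size, self.pooling = chunk_size, pooling

    def forward(self, text):
        ids = self.tokenizer(text, add_special_tokens=False)["input_ids"]
        chunks = [ids[i:i + self.chunk_size] for i in range(0, len(ids), self.chunk_size)]
        cls_id, sep_id = self.tokenizer.cls_token_id, self.tokenizer.sep_token_id
        cls_vectors = []
        for chunk in chunks:
            input_ids = torch.tensor([[cls_id] + chunk + [sep_id]])
            out = self.encoder(input_ids=input_ids)
            cls_vectors.append(out.last_hidden_state[:, 0])        # [CLS] of this chunk
        stacked = torch.cat(cls_vectors, dim=0)                    # (num_chunks, hidden_size)
        pooled = stacked.max(dim=0).values if self.pooling == "max" else stacked.mean(dim=0)
        return self.classifier(pooled)                             # one logit per label
```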
{
"text": "In addition, we include other two baselines that use different attention techniques, in order to cope with document longer than 512 tokens:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variances",
"sec_num": "4.3"
},
{
"text": "\u2022 BigBird -Fine-tuning the BigBird from Zaheer et al. (2020) , which was pre-trained using English language corpora, such as BookCorpus and English portion of the CommonCrawl News, on each classification task. BigBird is a variance of BERT that uses several attention techniques, such as, random attention, window attention and global attention, so that it can deal with documents longer than 512 tokens.",
"cite_spans": [
{
"start": 54,
"end": 60,
"text": "(2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variances",
"sec_num": "4.3"
},
{
"text": "\u2022 LongFormer -Fine-tuning the LongFormer from Beltagy et al. (2020) , which was pre-trained using BookCorpus and English Wikipedia, on each classification task. Long-Former is a variance of BERT that uses several attention techniques, such as, sliding window attention, dilated sliding window, and global attention, so that it can handle documents longer than 512 tokens.",
"cite_spans": [
{
"start": 61,
"end": 67,
"text": "(2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variances",
"sec_num": "4.3"
},
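The sketch below loads one of these long-input baselines with a sequence-classification head; the Hugging Face checkpoint names are assumptions, since the paper does not list the exact checkpoints used.

```python
# Sketch of fine-tuning a long-input baseline with a sequence-classification head.
# The checkpoint names are assumptions; the paper does not specify them.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "allenai/longformer-base-4096"   # or "google/bigbird-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=40)

batch = tokenizer(["The facts of the case ..."], truncation=True, max_length=4096,
                  padding=True, return_tensors="pt")
logits = model(**batch).logits                # train with the same loop as for BERT
```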
{
"text": "In this section, we compare the variances of BERTbased models on the ECHR Violation Dataset (Section 5.1) and Overruling Task Dataset (Section 5.2), respectively. Table 4 reports the performances in terms of Micro-F1 score on different approaches to deal with long legal documents. First, when comparing the performance of different BERT pre-trained models, we found that *-ECHR-Legal-BERT outperformed the other pretrained models across all of the methods used for adapting BERT to deal with long documents. This finding supported that pre-training BERT on the documents that are more similar to the task would lead to a better performance. Please note that *-ECHR-Legal-BERT was pre-training using documents from the ECHR Violation Dataset, as mentioned in Table 3 . Moreover, we observed that *-RoBERTa performs comparably to *-Harvard-Law-BERT. This provides an insight that if the in-domain documents (or documents similar to the task) are limited, pre-training the model on a large corpus could also lead to an effective performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 759,
"end": 766,
"text": "Table 3",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "Next, as shown in Table 4 , when comparing the performance of RR-* and RF-* in Table 4 , we found that the micro F-1 scores of RR-* (e.g. 0.6466 for BERT) is worse than those of the corresponding RF-* (e.g. 0.6803 for BERT). This shows",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 79,
"end": 86,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "ECHR Violation Dataset",
"sec_num": "5.1"
},
{
"text": "The BERT (bert-base-uncased) from Devlin et al. (2019) , which were pre-trained using BookCorpus and English Wikipedia. ECHR-Legal-BERT The BERT (bert-base-uncased) from Chalkidis et al. (2020) , which were pre-trained using legal documents including the ECHR dataset. Harvard-Law-BERT The BERT (bert-base-uncased) from Zheng et al. (2021) , which were pre-trained using the entire Harvard Law case corpus.",
"cite_spans": [
{
"start": 34,
"end": 54,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF7"
},
{
"start": 187,
"end": 193,
"text": "(2020)",
"ref_id": "BIBREF4"
},
{
"start": 320,
"end": 339,
"text": "Zheng et al. (2021)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": null
},
{
"text": "The RoBERTa (roberta-base) from Liu et al. (2019) , which were pre-trained using English language corpora, such as BookCorpus and English portion of the CommonCrawl News. RoBERTa is a variance of BERT which trains only to optimize the dymamic masking language model. that for this ECHR Violation Dataset, the back sections of the cases are more important than the front sections. Importantly, removing texts at the back of the input as suggested by Devlin et al. (2019) could lead to a poor performance. In addition, the best approach, MaxPool-ECHR-Legal-BERT, achieved 0.7213 Micro F-1 score, which was significantly better than any of the RR-* and RF-*, supported that truncation worsened the classification performance.",
"cite_spans": [
{
"start": 32,
"end": 49,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF11"
},
{
"start": 449,
"end": 469,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RoBERTa",
"sec_num": null
},
{
"text": "Finally, we observed that BigBird and Long-Former (Micro-F1 score 0.7308 and 0.7238, respectively) outperformed the other baselines that adapted BERT to deal with longer documents. This supported that BigBird and LongFormer that were explicitly designed to with long documents using different variances of attention techniques could lead to a better performance than aggregating results from applying BERT on multiple chunks of text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RoBERTa",
"sec_num": null
},
{
"text": "In this section, we discuss the performances on the Overruling Task Dataset. As discussed in Section 4.2.2, the lengths of the documents in this dataset are shorter than 512 tokens. Therefore, we can directly use BERT without any changes. Table 5 reported the performance in terms of F-1 score averaged across 10 folds cross-validation, along with the standard deviation (STD). From Table 5 , we observed that Harvard-Law-BERT and ECHR-Legal-BERT achieved the best and the 2nd best performances (0.9756 and 0.9725, respectively). This supported the impacts of pretraining on the in-domain documents. Meanwhile, RoBERTa achieved the 3 rank (0.9683 F-1 score) demonstrated that if no in-domain documents available, pre-training on a large corpus could also be effective. These results are inline with the findings in Section 5.1.",
"cite_spans": [],
"ref_spans": [
{
"start": 239,
"end": 246,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 383,
"end": 390,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Overruling Task Dataset",
"sec_num": "5.2"
},
{
"text": "On the other hand, BigBird and LongFormer (0.9570 and 0.9569, respectively) performed a marginally worse than the other approaches. This could be due to the fact that BigBird and Long-Former are explicitly modelled to deal with long documents. Specifically, for shorter documents al- lowing multi-head attentions to freely attend to any tokens would lead to a more effective performance than restricting them to be on particular sliding windows or specific areas (e.g. global attentions or randomized attentions).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overruling Task Dataset",
"sec_num": "5.2"
},
{
"text": "In this section, we provide further discussions on the experimental results from Section 5, in order to answer the research questions posed in Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "For legal text classification, does pre-training on the in-domain documents lead to a more effective performance than pre-training on general documents?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ1",
"sec_num": null
},
{
"text": "Yes, based on the experiments on both datasets, the model pre-trained on documents in the legal domain (ECHR-Legal-BERT and Harvard-Law-BERT) achieved the highest performance as shown in Tables 4 and 5, respectively. In addition, as discussed in Sections 5.1 and 5.2, RoBERTa achieved competitive performances on both datasets supported that pre-training on a large dataset could be a good option, if in-domain data cannot be obtained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ1",
"sec_num": null
},
{
"text": "RQ2 How to adapt BERT-based models to effectively deal with long documents in legal text classification?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ1",
"sec_num": null
},
{
"text": "From the performances of RRand RF-, in Section 5.1, we found that truncating long documents (on either ends) lessened the classification performance due to the lost of data. From the experimental result reported in Table 4 , BigBird and Longformer (even though not pre-trained on in-domain documents) outperformed other approaches that adapted BERT to deal with long documents. This highlighted the importance of explicitly handling long documents during designing the model architecture. Next, both MaxPool-* and MeanPool-* achieved F-1 performances that are markedly better than the other approaches. Therefore, it is the most desirable to use BigBird or Longformer that were explicitly designed to deal with long legal documents. An alternative method but less effective is to apply BERT on chunks of n tokens before using appropriate function (e.g. max or mean) to aggregate the vector representation across all the chunks before applying a classification layer, as described in Section 4.3.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 222,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "RQ1",
"sec_num": null
},
{
"text": "We have discussed the challenges of using BERT for text classification in the legal domains, and posed two research questions regarding the pretrain documents and how to cope with long documents. To answer the research questions, we conducted the experiments on the ECHR Violation Dataset and the Overruling Task Dataset. Our experimental result showed that the models pretrained on the domain similar to the task enhanced the performance. In addition, the experiments on ECHR Violation Dataset supported that truncating or discarding parts of a document resulted in a poor performance. Importantly, BigBird and Longformer, which explicitly handled long documents using different attention techniques, achieved the best performance on long legal document classification. Alternatively, applying BERT on chunks of texts before aggregating the vector representation across all of the chunks using an appropriate function (e.g. max or mean) could achieve a reasonable result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Our preliminary results showed that 5 epochs resulted in most effective performances for most of the used models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Mean F1 \u00b1 STD BERT 0.9656 \u00b1 0.010 ECHR-Legal-BERT 0.9725 \u00b1 0.005 Harvard-Law-BERT 0.9756 \u00b10.010 RoBERTa 0.9683 \u00b1 0.010 BigBird 0.9570 \u00b1 0.010 LongFormer 0.9569 \u00b1 0.009",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Longformer: The long-document transformer",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.05150"
]
},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural legal judgment prediction in english",
"authors": [],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4317--4323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. 2019. Neural legal judgment prediction in english. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4317-4323.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Obligation and prohibition extraction using hierarchical rnns",
"authors": [],
"year": 2018,
"venue": "Ion Androutsopoulos, and Achilleas Michos",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.03871"
]
},
"num": null,
"urls": [],
"raw_text": "Ilias Chalkidis, Ion Androutsopoulos, and Achilleas Michos. 2018. Obligation and prohibition ex- traction using hierarchical rnns. arXiv preprint arXiv:1805.03871.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilias Chalkidis, Manos Fergadiotis, Prodromos Malaka- siotis, Nikolaos Aletras, and Ion Androutsopoulos.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "preparing the muppets for court",
"authors": [
{
"first": "",
"middle": [],
"last": "Legal-Bert",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
"volume": "",
"issue": "",
"pages": "2898--2904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Legal-bert:\"preparing the muppets for court'\". In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2898-2904.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Paragraph-level rationale extraction through regularization: A case study on european court of human rights cases",
"authors": [],
"year": 2021,
"venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilias Chalkidis, Manos Fergadiotis, Dimitrios Tsarapat- sanis, Nikolaos Aletras, Ion Androutsopoulos, and Prodromos Malakasiotis. 2021. Paragraph-level ra- tionale extraction through regularization: A case study on european court of human rights cases. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computa- tional Linguistics, Mexico City, Mexico. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Results of the wnut2017 shared task on novel and emerging entity recognition",
"authors": [
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nichols",
"suffix": ""
},
{
"first": "Marieke",
"middle": [],
"last": "Van Erp",
"suffix": ""
},
{
"first": "Nut",
"middle": [],
"last": "Limsopatham",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 3rd Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "140--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the wnut2017 shared task on novel and emerging entity recogni- tion. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 140-147.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. pages 4171-4186.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Imagenet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in neural information processing systems",
"volume": "25",
"issue": "",
"pages": "1097--1105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classification with deep convo- lutional neural networks. Advances in neural infor- mation processing systems, 25:1097-1105.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adapting phrase-based machine translation to normalise medical terms in social media messages",
"authors": [
{
"first": "Nut",
"middle": [],
"last": "Limsopatham",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1675--1680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nut Limsopatham and Nigel Collier. 2015. Adapting phrase-based machine translation to normalise med- ical terms in social media messages. In Proceed- ings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1675-1680.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Normalising medical concepts in social media texts by learning semantic representation",
"authors": [
{
"first": "Nut",
"middle": [],
"last": "Limsopatham",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1014--1023",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nut Limsopatham and Nigel Collier. 2016. Normalis- ing medical concepts in social media texts by learn- ing semantic representation. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1014-1023.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Legal docket classification: Where machine learning stumbles",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "438--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati and Christopher D Manning. 2008. Legal docket classification: Where machine learning stumbles. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 438-446.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hierarchical transformers for long document classification",
"authors": [
{
"first": "Raghavendra",
"middle": [],
"last": "Pappagari",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Zelasko",
"suffix": ""
},
{
"first": "Jes\u00fas",
"middle": [],
"last": "Villalba",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)",
"volume": "",
"issue": "",
"pages": "838--844",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raghavendra Pappagari, Piotr Zelasko, Jes\u00fas Villalba, Yishay Carmiel, and Najim Dehak. 2019. Hierar- chical transformers for long document classification. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 838-844. IEEE.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Squad: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Exploring the use of text classification in the legal domain",
"authors": [
{
"first": "Octavia-Maria",
"middle": [],
"last": "Sulea",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Mihaela",
"middle": [],
"last": "Vela",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Liviu",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.09306"
]
},
"num": null,
"urls": [],
"raw_text": "Octavia-Maria Sulea, Marcos Zampieri, Shervin Mal- masi, Mihaela Vela, Liviu P Dinu, and Josef Van Genabith. 2017. Exploring the use of text classification in the legal domain. arXiv preprint arXiv:1710.09306.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Glue: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Interpretable charge predictions for criminal cases: Learning to generate court views from fact descriptions",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Zhunchen",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Wenhan",
"middle": [],
"last": "Chao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1854--1864",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Ye, Xin Jiang, Zhunchen Luo, and Wenhan Chao. 2018. Interpretable charge predictions for criminal cases: Learning to generate court views from fact descriptions. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 1854-1864.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Big bird: Transformers for longer sequences",
"authors": [
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Guru",
"middle": [],
"last": "Guruganesh",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Kumar Avinava Dubey",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Ainslie",
"suffix": ""
},
{
"first": "Santiago",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Ontanon",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Qifan",
"middle": [],
"last": "Ravula",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago On- tanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. In NeurIPS.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "When does pretraining help? assessing self-supervised learning for law and the casehold dataset of 53,000+ legal holdings",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Neel",
"middle": [],
"last": "Guha",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Brandon R Anderson",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"E"
],
"last": "Henderson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ho",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law",
"volume": "",
"issue": "",
"pages": "159--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Zheng, Neel Guha, Brandon R Anderson, Peter Henderson, and Daniel E Ho. 2021. When does pre- training help? assessing self-supervised learning for law and the casehold dataset of 53,000+ legal hold- ings. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, pages 159-168.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "An example of fine-tuning BERT model on a classification task.",
"num": null
},
"TABREF0": {
"text": "to deal with long documents in the experiments. The used methods are as follows:",
"type_str": "table",
"content": "<table><tr><td>Fold</td><td colspan=\"7\"># Cases Max # Words Min # Words Avg. # Words Max # Labels Min # Labels Avg. # Labels</td></tr><tr><td>Training</td><td>9,000</td><td>35,426</td><td>69</td><td>1619.24</td><td>10</td><td>0</td><td>1.8</td></tr><tr><td>Development</td><td>1,000</td><td>14,493</td><td>84</td><td>1,784.03</td><td>7</td><td>0</td><td>1.7</td></tr><tr><td>Testing</td><td>1,000</td><td>15,919</td><td>101</td><td>1,925.73</td><td>6</td><td>1</td><td>1.7</td></tr><tr><td/><td/><td colspan=\"4\">Table 1: Statistics: ECHR Violation (Multi-Label).</td><td/><td/></tr><tr><td/><td># Cases</td><td/><td>2,400</td><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">Max # Words</td><td>204</td><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">Min # Words</td><td>1</td><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">Avg. # Words</td><td>21.94</td><td/><td/><td/><td/></tr><tr><td colspan=\"4\">Ratio of Negative:Positive Labels 1:1.03</td><td/><td/><td/><td/></tr></table>",
"html": null,
"num": null
},
"TABREF1": {
"text": "Statistics: Overruling Task Dataset.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF2": {
"text": "Pre-trained BERT-based Models used in the experiment.",
"type_str": "table",
"content": "<table><tr><td>Approach</td><td>Micro</td></tr><tr><td/><td>F-1</td></tr><tr><td>RR-BERT</td><td>0.6466</td></tr><tr><td>RR-ECHR-Legal-BERT</td><td>0.6699</td></tr><tr><td>RR-Harvard-Law-BERT</td><td>0.6590</td></tr><tr><td>RR-RoBERTa</td><td>0.6656</td></tr><tr><td>RF-BERT</td><td>0.6803</td></tr><tr><td>RF-ECHR-Legal-BERT</td><td>0.7090</td></tr><tr><td>RF-Harvard-Law-BERT</td><td>0.6896</td></tr><tr><td>RF-RoBERTa</td><td>0.6925</td></tr><tr><td>MeanPool-BERT</td><td>0.7075</td></tr><tr><td colspan=\"2\">MeanPool-ECHR-Legal-BERT 0.7196</td></tr><tr><td colspan=\"2\">MeanPool-Harvard-Law-BERT 0.7009</td></tr><tr><td>MeanPool-RoBERTa</td><td>0.6949</td></tr><tr><td>MaxPool-BERT</td><td>0.7110</td></tr><tr><td colspan=\"2\">MaxPool-ECHR-Legal-BERT 0.7213</td></tr><tr><td colspan=\"2\">MaxPool-Harvard-Law-BERT 0.7010</td></tr><tr><td>MaxPool-RoBERTa</td><td>0.7000</td></tr><tr><td>BigBird</td><td>0.7308</td></tr><tr><td>LongFormer</td><td>0.7238</td></tr></table>",
"html": null,
"num": null
},
"TABREF3": {
"text": "Comparing the performances on ECHR Violation Dataset.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF4": {
"text": "Comparing the performances, in terms of F1score, of different BERT pre-trainings on the Overruling Task Dataset.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}