{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:46:48.029338Z"
},
"title": "jurBERT: A Romanian BERT Model for Legal Judgement Prediction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Masala",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University Politehnica of Bucharest",
"location": {
"addrLine": "313 Splaiul Independentei",
"postCode": "060042",
"settlement": "Bucharest",
"country": "Romania"
}
},
"email": "mihai_dan.masala@upb.ro"
},
{
"first": "Radu",
"middle": [],
"last": "Iacob",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University Politehnica of Bucharest",
"location": {
"addrLine": "313 Splaiul Independentei",
"postCode": "060042",
"settlement": "Bucharest",
"country": "Romania"
}
},
"email": "radu.iacob@upb.ro"
},
{
"first": "Ana",
"middle": [
"Sabina"
],
"last": "Uban",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Bucharest",
"location": {
"addrLine": "14 Academiei",
"postCode": "010014",
"settlement": "Bucharest",
"country": "Romania"
}
},
"email": "ana.uban@gmail.com"
},
{
"first": "Marina",
"middle": [],
"last": "Cidota",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Bucharest",
"location": {
"addrLine": "14 Academiei",
"postCode": "010014",
"settlement": "Bucharest",
"country": "Romania"
}
},
"email": "marina.cidota@gmail.com"
},
{
"first": "Horia",
"middle": [],
"last": "Velicu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BRD",
"location": {
"addrLine": "Groupe Societe Generale 1-7 Ion Mihalache",
"postCode": "0111171",
"settlement": "Bucharest",
"country": "Romania"
}
},
"email": "horia.velicu@brd.ro"
},
{
"first": "Traian",
"middle": [],
"last": "Rebedea",
"suffix": "",
"affiliation": {},
"email": "train.rebedea@upb.ro"
},
{
"first": "Marius",
"middle": [],
"last": "Popescu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Bucharest",
"location": {
"addrLine": "14 Academiei",
"postCode": "010014",
"settlement": "Bucharest",
"country": "Romania"
}
},
"email": "marius.popescu@fmi.unibuc.ro"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Transformer-based models have become the de facto standard in the field of Natural Language Processing (NLP). By leveraging large unlabeled text corpora, they enable efficient transfer learning leading to state-of-the-art results on numerous NLP tasks. Nevertheless, for low resource languages and highly specialized tasks, transformer models tend to lag behind more classical approaches (e.g. SVM, LSTM) due to the lack of aforementioned corpora. In this paper we focus on the legal domain and we introduce a Romanian BERT model pre-trained on a large specialized corpus. Our model outperforms several strong baselines for legal judgement prediction on two different corpora consisting of cases from trials involving banks in Romania.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Transformer-based models have become the de facto standard in the field of Natural Language Processing (NLP). By leveraging large unlabeled text corpora, they enable efficient transfer learning leading to state-of-the-art results on numerous NLP tasks. Nevertheless, for low resource languages and highly specialized tasks, transformer models tend to lag behind more classical approaches (e.g. SVM, LSTM) due to the lack of aforementioned corpora. In this paper we focus on the legal domain and we introduce a Romanian BERT model pre-trained on a large specialized corpus. Our model outperforms several strong baselines for legal judgement prediction on two different corpora consisting of cases from trials involving banks in Romania.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, a paradigm shift stormed the entire NLP field. Transformer (Vaswani et al., 2017) blocks allowed the development of large models that efficiently exploit the power of transfer learning. Pre-training transformers on large unlabeled text data, followed by a fast fine-tuning step has become the de facto approach across the field. Moreover, transformer based architectures (Devlin et al., 2019; Liu et al., 2020; Yang et al., 2019; Radford et al., 2018 Radford et al., , 2019 Brown et al., 2020; Zhang et al., 2019) have achieved state-of-the-art results on several generic NLP tasks ranging from natural language understanding (Wang et al., , 2019 Lai et al., 2017) , question answering (Rajpurkar et al., 2018; Reddy et al., 2019) to Textto-SQL (Yu et al., 2018 (Yu et al., , 2019b . Nevertheless, for low resource languages or highly specialized domains, pre-trained language models tend to underperform in part due to the lack of pre-training data or due to the generic nature of these large corpora. For this reason, specific BERT models have been trained and developed for numerous lan-guages such as French (Martin et al., 2020; Le et al., 2020) , Dutch (de Vries et al., 2019; Delobelle et al., 2020) , Romanian (Masala et al., 2020; Dumitrescu et al., 2020) , Finish (Virtanen et al., 2019) , Spanish (Ca\u00f1ete et al., 2020) and for highly specialized domains such as Science (Beltagy et al., 2019) , Legal (Chalkidis et al., 2020) or Biomedical .",
"cite_spans": [
{
"start": 76,
"end": 98,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF42"
},
{
"start": 388,
"end": 409,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF13"
},
{
"start": 410,
"end": 427,
"text": "Liu et al., 2020;",
"ref_id": null
},
{
"start": 428,
"end": 446,
"text": "Yang et al., 2019;",
"ref_id": "BIBREF46"
},
{
"start": 447,
"end": 467,
"text": "Radford et al., 2018",
"ref_id": "BIBREF35"
},
{
"start": 468,
"end": 490,
"text": "Radford et al., , 2019",
"ref_id": "BIBREF36"
},
{
"start": 491,
"end": 510,
"text": "Brown et al., 2020;",
"ref_id": null
},
{
"start": 511,
"end": 530,
"text": "Zhang et al., 2019)",
"ref_id": null
},
{
"start": 643,
"end": 663,
"text": "(Wang et al., , 2019",
"ref_id": "BIBREF44"
},
{
"start": 664,
"end": 681,
"text": "Lai et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 703,
"end": 727,
"text": "(Rajpurkar et al., 2018;",
"ref_id": "BIBREF37"
},
{
"start": 728,
"end": 747,
"text": "Reddy et al., 2019)",
"ref_id": "BIBREF38"
},
{
"start": 762,
"end": 778,
"text": "(Yu et al., 2018",
"ref_id": "BIBREF48"
},
{
"start": 779,
"end": 798,
"text": "(Yu et al., , 2019b",
"ref_id": null
},
{
"start": 1129,
"end": 1150,
"text": "(Martin et al., 2020;",
"ref_id": "BIBREF29"
},
{
"start": 1151,
"end": 1167,
"text": "Le et al., 2020)",
"ref_id": "BIBREF24"
},
{
"start": 1170,
"end": 1199,
"text": "Dutch (de Vries et al., 2019;",
"ref_id": null
},
{
"start": 1200,
"end": 1223,
"text": "Delobelle et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 1235,
"end": 1256,
"text": "(Masala et al., 2020;",
"ref_id": "BIBREF30"
},
{
"start": 1257,
"end": 1281,
"text": "Dumitrescu et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 1291,
"end": 1314,
"text": "(Virtanen et al., 2019)",
"ref_id": "BIBREF43"
},
{
"start": 1325,
"end": 1346,
"text": "(Ca\u00f1ete et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 1398,
"end": 1420,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 1429,
"end": 1453,
"text": "(Chalkidis et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we set out to investigate the possibility of adapting and applying BERT models for legal judgement prediction on a small, noisy dataset, in a low resource language (Romanian). The corpus we use is a realistic representation of the kind of machine-readable data that is available to practitioners in this specialized field. The data, provided by a Romanian bank, is composed of original lawsuit documents, and features the most frequent types of cases pertinent to the banking domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We publicly release the first, to the best of our knowledge, pre-trained BERT models 1 specialized for the Romanian juridical domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose and extensively analyze a general methodology for applying BERT models on real world juridical cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We obtain state-of-the-art results on a small, noisy, highly specialized industry-provided corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The legal domain provides a wide range of different tasks in which NLP techniques can and have been used. Such tasks include detection of argumentative sentences (Moens et al., 2007; Palau and Moens, 2009) , report summarization (Hachey and Grover, 2006; Galgani et al., 2012) or identification of the law areas that are relevant to a case (Boella et al., 2011; \u015eulea et al., 2017; Sulea et al., 2017) . Sulea et al. (2017) propose the usage of an ensemble of Support Vector Machines (SVMs) on word unigram and bigrams to solve three tasks related to French Supreme Court cases: predicting the law area of a case, predicting case ruling and estimating the time span of a given case or ruling. Similarly, Medvedeva et al. (2018) Their experiments show that further finetuning a general BERT model or training one from scratch on juridical data produces state-of-the-art results for legal text classification tasks. While both strategies are valid and the best one might depend on the given task, in our work we decide to pretrain a BERT model from scratch as we employ a significantly larger pre-training corpus (with a raw size of 160GB compared to only 12GB collected by Chalkidis et al. (2020) ). For small and noisy data, such as the real world BRDCases dataset we use in our work, large models may underperform compared to simpler models (Ezen-Can, 2020; Lai et al., 2021) . Lately, string kernels (Lodhi et al., 2000 (Lodhi et al., , 2002 , an efficient character-level comparison technique, have been used with promising results in low resource settings such as native language identification (Ionescu et al., 2016) , dialect identification Ionescu, 2018, 2019) , chat understanding (Masala et al., 2018) or automated essay scoring (Cozma et al., 2018) .",
"cite_spans": [
{
"start": 162,
"end": 182,
"text": "(Moens et al., 2007;",
"ref_id": "BIBREF33"
},
{
"start": 183,
"end": 205,
"text": "Palau and Moens, 2009)",
"ref_id": "BIBREF34"
},
{
"start": 229,
"end": 254,
"text": "(Hachey and Grover, 2006;",
"ref_id": "BIBREF17"
},
{
"start": 255,
"end": 276,
"text": "Galgani et al., 2012)",
"ref_id": "BIBREF16"
},
{
"start": 340,
"end": 361,
"text": "(Boella et al., 2011;",
"ref_id": "BIBREF4"
},
{
"start": 362,
"end": 381,
"text": "\u015eulea et al., 2017;",
"ref_id": null
},
{
"start": 382,
"end": 401,
"text": "Sulea et al., 2017)",
"ref_id": "BIBREF39"
},
{
"start": 404,
"end": 423,
"text": "Sulea et al. (2017)",
"ref_id": "BIBREF39"
},
{
"start": 704,
"end": 727,
"text": "Medvedeva et al. (2018)",
"ref_id": "BIBREF32"
},
{
"start": 1172,
"end": 1195,
"text": "Chalkidis et al. (2020)",
"ref_id": "BIBREF9"
},
{
"start": 1359,
"end": 1376,
"text": "Lai et al., 2021)",
"ref_id": "BIBREF23"
},
{
"start": 1402,
"end": 1421,
"text": "(Lodhi et al., 2000",
"ref_id": "BIBREF28"
},
{
"start": 1422,
"end": 1443,
"text": "(Lodhi et al., , 2002",
"ref_id": "BIBREF27"
},
{
"start": 1599,
"end": 1621,
"text": "(Ionescu et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 1647,
"end": 1667,
"text": "Ionescu, 2018, 2019)",
"ref_id": null
},
{
"start": 1689,
"end": 1710,
"text": "(Masala et al., 2018)",
"ref_id": "BIBREF31"
},
{
"start": 1738,
"end": 1758,
"text": "(Cozma et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The first dataset we employ, RoJur, comprises all the final rulings, containing both civil and criminal cases, published by any Romanian civil court between 2010 and 2018. Each sample contains: a description of the involved parties, a summary of the critical arguments made by the plaintiffs and the defendants, the legal reasoning behind the verdict and the final verdict itself. The names of the entities involved, as well as other identification details are anonymized throughout the document. Notably, the document is written by a human expert (i.e. the judge presiding over the case) who may have Table 2 : jurBERT NSP and MLM performance on the evaluation corpus restructured or rephrased the original arguments made by the involved parties. We note that RoJur is a private corpus that can be rented for a significant fee. We devise a second dataset, RoBanking, from rulings encountered in RoJur. Specifically, we extract common types of cases pertinent to the banking domain (e.g. administration fee litigations, enforcement appeals). From each ruling we only keep the summary of the arguments provided by the plaintiffs and the defendants, and a boolean value denoting which party was favoured in the final verdict.",
"cite_spans": [],
"ref_spans": [
{
"start": 602,
"end": 609,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "Finally, we use BRDCases, representing a collection of cases in which a particular Romanian bank (BRD Groupe Soci\u00e9t\u00e9 G\u00e9n\u00e9rale Romania) was directly involved. Each sample contains a section with the arguments provided by the plaintiff and a section for those provided by the defendant. The content of each section is extracted from the original lawsuit files. The plaintiff section is obtained through an OCR process and by employing heuristics to remove content that may be irrelevant to the case. Consequently, the text is likely to contain typographical errors and other artifacts. Moreover, there may be significant differences in writing style, stemming from the possible gap in juridical knowledge between the involved parties. However, this type of input is a realistic representation of the machine readable data that is available to the attorneys handling a specific case in a Romanian bank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "Statistics pertaining to each dataset are presented in Table 1 . The size of RoJur (160 GB as stored on disk) enabled us to pretrain a BERT model from scratch for the Romanian juridical domain. The remaining datasets, RoBanking and BRDCases, were used for downstream applications.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "For all intents and purposes we stick to the same model architecture and training procedure proposed by Devlin et al. (2019) . We opt to train two variants of jurBERT, namely jurBERT-base and jurBERTlarge, with Whole Word Masking (WWM), each with the same vocabulary of 33k tokens, for 40 epochs on a v3-8 TPU (kindly provided by Tensorflow Research Cloud 2 ). For efficiency reasons we train with sequence lengths of 128 for 90% of the training steps, while for the last 10% of steps we use sequence lengths of 512. Evaluation results on the pre-training corpus, RoJur, are depicted in Table 2 .",
"cite_spans": [
{
"start": 104,
"end": 124,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 587,
"end": 594,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model -jurBERT",
"sec_num": "4"
},
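A minimal sketch of the pre-training configuration described above, expressed with the HuggingFace transformers API purely for illustration; the paper itself pre-trains with the original BERT code on a v3-8 TPU. Only the ~33k vocabulary, the 512-token position limit and the base/large distinction come from the text; the remaining values are standard BERT defaults and should be treated as assumptions.

```python
# Illustrative sketch (not the authors' code): a BERT-base configuration with
# the ~33k juridical WordPiece vocabulary and both pre-training heads
# (MLM + NSP), whose evaluation scores are reported in Table 2.
from transformers import BertConfig, BertForPreTraining

config_base = BertConfig(
    vocab_size=33_000,            # ~33k token vocabulary (from the paper)
    hidden_size=768,              # jurBERT-base; jurBERT-large would use larger dimensions
    num_hidden_layers=12,
    num_attention_heads=12,
    max_position_embeddings=512,  # 128-token inputs for 90% of steps, 512 for the last 10%
)
model = BertForPreTraining(config_base)   # carries the MLM and NSP heads
print(sum(p.numel() for p in model.parameters()), "parameters")
```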
{
"text": "We evaluate our pre-trained model on the task of predicting whether the final verdict in a legal case is favourable to the plaintiff or the defendant. To this end, we leverage RoBanking and BRDCases. Despite the similarity in structure, there are significant differences between the two datasets, as presented in Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "We extensively explore different fine-tuning strategies, to enable efficient transfer learning and to prevent catastrophic forgetting. Inspired by previous approaches (Araci, 2019; Howard and Ruder, 2018; Sun et al., 2019) we investigate several strategies for dealing with long texts (e.g. using the first, middle or last part of the text), pooling type (i.e. <CLS>, mean or max), layer unfreezing (e.g. optimize all weights all throughout the training process, gradual unfreezing of layers), learning rate (i.e. constant, discriminative or slanted triangular learning rate), dropout value, final fully connected layers (sizes and numbers) and different combinations of mentioned strategies. We note that the setup for finding the best training strategy is iterative: we test all aspects of a given strategy, select the best and only then moving to the next step while retaining previous strategies. Henceforth, we refer to the best strategy 3 as the optimized strategy.",
"cite_spans": [
{
"start": 167,
"end": 180,
"text": "(Araci, 2019;",
"ref_id": "BIBREF0"
},
{
"start": 181,
"end": 204,
"text": "Howard and Ruder, 2018;",
"ref_id": "BIBREF18"
},
{
"start": 205,
"end": 222,
"text": "Sun et al., 2019)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Both downstream tasks are framed as binary classification tasks (i.e. given the arguments, who wins the case). The results are reported using k-fold cross validation, with 5 folds for RoBanking and 10 folds for BRDCases. Cross-entropy loss is minimized using the Adam optimizer (Kingma and Ba, 2015) as each model is trained 3 times. Finally we report the mean AUC and the standard deviation for each model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
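A small sketch of the evaluation protocol above: stratified k-fold cross validation (5 folds for RoBanking, 10 for BRDCases), repeated runs, and the mean and standard deviation of the AUC. The classifier here is a stand-in (logistic regression on random features); in the paper each model is a fine-tuned BERT trained with cross-entropy and Adam.

```python
# Sketch of the cross-validation and AUC reporting scheme (illustrative data).
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))        # stand-in for document representations
y = rng.integers(0, 2, size=200)      # 1 = verdict favours the plaintiff

def cross_validated_auc(X, y, n_folds=5, n_runs=3):
    """Return mean and std AUC over n_folds x n_runs train/test evaluations."""
    aucs = []
    for run in range(n_runs):
        skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=run)
        for train_idx, test_idx in skf.split(X, y):
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X[train_idx], y[train_idx])
            scores = clf.predict_proba(X[test_idx])[:, 1]
            aucs.append(roc_auc_score(y[test_idx], scores))
    return float(np.mean(aucs)), float(np.std(aucs))

mean_auc, std_auc = cross_validated_auc(X, y, n_folds=5, n_runs=3)
print(f"AUC = {mean_auc:.3f} +/- {std_auc:.3f}")
```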
{
"text": "The results on RoBanking, using only the plaintiff's plea, are presented in half of Table 3 introduces the considered baselines, namely two standard CNN and BI-LSTM models with an attention mechanism, followed by three variants of a state-of-the-art Romanian BERT model, RoBERT (Masala et al., 2020) . More details regarding the baselines can be found in Appendix B. Lastly, we introduce our proposed model with its two variants. Note that for the upper half of Table 3 we use a classic finetuning strategy as proposed by Devlin et al. (2019) . In the lower half of Table 3 we present results using the best finetuning strategy. First, we notice jurBERT consistently outperforms the considered baselines in any setting. One interesting observation is that while jurBERTlarge outperforms its base counterpart on the NSP and MLM tasks, it lags behind on downstream task performance irrespective of training strategy. Finally, incorporating the defendant's plea, leads to significant improvements for all considered models, as can be seen in Table 4 . In Table 5 we present the results on BRDCases. As this dataset is rather small and contains a significant amount of noisy data and very long texts, the challenge posed is significantly harder than in the case of RoBanking. Therefore, we notice the lower overall AUC score compared to the results for RoBanking. As this dataset is extremely small (only 149 entries) we introduce a simple Support Vector Machine (SVM) with string kernels as baseline. More details about the configuration used for the baseline can be found in Appendix B. In the first part of Table 5 we also present the model fine-tuned on RoBanking without further training on BRD-Cases. In the second half of Table 5 we present the results obtained by fine-tuning two different models on BRDCases, one only pre-trained and one pre-trained and further fine-tuned on RoBanking. While the two corpora differ in essence, fine-tuning on RoBanking greatly improves the downstream performance on BRDCases. Finally, we note the importance of the pre-training step (on RoJur corpus) as jurBERT consistently and significantly outperforms RoBERT for all considered models and experiments, but especially for the real-word use case (Table 5) .",
"cite_spans": [
{
"start": 278,
"end": 299,
"text": "(Masala et al., 2020)",
"ref_id": "BIBREF30"
},
{
"start": 522,
"end": 542,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 84,
"end": 91,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 462,
"end": 469,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 566,
"end": 573,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1039,
"end": 1046,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 1052,
"end": 1059,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 1606,
"end": 1613,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 1725,
"end": 1732,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 2236,
"end": 2245,
"text": "(Table 5)",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Lastly, we investigate the effectiveness of simple handcrafted features for the legal judgement prediction task. Handcrafted features include the county and the year for each case and, while already present in text, they were also added in the final decision layer in categorical form (one-hot encoding). Experiments with said features are marked accordingly (+ handcrafted) in Tables 3,4 and 5. In the case of RoBanking, the added features are not especially relevant, yielding mixed results: same or worse mean AUC when using only the plaintiff's plea; see Table 3 ), and slightly better results overall when using both the plaintiff's and the defendant's pleas (see Table 4 ). However, for BRDCases, handcrafted features provide a consistent improvement of around 2% absolute value for both RoBERT and jurBERT models (see Table 5 ). Our best model, jurBERT with added handcrafted features significantly outperforms the considered baseline with an almost 4% absolute value mean AUC increase.",
"cite_spans": [],
"ref_spans": [
{
"start": 559,
"end": 566,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 669,
"end": 676,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 825,
"end": 832,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
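A sketch of how the handcrafted features could enter the final decision layer as described above: the county and the year of a case are one-hot encoded and concatenated with the pooled BERT representation before the classification layers (sized (128, 64), the best configuration reported in Appendix A). The feature dimensions and the random tensors standing in for BERT outputs are illustrative assumptions, not the authors' exact setup.

```python
# Illustrative decision head: pooled BERT output + one-hot county/year features.
import torch
import torch.nn as nn

class VerdictHead(nn.Module):
    def __init__(self, bert_dim=768, n_counties=42, n_years=9):
        super().__init__()
        in_dim = bert_dim + n_counties + n_years
        self.classifier = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),              # logit: plaintiff vs. defendant
        )

    def forward(self, pooled, county_onehot, year_onehot):
        # Late concatenation of text representation and categorical features.
        features = torch.cat([pooled, county_onehot, year_onehot], dim=-1)
        return self.classifier(features)

# Dummy forward pass with random tensors standing in for BERT outputs.
head = VerdictHead()
pooled = torch.randn(4, 768)
county = torch.eye(42)[torch.randint(0, 42, (4,))]
year = torch.eye(9)[torch.randint(0, 9, (4,))]   # 2010-2018, as in RoJur
print(head(pooled, county, year).shape)          # torch.Size([4, 1])
```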
{
"text": "In this work, to the best of our knowledge, we employed the first study on applicability of stateof-the-art NLP methods for Romanian legal judgement prediction. We pre-trained, released and evaluated our models with promising results on two highly practical datasets, RoBanking and BRD-Cases. On the first dataset, that contains a humangenerated summary of key arguments, our model, jurBERT, outperforms the considered baselines. Turning to the second dataset, that contains all the original arguments of the involved parties, jurBERT is just slighty better than much simpler models, as it struggles to handle such long texts. Especially in this case, the limitations of BERT-like models with regards to the maximum input size are a significant factor that hampers their performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Our proposed methodology for legal judgment prediction on real world cases involves three steps. The first step is pre-training a BERT model on a general purpose collection of cases (in our case RoJur). The second step includes further training on a subset of the previous corpus (in our case RoBanking), in which the model learns to predict the verdict having access only to the summarized arguments of the involved parties. The final and the most important step in our work is training and evaluating on the industry-provided cases. One of our key findings is that the second step in this methodology is crucial for obtaining good results for legal judgment prediction. We emphasize that this methodology is language independent and can easily be applied to similar tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Major improvement areas for our approach are the development and integration of more refined handcrafted features (e.g. the type of court or the identity of the judge) and tackling the problem of long texts that greatly exceed the maximum input size of our model. For the latter, lines of research include summarization of long texts or employing methods of increasing the inherent sequence length limit of transformer models (Zaheer et al., 2020; Beltagy et al., 2020 ",
"cite_spans": [
{
"start": 426,
"end": 447,
"text": "(Zaheer et al., 2020;",
"ref_id": null
},
{
"start": 448,
"end": 468,
"text": "Beltagy et al., 2020",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Below are the details regarding the process of searching for the best strategy:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strategy search",
"sec_num": null
},
{
"text": "\u2022 Dealing with long texts, how to trim sequences longer than 512 tokens: first tokens, last tokens, first 128 tokens with the last 382 tokens, first 512 tokens aggregated with last 512 tokens, or first 512 tokens aggregated with middle 512 tokens and last 512 tokens. Best strategy for this step was: first 512 tokens concatenated with the last 512 tokens. This leads to a final representation (after BERT layer) of size 1,536 for base model and 2,048 for large model. Aggregation methods include concatenation, mean and max pooling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strategy search",
"sec_num": null
},
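A minimal sketch of the long-text strategy selected in the bullet above: keep the first 512 and the last 512 tokens of a document, encode each window separately, and concatenate the pooled vectors (768 + 768 = 1,536 dimensions for the base model). Tokenisation and the actual encoder call are stubbed out; only the windowing logic is shown.

```python
# Head/tail windowing for documents longer than the 512-token BERT limit.
def head_tail_windows(token_ids, window=512):
    """Return (first `window` tokens, last `window` tokens) of a document."""
    if len(token_ids) <= window:
        return token_ids, token_ids        # short texts: both windows coincide
    return token_ids[:window], token_ids[-window:]

doc = list(range(1300))                    # stand-in for a long token sequence
head, tail = head_tail_windows(doc)
print(len(head), len(tail))                # 512 512
# pooled = concat(bert(head).pooled, bert(tail).pooled)  -> 1,536-dim vector
```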
{
"text": "\u2022 Pooling type: <CLS> token, mean or max pooling. Best strategy for this step: <CLS> token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strategy search",
"sec_num": null
},
{
"text": "\u2022 BERT-layer unfreezing: training the full model from the first step, training only the classification layers for a number of epochs followed by training the whole model for another number of epochs, gradually unfreezing a number of layers per epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strategy search",
"sec_num": null
},
{
"text": "\u2022 Learning rate: constant learning rate of 1e-5, 2e-5 or 5e-5, discriminative learning rate with decay factor of 0.95 or 0.90, slanted triangular learning rate with maximum learning rate of 1e-4, 2.5e-5 or 5e-5, cutout fraction of 0.1 and ratio of 32. The best strategy for this step: slanted triangular learning rate with maximum learning rate of 2e-5, cutout 0.1 and ratio of 32. Table 6 : Detailed results on RoBanking using only the plea of the plaintiff.",
"cite_spans": [],
"ref_spans": [
{
"start": 382,
"end": 389,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Strategy search",
"sec_num": null
},
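A sketch of the slanted triangular learning rate schedule (Howard and Ruder, 2018) selected in the bullet above, with the reported settings (maximum learning rate 2e-5, cut fraction 0.1, ratio 32); T is the total number of fine-tuning steps and is illustrative here.

```python
# Slanted triangular learning rate: linear warm-up for the first cut_frac of
# training, then a long linear decay down to lr_max / ratio.
def slanted_triangular_lr(t, T, lr_max=2e-5, cut_frac=0.1, ratio=32):
    cut = int(T * cut_frac)
    if t < cut:
        p = t / cut                                   # warm-up phase
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))  # decay phase
    return lr_max * (1 + p * (ratio - 1)) / ratio

T = 1000
for step in (0, 50, 100, 500, 999):
    print(step, f"{slanted_triangular_lr(step, T):.2e}")
```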
{
"text": "\u2022 Dropout value applied after BERT-layer: dropout values of 0.1, 0.25 or 0.5. The best value was obtained using a dropout value of 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strategy search",
"sec_num": null
},
{
"text": "\u2022 Configuration of fully connected layers after 128) or (128, 64) or (256, 128, 64) or (128, 64, 32) . Best configuration is (128,64).",
"cite_spans": [
{
"start": 48,
"end": 61,
"text": "128) or (128,",
"ref_id": null
},
{
"start": 62,
"end": 74,
"text": "64) or (256,",
"ref_id": null
},
{
"start": 75,
"end": 79,
"text": "128,",
"ref_id": null
},
{
"start": 80,
"end": 92,
"text": "64) or (128,",
"ref_id": null
},
{
"start": 93,
"end": 96,
"text": "64,",
"ref_id": null
},
{
"start": 97,
"end": 100,
"text": "32)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Strategy search",
"sec_num": null
},
{
"text": "For more details and results for each individual component, refer to Table 6. ",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 77,
"text": "Table 6.",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Strategy search",
"sec_num": null
},
{
"text": "https://www.tensorflow.org/tfrc 3 More details about the process of finding the best and final training configuration can be found in Appendix A",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "\u2022 SVM with string kernels uses the combination of intersection, presence and spectrum string kernels on 5-7 character n-grams\u2022 CNN with 300 feature maps of length 6, sequence lengths of 800 words, Adam Optimizer, learning rate = 0.001, dropout = 0.3\u2022 BI-LSTM model comprises a BI-LSTM encoder with a global attention mechanism and a fully connected layer with 64 neurons. For the BI-LSTM encoder we used a dropout layer with 0.2 probability, and for the fully connected layer 0.1 dropout probability. The maximum sequence length is set to 800 and the input consists of Word2Vec embeddings of size 100, pretrained on the data reserved for training. We used Adam optimizer with default parameters (0.01 learning rate).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Baselines Hyperparameters",
"sec_num": null
}
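An illustrative sketch of the character n-gram string kernels used by the SVM baseline described above (presence, spectrum/count and intersection kernels over 5-7 character n-grams). This is not the authors' implementation; in practice the normalised, summed kernel matrix would be passed to an SVM with a precomputed kernel, and the example strings below are hypothetical.

```python
# Character n-gram string kernels between two texts (presence, spectrum,
# intersection), computed over all 5- to 7-grams.
from collections import Counter

def char_ngrams(text, n_min=5, n_max=7):
    return Counter(text[i:i + n] for n in range(n_min, n_max + 1)
                   for i in range(len(text) - n + 1))

def kernels(a, b):
    ga, gb = char_ngrams(a), char_ngrams(b)
    shared = ga.keys() & gb.keys()
    presence = sum(1 for g in shared)                        # presence kernel
    spectrum = sum(ga[g] * gb[g] for g in shared)            # spectrum (count) kernel
    intersection = sum(min(ga[g], gb[g]) for g in shared)    # intersection kernel
    return presence, spectrum, intersection

print(kernels("reclamanta solicita anularea comisionului de administrare",
              "reclamantul solicita anularea clauzei"))
```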
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Finbert: Financial sentiment analysis with pre-trained language models",
"authors": [
{
"first": "Dogu",
"middle": [],
"last": "Araci",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.10063"
]
},
"num": null,
"urls": [],
"raw_text": "Dogu Araci. 2019. Finbert: Financial sentiment analy- sis with pre-trained language models. arXiv preprint arXiv:1908.10063.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "SciB-ERT: A pretrained language model for scientific text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1371"
]
},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A pretrained language model for scientific text.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "3615--3620",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3615- 3620, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Longformer: The long-document transformer",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.05150"
]
},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Using classification to support legal knowledge engineers in the eunomos legal document management system",
"authors": [
{
"first": "Guido",
"middle": [],
"last": "Boella",
"suffix": ""
},
{
"first": "Luigi",
"middle": [
"Di"
],
"last": "Caro",
"suffix": ""
},
{
"first": "Llio",
"middle": [],
"last": "Humphreys",
"suffix": ""
}
],
"year": 2011,
"venue": "Fifth international workshop on Juris-informatics (JURISIN)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guido Boella, Luigi Di Caro, and Llio Humphreys. 2011. Using classification to support legal knowl- edge engineers in the eunomos legal document man- agement system. In Fifth international workshop on Juris-informatics (JURISIN).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Jared",
"middle": [
"D"
],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Askell",
"suffix": ""
},
{
"first": "Sandhini",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Ariel",
"middle": [],
"last": "Herbert-Voss",
"suffix": ""
},
{
"first": "Gretchen",
"middle": [],
"last": "Krueger",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Henighan",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Ramesh",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Ziegler",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Clemens",
"middle": [],
"last": "Winter",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Hesse",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Sigler",
"suffix": ""
},
{
"first": "Mateusz",
"middle": [],
"last": "Litwin",
"suffix": ""
}
],
"year": null,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "1877--1901",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert- Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "MO-ROCO: The Moldavian and Romanian dialectal corpus",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Butnaru",
"suffix": ""
},
{
"first": "Radu Tudor",
"middle": [],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "688--698",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1068"
]
},
"num": null,
"urls": [],
"raw_text": "Andrei Butnaru and Radu Tudor Ionescu. 2019. MO- ROCO: The Moldavian and Romanian dialectal cor- pus. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 688-698, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unibuckernel reloaded: First place in arabic dialect identification for the second year in a row",
"authors": [
{
"first": "Andrei",
"middle": [
"M"
],
"last": "Butnaru",
"suffix": ""
},
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2018,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei M. Butnaru and Radu Tudor Ionescu. 2018. Unibuckernel reloaded: First place in arabic dialect identification for the second year in a row. CoRR, abs/1805.04876.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Spanish pre-trained bert model and evaluation data",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Ca\u00f1ete",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Chaperon",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Fuentes",
"suffix": ""
},
{
"first": "Jou-Hui",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Hojin",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Ca\u00f1ete, Gabriel Chaperon, Rodrigo Fuentes, Jou- Hui Ho, Hojin Kang, and Jorge P\u00e9rez. 2020. Span- ish pre-trained bert model and evaluation data. In PML4DC at ICLR 2020.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos",
"authors": [
{
"first": "Ilias",
"middle": [],
"last": "Chalkidis",
"suffix": ""
},
{
"first": "Manos",
"middle": [],
"last": "Fergadiotis",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "2898--2904",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.261"
]
},
"num": null,
"urls": [],
"raw_text": "Ilias Chalkidis, Manos Fergadiotis, Prodromos Malaka- siotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: The muppets straight out of law school. In Findings of the Association for Com- putational Linguistics: EMNLP 2020, pages 2898- 2904, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automated essay scoring with string kernels and word embeddings",
"authors": [
{
"first": "M\u0203d\u0203lina",
"middle": [],
"last": "Cozma",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Butnaru",
"suffix": ""
},
{
"first": "Radu Tudor",
"middle": [],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "503--509",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2080"
]
},
"num": null,
"urls": [],
"raw_text": "M\u0203d\u0203lina Cozma, Andrei Butnaru, and Radu Tudor Ionescu. 2018. Automated essay scoring with string kernels and word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 503-509, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bertje: A dutch bert model",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Wietse De Vries",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Van Cranenburgh",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Gertjan Van Noord",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.09582"
]
},
"num": null,
"urls": [],
"raw_text": "Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, and Malvina Nissim. 2019. Bertje: A dutch bert model. arXiv preprint arXiv:1912.09582.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Robbert: a dutch roberta-based language model",
"authors": [
{
"first": "Pieter",
"middle": [],
"last": "Delobelle",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Winters",
"suffix": ""
},
{
"first": "Bettina",
"middle": [],
"last": "Berendt",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.06286"
]
},
"num": null,
"urls": [],
"raw_text": "Pieter Delobelle, Thomas Winters, and Bettina Berendt. 2020. Robbert: a dutch roberta-based language model. arXiv preprint arXiv:2001.06286.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The birth of Romanian BERT",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Dumitrescu",
"suffix": ""
},
{
"first": "Andrei-Marius",
"middle": [],
"last": "Avram",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "4324--4328",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.387"
]
},
"num": null,
"urls": [],
"raw_text": "Stefan Dumitrescu, Andrei-Marius Avram, and Sampo Pyysalo. 2020. The birth of Romanian BERT. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 4324-4328, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Aysu Ezen-Can. 2020. A comparison of lstm and bert for small corpus",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.05451"
]
},
"num": null,
"urls": [],
"raw_text": "Aysu Ezen-Can. 2020. A comparison of lstm and bert for small corpus. arXiv preprint arXiv:2009.05451.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Combining different summarization techniques for legal text",
"authors": [
{
"first": "Filippo",
"middle": [],
"last": "Galgani",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Compton",
"suffix": ""
},
{
"first": "Achim",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the workshop on innovative hybrid approaches to the processing of textual data",
"volume": "",
"issue": "",
"pages": "115--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Filippo Galgani, Paul Compton, and Achim Hoffmann. 2012. Combining different summarization tech- niques for legal text. In Proceedings of the workshop on innovative hybrid approaches to the processing of textual data, pages 115-123.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Extractive summarisation of legal texts",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Hachey",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
}
],
"year": 2006,
"venue": "Artificial Intelligence and Law",
"volume": "14",
"issue": "4",
"pages": "305--345",
"other_ids": {
"DOI": [
"10.1007/s10506-007-9039-z"
]
},
"num": null,
"urls": [],
"raw_text": "Ben Hachey and Claire Grover. 2006. Extractive sum- marisation of legal texts. Artificial Intelligence and Law, 14(4):305-345.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "328--339",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1031"
]
},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "String kernels for native language identification: Insights from behind the curtains",
"authors": [
{
"first": "Marius",
"middle": [],
"last": "Radu Tudor Ionescu",
"suffix": ""
},
{
"first": "Aoife",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cahill",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics",
"volume": "42",
"issue": "3",
"pages": "491--525",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00256"
]
},
"num": null,
"urls": [],
"raw_text": "Radu Tudor Ionescu, Marius Popescu, and Aoife Cahill. 2016. String kernels for native language identifica- tion: Insights from behind the curtains. Computa- tional Linguistics, 42(3):491-525.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A general approach for predicting the behavior of the supreme court of the united states",
"authors": [
{
"first": "",
"middle": [],
"last": "Daniel Martin",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Katz",
"suffix": ""
},
{
"first": "I",
"middle": [
"I"
],
"last": "Bommarito",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Blackman",
"suffix": ""
}
],
"year": 2017,
"venue": "PLOS ONE",
"volume": "12",
"issue": "4",
"pages": "1--18",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0174698"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Martin Katz, Michael J. Bommarito, II, and Josh Blackman. 2017. A general approach for pre- dicting the behavior of the supreme court of the united states. PLOS ONE, 12(4):1-18.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "RACE: Large-scale ReAding comprehension dataset from examinations",
"authors": [
{
"first": "Guokun",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1082"
]
},
"num": null,
"urls": [],
"raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAd- ing comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785-794, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "BERT might be overkill: A tiny but effective biomedical entity linker based on residual convolutional neural networks",
"authors": [
{
"first": "Tuan",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tuan Lai, Heng Ji, and ChengXiang Zhai. 2021. BERT might be overkill: A tiny but effective biomedical entity linker based on residual convolutional neural networks. CoRR, abs/2109.02237.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "FlauBERT: Unsupervised language model pre-training for French",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Vial",
"suffix": ""
},
{
"first": "Jibril",
"middle": [],
"last": "Frej",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Segonne",
"suffix": ""
},
{
"first": "Maximin",
"middle": [],
"last": "Coavoux",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Lecouteux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Crabb\u00e9",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "Didier",
"middle": [],
"last": "Schwab",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "2479--2490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hang Le, Lo\u00efc Vial, Jibril Frej, Vincent Segonne, Max- imin Coavoux, Benjamin Lecouteux, Alexandre Al- lauzen, Benoit Crabb\u00e9, Laurent Besacier, and Didier Schwab. 2020. FlauBERT: Unsupervised language model pre-training for French. In Proceedings of the 12th Language Resources and Evaluation Con- ference, pages 2479-2490, Marseille, France. Euro- pean Language Resources Association.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "BioBERT: a pretrained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/btz682"
]
},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Text classification using string kernels",
"authors": [
{
"first": "Huma",
"middle": [],
"last": "Lodhi",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Saunders",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Watkins",
"suffix": ""
}
],
"year": 2002,
"venue": "J. Mach. Learn. Res",
"volume": "2",
"issue": "",
"pages": "419--444",
"other_ids": {
"DOI": [
"10.1162/153244302760200687"
]
},
"num": null,
"urls": [],
"raw_text": "Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. J. Mach. Learn. Res., 2:419-444.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Text classification using string kernels",
"authors": [
{
"first": "Huma",
"middle": [],
"last": "Lodhi",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"J C H"
],
"last": "Watkins",
"suffix": ""
}
],
"year": 2000,
"venue": "Advances in Neural Information Processing Systems 13, Papers from Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "563--569",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huma Lodhi, John Shawe-Taylor, Nello Cristianini, and Christopher J. C. H. Watkins. 2000. Text classification using string kernels. In Advances in Neural Information Processing Systems 13, Papers from Neural Information Processing Systems (NIPS) 2000, Denver, CO, USA, pages 563-569. MIT Press.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "\u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Pedro Javier Ortiz",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "Yoann",
"middle": [],
"last": "Dupont",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7203--7219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Or- tiz Su\u00e1rez, Yoann Dupont, Laurent Romary, \u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203-7219, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "RoBERT -a Romanian BERT model",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Masala",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ruseti",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Dascalu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6626--6637",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.581"
]
},
"num": null,
"urls": [],
"raw_text": "Mihai Masala, Stefan Ruseti, and Mihai Dascalu. 2020. RoBERT -a Romanian BERT model. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 6626-6637, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Help me understand this conversation: Methods of identifying implicit links between cscl contributions",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Masala",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ruseti",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Gutu-Robu",
"suffix": ""
},
{
"first": "Traian",
"middle": [],
"last": "Rebedea",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Dascalu",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Trausan-Matu",
"suffix": ""
}
],
"year": 2018,
"venue": "Lifelong Technology-Enhanced Learning",
"volume": "",
"issue": "",
"pages": "482--496",
"other_ids": {
"DOI": [
"https://link.springer.com/chapter/10.1007/978-3-319-98572-5_37"
]
},
"num": null,
"urls": [],
"raw_text": "Mihai Masala, Stefan Ruseti, Gabriel Gutu-Robu, Tra- ian Rebedea, Mihai Dascalu, and Stefan Trausan- Matu. 2018. Help me understand this conversation: Methods of identifying implicit links between cscl contributions. In Lifelong Technology-Enhanced Learning, pages 482-496, Cham. Springer Interna- tional Publishing.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Judicial decisions of the european court of human rights: Looking into the crystal ball",
"authors": [
{
"first": "Masha",
"middle": [],
"last": "Medvedeva",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Vols",
"suffix": ""
},
{
"first": "Martijn",
"middle": [],
"last": "Wieling",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Conference on Empirical Legal Studies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masha Medvedeva, Michel Vols, and Martijn Wieling. 2018. Judicial decisions of the european court of human rights: Looking into the crystal ball. In Pro- ceedings of the Conference on Empirical Legal Stud- ies.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Automatic detection of arguments in legal texts",
"authors": [
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Boiy",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Mochales Palau",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Reed",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 11th International Conference on Artificial Intelligence and Law, ICAIL '07",
"volume": "",
"issue": "",
"pages": "225--230",
"other_ids": {
"DOI": [
"10.1145/1276318.1276362"
]
},
"num": null,
"urls": [],
"raw_text": "Marie-Francine Moens, Erik Boiy, Raquel Mochales Palau, and Chris Reed. 2007. Automatic detection of arguments in legal texts. In Proceedings of the 11th International Conference on Artificial Intelli- gence and Law, ICAIL '07, page 225-230, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Argumentation mining: The detection, classification and structure of arguments in text",
"authors": [
{
"first": "Raquel",
"middle": [],
"last": "Mochales Palau",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th International Conference on Artificial Intelligence and Law, ICAIL '09",
"volume": "",
"issue": "",
"pages": "98--107",
"other_ids": {
"DOI": [
"10.1145/1568234.1568246"
]
},
"num": null,
"urls": [],
"raw_text": "Raquel Mochales Palau and Marie-Francine Moens. 2009. Argumentation mining: The detection, clas- sification and structure of arguments in text. In Proceedings of the 12th International Conference on Artificial Intelligence and Law, ICAIL '09, page 98-107, New York, NY, USA. Association for Com- puting Machinery.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Know what you don't know: Unanswerable questions for SQuAD",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "784--789",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2124"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for SQuAD. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784- 789, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "CoQA: A conversational question answering challenge",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "249--266",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00266"
]
},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Com- putational Linguistics, 7:249-266.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Exploring the use of text classification in the legal domain",
"authors": [
{
"first": "Octavia-Maria",
"middle": [],
"last": "Sulea",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Mihaela",
"middle": [],
"last": "Vela",
"suffix": ""
},
{
"first": "Liviu",
"middle": [
"P"
],
"last": "Dinu",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.09306"
]
},
"num": null,
"urls": [],
"raw_text": "Octavia-Maria Sulea, Marcos Zampieri, Shervin Mal- masi, Mihaela Vela, Liviu P Dinu, and Josef Van Genabith. 2017. Exploring the use of text classification in the legal domain. arXiv preprint arXiv:1710.09306.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Predicting the law area and decisions of French Supreme Court cases",
"authors": [
{
"first": "Octavia-Maria",
"middle": [],
"last": "\u015eulea",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Mihaela",
"middle": [],
"last": "Vela",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "van Genabith",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "716--722",
"other_ids": {
"DOI": [
"10.26615/978-954-452-049-6_092"
]
},
"num": null,
"urls": [],
"raw_text": "Octavia-Maria\u015eulea, Marcos Zampieri, Mihaela Vela, and Josef van Genabith. 2017. Predicting the law area and decisions of French Supreme Court cases. In Proceedings of the International Confer- ence Recent Advances in Natural Language Process- ing, RANLP 2017, pages 716-722, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "How to fine-tune bert for text classification?",
"authors": [
{
"first": "Chi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yige",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Chinese Computational Linguistics",
"volume": "",
"issue": "",
"pages": "194--206",
"other_ids": {
"DOI": [
"https://link.springer.com/chapter/10.1007/978-3-030-32381-3_16"
]
},
"num": null,
"urls": [],
"raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? In Chinese Computational Linguistics, pages 194- 206, Cham. Springer International Publishing.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Multilingual is not enough: Bert for finnish",
"authors": [
{
"first": "Antti",
"middle": [],
"last": "Virtanen",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Ilo",
"suffix": ""
},
{
"first": "Jouni",
"middle": [],
"last": "Luoma",
"suffix": ""
},
{
"first": "Juhani",
"middle": [],
"last": "Luotolahti",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.07076"
]
},
"num": null,
"urls": [],
"raw_text": "Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: Bert for finnish. arXiv preprint arXiv:1912.07076.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Superglue: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language un- derstanding systems. In Advances in Neural Infor- mation Processing Systems, volume 32. Curran As- sociates, Inc.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural In- formation Processing Systems, volume 32. Curran Associates, Inc.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "CoSQL: A conversational text-to-SQL challenge towards crossdomain natural language interfaces to databases",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Heyang",
"middle": [],
"last": "Er",
"suffix": ""
},
{
"first": "Suyi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Xi",
"middle": [
"Victoria"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Yi",
"middle": [
"Chern"
],
"last": "Tan",
"suffix": ""
},
{
"first": "Tianze",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Zihan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Youxuan",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Michihiro",
"middle": [],
"last": "Yasunaga",
"suffix": ""
},
{
"first": "Sungrok",
"middle": [],
"last": "Shim",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fabbri",
"suffix": ""
},
{
"first": "Zifan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Luyao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yuwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shreya",
"middle": [],
"last": "Dixit",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Lasecki",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1962--1979",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1204"
]
},
"num": null,
"urls": [],
"raw_text": "Tao Yu, Rui Zhang, Heyang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vin- cent Zhang, Caiming Xiong, Richard Socher, Walter Lasecki, and Dragomir Radev. 2019a. CoSQL: A conversational text-to-SQL challenge towards cross- domain natural language interfaces to databases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1962- 1979, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Spider: A largescale human-labeled dataset for complex and crossdomain semantic parsing and text-to-SQL task",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Michihiro",
"middle": [],
"last": "Yasunaga",
"suffix": ""
},
{
"first": "Dongxu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zifan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qingning",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Shanelle",
"middle": [],
"last": "Roman",
"suffix": ""
},
{
"first": "Zilin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3911--3921",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1425"
]
},
"num": null,
"urls": [],
"raw_text": "Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large- scale human-labeled dataset for complex and cross- domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911-3921, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"html": null,
"text": "Dataset statistics. Size is presented in terms of jurBERT tokens with mean and median values separated by /.",
"content": "<table><tr><td>and predict cases from the European Court of Hu-</td></tr><tr><td>man Rights. Katz et al. (2017) use random forest</td></tr><tr><td>classifiers over handcrafted features (based rather</td></tr><tr><td>on the context of the case than on the textual argu-</td></tr><tr><td>ments) to predict the ruling of the Supreme Court</td></tr><tr><td>of the United States. Chalkidis et al. (2020) investi-</td></tr><tr><td>gate the usage of BERT models on multiple legal</td></tr><tr><td>corpora.</td></tr></table>",
"num": null
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "The upper",
"content": "<table><tr><td>Model</td><td>Strategy</td><td colspan=\"2\">Mean AUC Std AUC</td></tr><tr><td>CNN</td><td>-</td><td>79.60</td><td>*</td></tr><tr><td>BI-LSTM</td><td>-</td><td>80.99</td><td>0.26</td></tr><tr><td>RoBERT-small</td><td>classic</td><td>68.81</td><td>0.13</td></tr><tr><td>RoBERT-base</td><td>classic</td><td>78.52</td><td>0.09</td></tr><tr><td>RoBERT-large</td><td>classic</td><td>79.43</td><td>0.28</td></tr><tr><td>jurBERT-base</td><td>classic</td><td>81.01</td><td>0.19</td></tr><tr><td>jurBERT-large</td><td>classic</td><td>80.38</td><td>0.32</td></tr><tr><td>RoBERT-small</td><td>optimized</td><td>70.54</td><td>0.28</td></tr><tr><td>RoBERT-base</td><td>optimized</td><td>79.74</td><td>0.21</td></tr><tr><td colspan=\"2\">+ handcrafted -</td><td>79.82</td><td>0.11</td></tr><tr><td>RoBERT-large</td><td>optimized</td><td>76.53</td><td>5.43</td></tr><tr><td>jurBERT-base</td><td>optimized</td><td>81.47</td><td>0.18</td></tr><tr><td colspan=\"2\">+ handcrafted -</td><td>81.40</td><td>0.18</td></tr><tr><td>jurBERT-large</td><td>optimized</td><td>78.38</td><td>1.77</td></tr></table>",
"num": null
},
"TABREF4": {
"type_str": "table",
"html": null,
"text": "Results on RoBanking using only the plea of the plaintiff.",
"content": "<table><tr><td>Model</td><td>Strategy</td><td colspan=\"2\">Mean AUC Std AUC</td></tr><tr><td>BI-LSTM</td><td>-</td><td>84.60</td><td>0.59</td></tr><tr><td>RoBERT-base</td><td>optimized</td><td>84.40</td><td>0.26</td></tr><tr><td colspan=\"2\">+ handcrafted -</td><td>84.43</td><td>0.15</td></tr><tr><td>jurBERT-base</td><td>optimized</td><td>86.63</td><td>0.23</td></tr><tr><td colspan=\"2\">+ handcrafted -</td><td>86.73</td><td>0.22</td></tr><tr><td>jurBERT-large</td><td>classic</td><td>82.04</td><td>0.64</td></tr></table>",
"num": null
},
"TABREF5": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table/>",
"num": null
},
"TABREF7": {
"type_str": "table",
"html": null,
"text": "Results on BRDCases. \u2020 denotes models that were first finetuned on RoBanking.* marks models with no further training on BRDCases, inference-only.",
"content": "<table/>",
"num": null
},
"TABREF8": {
"type_str": "table",
"html": null,
"text": "). Radev. 2019b. SParC: Cross-domain semantic parsing in context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4511-4523, Florence, Italy. Association for Computational Linguistics.",
"content": "<table><tr><td>Dragomir Manzil Zaheer, Guru Guruganesh, Kumar Avinava</td></tr><tr><td>Dubey, Joshua Ainslie, Chris Alberti, Santiago On-</td></tr><tr><td>tanon, Philip Pham, Anirudh Ravula, Qifan Wang,</td></tr><tr><td>Li Yang, et al. 2020. Big bird: Transformers for</td></tr><tr><td>longer sequences. Advances in Neural Information</td></tr><tr><td>Processing Systems, 33.</td></tr><tr><td>Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang,</td></tr><tr><td>Maosong Sun, and Qun Liu. 2019. ERNIE: En-</td></tr><tr><td>hanced language representation with informative en-</td></tr><tr><td>tities. In Proceedings of the 57th Annual Meet-</td></tr><tr><td>ing of the Association for Computational Linguis-</td></tr><tr><td>tics, pages 1441-1451, Florence, Italy. Association</td></tr><tr><td>for Computational Linguistics.</td></tr></table>",
"num": null
}
}
}
}