{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:47:01.393938Z" }, "title": "Automated Extraction of Sentencing Decisions from Court Cases in the Hebrew Language", "authors": [ { "first": "Mohr", "middle": [], "last": "Wenger", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hebrew University of Jerusalem", "location": { "settlement": "Jerusalem", "country": "Israel" } }, "email": "mohr.wenger@mail.huji.ac.il" }, { "first": "Tom", "middle": [], "last": "Kalir", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hebrew University of Jerusalem", "location": { "settlement": "Jerusalem", "country": "Israel" } }, "email": "" }, { "first": "Noga", "middle": [], "last": "Berger", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Association of Rape Crisis Centers", "location": { "country": "Israel" } }, "email": "" }, { "first": "\u2020", "middle": [], "last": "Carmit", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Association of Rape Crisis Centers", "location": { "country": "Israel" } }, "email": "" }, { "first": "Klar", "middle": [], "last": "Chalamish", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Renana", "middle": [], "last": "Keydar", "suffix": "", "affiliation": {}, "email": "renana.keydar@mail.huji.ac.il" }, { "first": "Gabriel", "middle": [], "last": "Stanovsky", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hebrew University of Jerusalem", "location": { "settlement": "Jerusalem", "country": "Israel" } }, "email": "gabriel.stanovsky@mail.huji.ac.il" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present the task of Automated Punishment Extraction (APE) in sentencing decisions from criminal court cases in Hebrew. Addressing APE will enable the identification of sentenc ing patterns and constitute an important step ping stone for many follow up legal NLP ap plications in Hebrew, including the prediction of sentencing decisions. We curate a dataset of sexual assault sentencing decisions and a manuallyannotated evaluation dataset, and implement rulebased and supervised models. We find that while supervised models can iden tify the sentence containing the punishment with good accuracy, rulebased approaches outperform them on the full APE task. We con clude by presenting a first analysis of sentenc ing patterns in our dataset and analyze com mon models' errors, indicating avenues for fu ture work, such as distinguishing between pro bation and actual imprisonment punishment. We will make all our resources available upon request, including data, annotation, and first benchmark models.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We present the task of Automated Punishment Extraction (APE) in sentencing decisions from criminal court cases in Hebrew. Addressing APE will enable the identification of sentenc ing patterns and constitute an important step ping stone for many follow up legal NLP ap plications in Hebrew, including the prediction of sentencing decisions. We curate a dataset of sexual assault sentencing decisions and a manuallyannotated evaluation dataset, and implement rulebased and supervised models. We find that while supervised models can iden tify the sentence containing the punishment with good accuracy, rulebased approaches outperform them on the full APE task. 
We conclude by presenting a first analysis of sentencing patterns in our dataset and analyze common models' errors, indicating avenues for future work, such as distinguishing between probation and actual imprisonment punishment. We will make all our resources available upon request, including data, annotation, and first benchmark models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The legal world is rife with data, from constitutions and national legislation to legal cases and court decisions. Much of the legal data, however, comes in unstructured formats that pose critical challenges for extracting and analyzing it in systematic ways. In addition, different countries vary in their legal systems, norms and conventions, further compounding the challenges in developing multilingual approaches (Peruginelli, 2009).", "cite_spans": [ { "start": 421, "end": 440, "text": "(Peruginelli, 2009)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While legal NLP is gaining traction in recent years (Van Gog and Van Engers, 2001; Dale, 2019; Zhong et al., 2020), relatively little attention has been given to low-resource settings outside of the English language, where the availability of tools such as large pretrained language models, syntactic parsers, or named entity recognizers is limited.", "cite_spans": [ { "start": 52, "end": 113, "text": "(Van Gog and Van Engers, 2001; Dale, 2019; Zhong et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, conducted as part of an ongoing collaboration with The Association of Rape Crisis Centers in Israel (ARCCI), we focus specifically on the task of Automated Punishment Extraction (APE) in sexual assault cases in Hebrew within Israeli court sentencing decisions (see the formal task definition in Section 2). Punishment decisions are of special importance as they constitute a prerequisite for many other downstream tasks in legal NLP and digital humanities, such as legal prediction of judicial decisions (Aletras et al., 2016; Branting et al., 2021) and detecting biases in court decisions (Pinto et al., 2020). APE is difficult in the Israeli court system. This is due to the fact that sentencing decisions for criminal offences are reported, in natural language idiomatic to the legal field, in the written sentencing decision. We focus on sexual assault cases due to the legal and public debate around claims of lenient punishments (Phillips and Chagnon, 2020), which in the absence of systematic, rigorous data collection cannot be empirically examined and assessed. This worldwide debate requires legal NLP methods in multiple languages and legal systems. 1 To address this challenge, we begin by curating a dataset of sexual assault sentencing decisions from the years 1990-2021 and manually annotate the punishment in a subset of 100 cases with the help of legal experts on our team and in collaboration with ARCCI (Section 3). Following, in Section 4, we use this data to build several models for the APE task, including rule-based and supervised methods, based on linguistically and semantically informed features, setting first benchmark results on the APE task in Hebrew.
We thoroughly analyze our models' performance in Section 5, finding that they are capable of extracting the correct punishment in 68% of the cases, while the best model's average error is roughly 5 months, attesting to the difficulty of the task. Based on our models, we find that in our data the median predicted punishment is 3 years, while more than a third of the punishments are below 15 months. Although these figures are obtained on a medium-sized corpus, using automatic measures which do not account for the type of offense, we note that they are well below the maximum punishments for sexual offenses as determined by the Israeli legislator, which range between 2-7 years for indecent acts and sodomy and up to 20 years for aggravated rape.", "cite_spans": [ { "start": 520, "end": 564, "text": "(Aletras et al., 2016; Branting et al., 2021)", "ref_id": null }, { "start": 605, "end": 625, "text": "(Pinto et al., 2020)", "ref_id": null }, { "start": 954, "end": 982, "text": "(Phillips and Chagnon, 2020)", "ref_id": "BIBREF9" }, { "start": 1181, "end": 1182, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We conclude by analyzing common error patterns in our models. For example, we find that models often tend to erroneously extract a probation imprisonment punishment instead of the actual imprisonment punishment. Distinguishing between the two is left as an interesting avenue for future work. To the best of our knowledge, this is the first examination of automatic punishment extraction in the Hebrew language. It includes data collection, annotation, and benchmark models. We hope it will spur further research into this important task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We define the task of APE as the process of automatically extracting the punishment from the sentencing decision. In the Israeli legal system, the punishment is given in a separate sentencing decision, following a plea bargain or a guilty verdict. In the sentencing decision, the court can impose different types of punishment: imprisonment, probation, or community service. In addition, the court can also impose fines and order the defendant to pay restitution to the victim. We consider all of the punitive elements mentioned as part of the APE process. However, in this work, we focus on the extraction of the actual imprisonment (i.e. jail time). Given the text of a sentencing decision, we first need to distinguish between the different types of punishment (imprisonment, probation, community work, fines, etc.); then, we need to extract only the sentence that relates the duration of the actual imprisonment penalty, i.e. the number of months or years in prison imposed on the defendant. This is particularly challenging since the court decision often includes both the duration of the actual imprisonment, as well as the duration of the conditional imprisonment (i.e. probation). Both are referred to in Hebrew using the same term ``Ma'asar'' (lit. imprisonment), and indicated by the same units of months and years.
For example (translation and emphasis by the authors): ``We impose on the defendant the following punishment: 48 months imprisonment, of which the defendant will serve 30 months actual imprisonment and the rest, 18 months, will be conditional imprisonment... (CrimC 1124/04)''.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2" }, { "text": "In this case, the APE task is to extract ``30 months'' as the actual imprisonment punishment. This also exemplifies the typical linguistic difficulty of the task: the noun ``imprisonment'' repeats three times, referring first to the total punishment imposed, then to the actual imprisonment, and then to the probation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2" }, { "text": "This section describes the construction of our corpus, which to the best of our knowledge is the first annotated legal corpus of sentencing decisions in sexual assault cases in Hebrew. In Section 3.1 we discuss the cases comprising the corpus, consisting of 30 years of sentencing decisions in sexual offense cases, and in Section 3.2 we present the manual annotation schema of the different types of punishment and the duration (in months and years) of the actual or conditional imprisonment which the courts imposed in these cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Corpus of Annotated Sentencing Decisions in Israeli Law", "sec_num": "3" }, { "text": "We compiled a corpus containing sentencing decisions from Israel Magistrate and District Courts, from the years 1990-2021, as collected by the Nevo legal database. 2 All the cases in the corpus deal with sexual offenses under sections 345-351 of the Israel Penal Law, 5737-1977, including offenses of rape, sodomy, indecent acts and sex offenses within the family. The characteristics of our corpus are presented in Table 1 and Figure 1. This corpus, which is available upon request, directly lends itself to the quantitative exploration of sentencing and punishment patterns in sexual assault cases in the Israeli legal system, as well as for other areas of criminal law. In total, this includes 1043 cases, 181k sentences and 3M words, of which we annotated a subset of 100 cases that include 13k sentences and 210k words. The sentences vary considerably in length, with an average length of 16.5±15.4 words in all files and 25±16.5 words in the annotated subset. 3", "cite_spans": [], "ref_spans": [ { "start": 411, "end": 418, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 423, "end": 431, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Data Collection", "sec_num": "3.1" }, { "text": "We set out to annotate the punishment as defined in Section 2 in a sample of 100 sentencing decisions. We achieve this in two semi-automatic annotation steps, as exemplified in Table 2 and elaborated below. This setup was found to be useful both in terms of the annotation quality, as well as in providing a direct supervision signal for the intermediate tasks. All annotations were done by legal scholars and practitioners or under their guidance and supervision. Imprisonment Sentence Identification. This is a sentence-level, binary annotation task, as exemplified in the third column in Table 2 (labelled ``Prison [Y/N]''). We identified that the actual imprisonment is often contained within a single sentence in the decision.
Sometimes this sentence also contains the conditioned punishment; for example, see Table 2, where the third row shows the verdict of 48 months imprisonment, of which 30 months are actual imprisonment and the remaining 18 are conditioned. In other cases, where the actual and conditional imprisonment are in separate sentences, we were interested in the actual imprisonment. In Table 2 we see four different sentences that contain the word ``imprisonment'', however only the third sentence contains the actual imprisonment imposed on the defendant. In this case it also contains the conditioned imprisonment, but this is not always the case; in Section 5 we will see how this affects our models' performance.", "cite_spans": [], "ref_spans": [ { "start": 177, "end": 184, "text": "Table 2", "ref_id": null }, { "start": 592, "end": 611, "text": "Table 2 (labelled`P", "ref_id": null }, { "start": 818, "end": 825, "text": "Table 2", "ref_id": null }, { "start": 1115, "end": 1122, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "Naturally, the vast majority of sentences should be labelled negatively, as most of the sentences do not convey the punishment. To ease the annotation process, we automatically labelled as negative all sentences which did not contain a phrase from a predefined list of words that indicate sentencing decisions and that were found to convey the punishment in our data. Each sentence was linked to its document, so that in cases of ambiguity we could evaluate the single sentence against the full judicial decision to reach a conclusive annotation. This resulted in negative annotations for 11.2K sentences in our dataset (85%). The remaining sentences (15%) were manually annotated with the guidance and supervision of legal scholars.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "A major challenge in annotating these remaining sentences was differentiating between the punishment imposed on the defendant in this particular case and the discussion of previous punishments, for example, references to punishments that were imposed on the defendant in previous cases, or punishments that were given in similar cases, which then serve to establish the punishment standard. These often use similar terminology to that of the current punishment; for example, see the second row of Table 2. In other cases, a sentence containing the imprisonment in the current case is followed by a sentence which activates a previous probation. In such cases, we annotate both sentences as conveying an imprisonment. This annotation step resulted in 132 sentences annotated positively with either actual imprisonment or probation, while the remaining 13K sentences were marked negatively, either automatically or by human experts. This annotation averaged 1.26 sentences marked positive for conveying the punishment per case, thus matching our intuition that the punishment in each decision tends to be conveyed in a single sentence. The prosecution's position on the desired punishment for the defendant. We attempt to rule it out based on the verb "request" and based on the fact that there are no numbers in this sentence. A reference in the decision to punishments that were imposed in prior cases, usually as an example for the standard of punishment. We attempt to rule it out based on the past tense of the verb "was sentenced" and characters such as "/" that mark the docket number of a prior court case.
Combined punishment statement consisting of an actual imprisonment and probation. We attempt to extract only the number of months of actual imprisonment [30], while ruling out the total months [48] and the probation months [18]. In cases of combined punishment, we do not rule out the sentence.", "cite_spans": [ { "start": 1771, "end": 1775, "text": "[48]", "ref_id": null }, { "start": 1801, "end": 1805, "text": "[18]", "ref_id": null } ], "ref_spans": [ { "start": 497, "end": 504, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "One way is to check if it contains the term "and the rest", which indicates the actual imprisonment, preceding the term "and the rest" 30 \u202b\u05d1\u05d9\u05d5\u05dd\u202c \u202b\u05d9\u05d7\u05dc\u202c \u202b\u05d1\u05e4\u05d5\u05e2\u05dc,\u202c \u202b\u05d4\u05de\u05d0\u05e1\u05e8\u202c \u202b\u05e2\u05d5\u05e0\u05e9\u202c .31", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "Actual imprisonment will start on 31.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "Procedural orders regarding the execution of the imprisonment. These are normally short sentences that include the word for imprisonment and a number (normally either a date or an hour), which renders them very confusing for our models. In addition, in many cases they appear not as a full sentence but in fact truncated immediately after the first period, due to limitations of sentence extraction. We attempt to rule them out by the fact that they do not contain a time unit. 0 \u202b\u05e9"\u05d7\u202c 5,000 \u202b\u05d1\u05e9\u05d9\u05e2\u05d5\u05e8\u202c \u202b\u05e7\u05e0\u05e1\u202c \u202b\u05ea\u05e9\u05dc\u05d5\u05dd\u202c \u202b\u05ea\u05d7\u05ea\u05d9\u05d5.\u202c \u202b\u05de\u05d0\u05e1\u05e8\u202c \u202b\u05d9\u05de\u05d9\u202c 30 \u202b\u05d0\u05d5\u202c A fine of 5,000 NIS or 30 days imprisonment instead.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "A fine that is given in addition to the actual imprisonment. The fine can be substituted by a 30-day imprisonment alternative. We attempt to rule it out based on the word "fine", which does not appear in a sentence reflecting the actual imprisonment. 0 Table 2 : APE annotation example, including all the sentences in which the Hebrew word for imprisonment appears. We provide an example from our data for some of the challenges of this task. We refer to some of these examples once more in the models' error analyses in Section 5. For brevity's sake we condense the two annotation phases into a single "prison time" column, which is marked zero if the sentence does not convey a punishment.", "cite_spans": [], "ref_spans": [ { "start": 251, "end": 258, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "Imprisonment Time Annotation. In the second stage, we manually annotate an integer denoting the duration (in months) of imprisonment, as exemplified in the last column in Table 2 (denoted ``Prison time''). We presented our annotators with the sentences found in the previous stage to contain an imprisonment punishment, and asked them to label the actual imprisonment time in months.
For example, the third sentence in Table 2 is annotated with 30 months of imprisonment. Overall, punishments vary between 0 months (no actual imprisonment) and 168 months (14 years of actual imprisonment). The average punishment was 32 months, and the median was 15 months.", "cite_spans": [], "ref_spans": [ { "start": 172, "end": 179, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "We present several models for predicting the imprisonment incurred in free-text sentencing decisions. The high-level approach, depicted in Figure 2, is composed of two steps, following the human annotation process described in the previous section. First, we identify sentences containing the imprisonment punishment (Section 4.1), from which we extract the term itself and normalize it to a number of imprisonment months (Section 4.2).", "cite_spans": [], "ref_spans": [ { "start": 140, "end": 150, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": "4" }, { "text": "First, we try to find the sentences conveying the imprisonment. We start with a keyword-based approach to filter a subset of relevant sentences (e.g., containing the Hebrew word for imprisonment). This allows us to reduce the number of sentences per case from hundreds or even thousands to approximately 14 on average.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Imprisonment Sentence Detection", "sec_num": "4.1" }, { "text": "Following this, we aim to extract one sentence predicted to contain the imprisonment. We experiment with a rule-based approach and several machine learning models, including SVM and random forest. In all models we use linguistic as well as structural document-level features, such as the position of the sentence within the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Imprisonment Sentence Detection", "sec_num": "4.1" }, { "text": "Rule-based approach. This approach consists of a scoring system for several keywords, compiled based on the authors' legal expertise. Specifically, we created four lists: two lists with strong and moderate words that indicate this is the target sentence, hence ``positive words'', and two lists including strong and moderate words that indicate the sentence probably does not include the imprisonment decision in the case, hence ``negative words''. Each of these was heuristically scored based on a held-out development set. A sentence was deemed positive if and only if its score surpasses a threshold, also determined based on the development set. See the full details in our codebase, to be made available upon publication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Imprisonment Sentence Detection", "sec_num": "4.1" }, { "text": "\u2022 Strong positive words: verbs that indicate the judicial decision on the punishment, such as ``sentencing'', ``deciding'', ``imposing'' etc., all in present tense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Imprisonment Sentence Detection", "sec_num": "4.1" }, { "text": "\u2022 Moderate positive words: include the infinitive form of the strong positive words.
In Hebrew, these can be used both as past and present tense, which is why we decided to score them moderately, in case they were used to refer to past decisions, which the judge uses to establish the punishment standard.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Imprisonment Sentence Detection", "sec_num": "4.1" }, { "text": "\u2022 Moderate negative words: characters such as brackets and backslash that indicate a reference to a docket number, usually of a previous legal case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Imprisonment Sentence Detection", "sec_num": "4.1" }, { "text": "\u2022 Strong negative words: Hebrew words relating a request or petition brought before the court regarding the desired punishment, usually by one of the sides, as opposed to the final judicial decision, which is an order of the court.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Imprisonment Sentence Detection", "sec_num": "4.1" }, { "text": "Supervised modeling. This approach consists of using features similar to the rule-based approach, and experimenting with different machine learning techniques for determining their weights. This is divided into two stages. Stage 1: identifying punishment sentences by assigning probabilities and choosing all sentences above a threshold. Stage 2: extracting a single sentence in each document which includes the actual imprisonment, since our final goal is extracting the number of imprisonment months. For this we perform an argmax over the probabilities assigned by the model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Imprisonment Sentence Detection", "sec_num": "4.1" }, { "text": "M(D) = arg max_{s \u2208 D} P_{\u03b8_M}(s)    (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Imprisonment Sentence Detection", "sec_num": "4.1" }, { "text": "Where D is a legal case, composed of a list of sentences s, M denotes different models whose weights are denoted with \u03b8_M, and M(D) is the predicted sentence from case D according to model M. Figure 2: High-level diagram of our models for extracting duration of imprisonment from court decisions (left) to imposed punishment (right). We begin by identifying candidate sentences for containing the imprisonment (Section 4.1), followed by an extraction of the imprisonment term, in months (Section 4.2).", "cite_spans": [], "ref_spans": [ { "start": 192, "end": 200, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Imprisonment Sentence Detection", "sec_num": "4.1" }, { "text": "Within this framework, we compare two models: support vector machine (SVM; Cortes and Vapnik 1995) and random forest (RF; Ho 1995), both trained and tested using cross-validation on the annotated subset.", "cite_spans": [ { "start": 80, "end": 103, "text": "Cortes and Vapnik 1995)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Imprisonment Sentence Detection", "sec_num": "4.1" }, { "text": "After identifying a candidate sentence for imprisonment, we implement the following pipeline to extract the number of months of imprisonment incurred.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting the number of months of imprisonment", "sec_num": "4.2" }, { "text": "Identifying numbers in Hebrew. First, we identify all candidate numbers in the sentence. To achieve this we use a regular expression for each digit, as well as rules for converting multiple-digit numbers.
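As a rough illustration of this step, the sketch below collects the candidate integers of a sentence, whether written as digits or as Hebrew number words. The word lists are deliberately partial and simplified (illustrative only, not the full rule set):

```python
import re

# Illustrative, partial lookup tables; the full rules cover all digits,
# gendered forms, and spelling (vowel-letter) variants.
HEBREW_UNITS = {"אחד": 1, "אחת": 1, "שלושה": 3, "ארבעה": 4, "שש": 6, "עשר": 10}
HEBREW_TENS = {"עשרים": 20, "שלושים": 30}

def candidate_numbers(sentence: str) -> list:
    """Return candidate integers found in a sentence, written either as
    digits or as (simplified) Hebrew number words."""
    numbers = [int(m) for m in re.findall(r"\d+", sentence)]
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok in HEBREW_TENS:
            value = HEBREW_TENS[tok]
            # Compounds such as "twenty and four" (twenty-four): a tens word
            # followed by a unit word prefixed with "ו" (and).
            if i + 1 < len(tokens) and tokens[i + 1].lstrip("ו") in HEBREW_UNITS:
                value += HEBREW_UNITS[tokens[i + 1].lstrip("ו")]
            numbers.append(value)
        elif tok in HEBREW_UNITS:
            numbers.append(HEBREW_UNITS[tok])
    return numbers
```

The word lists above are only indicative; the Hebrew-specific issues that the full rule set must handle are described below.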
The Hebrew number, similar to English, has a basic form that appears if it is between 0-10, an indicator if it is between 10-20, and a slightly different form if it is a multiple of ten (i.e., twenty, thirty). However, there are also several differences that pose challenges unique to Hebrew:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting the number of months of imprisonment", "sec_num": "4.2" }, { "text": "\u2022 Suffixes that are range-dependent: we introduced different rules to account for Hebrew morphology in different number ranges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting the number of months of imprisonment", "sec_num": "4.2" }, { "text": "\u2022 One-year / one-month punishments can be deduced only by elimination: Hebrew does not have a non-specific determiner, so a punishment of one year, for example, can be phrased as ``year of imprisonment'', without a time unit (``one'') or a determiner. Similarly, for any number above twenty, it is accepted in Hebrew to mention the time unit in singular rather than plural form (``20 year''). We overcome this challenge by using elimination to determine whether the found time unit is an indicator of 1 year / 1 month. Table 3: Full task evaluation using the sentences extracted by the different models. It shows that on the overall task the rule-based model achieved the highest accuracy and also the lowest average distance from the ground-truth actual imprisonment (manually tagged).", "cite_spans": [], "ref_spans": [ { "start": 515, "end": 522, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Extracting the number of months of imprisonment", "sec_num": "4.2" }, { "text": "\u2022 Spelling variation: spelling in Hebrew often varies due to its treatment of vowels, which are sometimes indicated with diacritics (Niqqud), or omitted altogether (Ravid and Haimowitz, 2006). To address this, we account for all the possible vowel and syllable combinations for all digits.", "cite_spans": [ { "start": 165, "end": 192, "text": "(Ravid and Haimowitz, 2006)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Extracting the number of months of imprisonment", "sec_num": "4.2" }, { "text": "Identifying the imprisonment duration. Following this, we aim to find the number that indicates the length of actual imprisonment. First, we check the following heuristic: is this sentence of the form ``The total sentence of Z units of time, which consists of X actual imprisonment time and Y conditioned imprisonment'' (see row 3 of Table 2)? This was done by checking if there are exactly three numbers mentioned, and if Z=X+Y. If that is in fact the case, then we return X as the actual imprisonment. Otherwise, we created a scoring method which looks for certain features, such as the distance between the number and some time-unit indicator such as ``years'' or ``months'', or the distance between the number and the end of the document. This scoring was assessed locally, i.e. there was no absolute threshold, and the number with the best score within each sentence was chosen.
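A minimal sketch of this selection step (with illustrative unit-word lists and a simplified proximity score, rather than the exact features used in our scoring method) is:

```python
# Illustrative unit-word lists; the real scoring uses richer word lists and
# additional features such as the distance from the end of the document.
YEAR_WORDS = {"שנה", "שנים", "שנות"}
MONTH_WORDS = {"חודש", "חודשים"}

def to_months(value, unit_token):
    """Normalize a number to months, given its adjacent time-unit token."""
    return value * 12 if unit_token in YEAR_WORDS else value

def select_actual_imprisonment(numbers, tokens):
    """numbers: candidate integers in order of appearance in the sentence;
    tokens: the tokenized candidate sentence."""
    # Pattern: "Z in total, of which X actual imprisonment and Y probation".
    if len(numbers) == 3 and numbers[0] == numbers[1] + numbers[2]:
        return numbers[1]  # X, the actual imprisonment
    # Fallback: prefer the digit token closest to an explicit time-unit word.
    unit_positions = [j for j, t in enumerate(tokens)
                      if t in YEAR_WORDS or t in MONTH_WORDS]
    best, best_dist = None, None
    for i, tok in enumerate(tokens):
        if not tok.isdigit():
            continue
        dist = min((abs(i - j) for j in unit_positions), default=len(tokens))
        if best_dist is None or dist < best_dist:
            best, best_dist = int(tok), dist
    return best
```

The selected value is then normalized to months (e.g., with to_months above) according to the time unit nearest to it.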
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting the number of months of imprisonment", "sec_num": "4.2" }, { "text": "In this section we analyze the performance of the models described in Section 4 in the different Table 4: A comparison between the first stage of the supervised approaches in their ability to identify the sentence that includes the punishment. Note that this is the ability to extract 2-5 sentences, of which only one is correct; for the full APE task we need to choose the correct one.", "cite_spans": [], "ref_spans": [ { "start": 97, "end": 104, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "stages of the task. The main results are presented in Table 3. In addition, we examine supervised model performance on sentence identification in Table 4, perform manual error analysis in Table 5, and plot punishment trends on the entire corpus in Figure 1. We draw several conclusions based on these results.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 61, "text": "Table 3", "ref_id": null }, { "start": 147, "end": 154, "text": "Table 4", "ref_id": null }, { "start": 190, "end": 197, "text": "Table 5", "ref_id": null }, { "start": 251, "end": 259, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "The supervised models present high accuracy in the sentence identification task. Both supervised models in Table 4 show high recall in tagging the sentences which convey the imprisonment. However, they also tag some additional false positive sentences, hence decreasing the precision rate. In total, this results in between 2 and 5 sentences tagged as positive in each document.", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 115, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "The supervised models' probabilities are not well-calibrated. The APE task requires choosing one sentence from which the number of months of actual imprisonment is extracted in the next stage. This means that for each false positive sentence there is exactly one equivalent false negative. Hence, in our case precision equals recall. In the rule-based approach we noticed that a different threshold applies for each legal case, hence we scored them separately. Our assumption was that the supervised models would score features similar to those used in the rule-based approach in a more accurate way. Thus we attempted a similar local-threshold approach, by performing argmax on the learner's probability over all sentences from the same legal case, as defined in Equation 1. However, this was not the case: as observed in Table 3, the rule-based approach achieves better results in extracting a single sentence and also in extracting the actual imprisonment time. This points to the supervised models' probabilities not being calibrated, perhaps due to the low-resource domain and small number of samples.", "cite_spans": [], "ref_spans": [ { "start": 805, "end": 812, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "All models tend to confuse probation with actual imprisonment. Error analysis in Table 5 shows that the most common error was extracting a sentence with the probation rather than the actual imprisonment.
We remind the reader that ``probation'' in Hebrew is phrased ``conditional imprisonment'', which may lead to this confusion. In many cases, probation and actual imprisonment are pronounced in one sentence. In other cases, the probation directly follows the pronouncement of the actual imprisonment, and has similar syntactic and semantic cues.", "cite_spans": [], "ref_spans": [ { "start": 81, "end": 88, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "Other error patterns. Rule-based and RF errors are similar, containing mostly references from past cases. This type of error includes sentencing decisions either of similar crimes or of past cases of the defendant (see Table 2, row 2). These sentences are similar in structure to those reflecting the actual imprisonment, and also confused legal expert annotators. In contrast, SVM errs on extracting sentences describing fines (accompanied by an alternative of imprisonment) or procedures regarding the execution of the incurred imprisonment, rather than sentences reflecting the actual imprisonment duration. While those cases include a number and the Hebrew word for imprisonment, they are easily ruled out by human annotators. Both learners use this word as a feature; however, SVM still makes mistakes classifying fines.", "cite_spans": [], "ref_spans": [ { "start": 217, "end": 224, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "Inter-annotator agreement reveals the limitations of the sentence-level approach. We asked legal experts to evaluate the rule-based performance by rating each sentence that the rule-based model predicted, indicating whether it reflects the actual imprisonment or not. This is the same task the models were required to perform. We used Cohen's kappa to measure the inter-annotator agreement for each pair, presented in Table 6, and Fleiss' kappa (Fleiss, 1971) for measuring the level of agreement between the five different annotators and three different classes (sentence is indicative of punishment / is not indicative of punishment / cannot decide). The annotators achieved a score of 0.341, which is considered a fair agreement (Viera et al., 2005). On average, the annotators managed to correctly find the actual imprisonment in 79% of the sentences. In many cases the annotators expressed doubt regarding their ability to tag the sentence as the actual imprisonment solely based on the single sentence extracted by the algorithm, without the context of the full judicial decision. Table 5: Error analysis for all three models (Rule-Based, Support-Vector Machine, and Random Forest) used for imprisonment sentence detection. Sentences that contain probation are the only common cause of errors in all models. In both supervised approaches they are also responsible for the highest percentage of errors. For the rule-based model we observe that references to past cases were more confusing; these were also confusing in the manual tagging task. In total, the RF and rule-based models were more similar in their error analysis than the SVM.
The remaining errors for each classifier did not fall under a common category and could be generally defined as miscellaneous.", "cite_spans": [ { "start": 437, "end": 451, "text": "(Fleiss, 1971)", "ref_id": "BIBREF5" }, { "start": 719, "end": 739, "text": "(Viera et al., 2005)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 1015, "end": 1022, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "
      ann1  ann2  ann3  ann4  ann5
ann1   -    0.13  0.46  0.36  0.34
ann2  0.13   -    0.14  0.31  0.42
ann3  0.46  0.14   -    0.48  0.34
ann4  0.36  0.31  0.48   -    0.63
ann5  0.34  0.42  0.34  0.63   -
Table 6: Agreement between each pair of annotators in terms of Cohen's Kappa. The 5-way agreement between all annotators is 0.341 Fleiss' kappa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "This suggests a limit to the ability to determine whether a sentence from a legal document contains an actual imprisonment time without the larger document context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "There is room for improvement in extracting the actual imprisonment sentence. Table 3 demonstrates the different models' performances on the full APE task. In this case, the rule-based approach performs best with an average error of 5 months, while supervised models reach an average error of 10-11.6 months. This table also shows that sentence extraction accuracy alone does not predict the ability to succeed in the APE task. This is also affected by the type of mistakes, i.e. when the wrong sentence is predicted, the number of months extracted is not directly related to the actual imprisonment. However, given the correct sentence, extracting the duration of imprisonment was accurate in 89.7% of the cases. Therefore, improvements could be achieved by better extraction of the actual imprisonment sentence. Future work may consider separately tagging the probation sentence, as its structure might be easier for the model to learn. Once learned, it could be used as an anchor. This problem could also benefit from employing contextualized representations, such as adapting a Hebrew language model like AlephBERT (Seker et al., 2021) to the legal domain, an approach recently shown effective in English (Chalkidis et al., 2020).", "cite_spans": [ { "start": 1119, "end": 1139, "text": "(Seker et al., 2021)", "ref_id": "BIBREF13" }, { "start": 1211, "end": 1235, "text": "(Chalkidis et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 78, "end": 85, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "A post-hoc summary of the legal decision improves performance. Nevo, the legal database we use, provides a ``mini ratio'', a post-hoc summary of each decision in a few sentences, written by Nevo's editorial team. When we add this mini ratio to the annotation, it increases the supervised models' ability to extract the target sentence by about 50%, showing that shorter inputs lead to better generalization on small-scale datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "Identifying sentencing patterns in sexual assault cases in Israel.
Using our best performing model, we present rough statistics regarding the punishments given in the past thirty years in Figure 3. The median sentenced punishment across all legal decisions in our data is 36 months; however, we observe that the most common punishments are under a year. While the sentencing decisions are generally available in legal search engines, annotating them is an expensive process. For this reason, many statistical observations are hard to obtain. This demonstrates the potential contribution of our task from a socio-legal point of view.", "cite_spans": [], "ref_spans": [ { "start": 189, "end": 197, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "Most related to our work are studies that bring together domain expertise with ML models to extract information from specialized texts. This is the case for Soh et al. (2019), who showed that conditional random fields perform better than DNNs for sentence boundary detection in the legal domain. While this is considered a closed problem in NLP, they showed that this is not the case for legal texts.", "cite_spans": [ { "start": 157, "end": 174, "text": "Soh et al. (2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Similar paradigms can be found in the medical field, where domain knowledge also plays a crucial role. Malmasi et al. (2019) show that in some cases a rule-based approach achieves better performance than SVM. Chalkidis et al. (2020) recently introduced a language model fine-tuned for the English legal domain. Taking a similar approach for the APE task is an interesting avenue for future work.", "cite_spans": [ { "start": 104, "end": 125, "text": "Malmasi et al. (2019)", "ref_id": "BIBREF7" }, { "start": 211, "end": 234, "text": "Chalkidis et al. (2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this work we created the first annotated corpus of Hebrew-language sentencing decisions, focusing on sexual assaults. We compared a rule-based approach with supervised learners, using the unique attributes of the legal language for representing sentences. We found that the rule-based approach achieved the best results, with an average error of 5 months and accuracy of 68% in extracting the punishment sentence. Our analysis shows that future research could focus on fine-tuning of the supervised models.
While supervised learning models help us narrow down a full legal document to 2-5 sentences that include the punishment, further research can contribute to reaching a single target sentence, which could also benefit from our error analysis, especially regarding the probation sentences, perhaps targeting them separately in a prior task and using them as features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "https://www.haaretz.com/israel-news/.premiumwomen-decry-lenient-rape-sentence-1.5383195, https://balkaninsight.com/2021/04/05/victims-discouragedby-lenient-sentences-for-sex-crimes-in-serbia/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.nevo.co.il. The data does not represent all the cases that were held in court but only those that were documented in the Nevo database.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The high variance in decisions' length in the legal domain is due in part to the difficulty in segmenting legal texts, as noted by Sanchez (2019).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their helpful comments and feedback. This work was supported in part by a research gift from the Allen Institute for AI and by a research grant from the Center for Interdisciplinary Data Science Research (CIDR) at the Hebrew University of Jerusalem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Predicting judicial decisions of the european court of human rights: A natural language processing perspective", "authors": [ { "first": "Nikolaos", "middle": [], "last": "Aletras", "suffix": "" }, { "first": "Dimitrios", "middle": [], "last": "Tsarapatsanis", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Preo\u0163iucpietro", "suffix": "" }, { "first": "Vasileios", "middle": [], "last": "Lampos", "suffix": "" } ], "year": 2016, "venue": "PeerJ Computer Science", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preo\u0163iuc-Pietro, and Vasileios Lampos. 2016. Predicting judicial decisions of the european court of human rights: A natural language processing perspective. PeerJ Computer Science, 2:e93.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Brandy Weiss, Mark Pfaff, and Bill Liao. 2021. Scalable and explainable legal prediction", "authors": [ { "first": "Karl", "middle": [], "last": "Branting", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Pfeifer", "suffix": "" }, { "first": "Bradford", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Ferro", "suffix": "" }, { "first": "John", "middle": [], "last": "Aberdeen", "suffix": "" } ], "year": null, "venue": "Artificial Intelligence and Law", "volume": "29", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L Karl Branting, Craig Pfeifer, Bradford Brown, Lisa Ferro, John Aberdeen, Brandy Weiss, Mark Pfaff, and Bill Liao. 2021. Scalable and explainable legal prediction.
Artificial Intelligence and Law, 29(2):213238.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Prodromos Malakasiotis, Nikolaos Aletras, and Ion An droutsopoulos", "authors": [ { "first": "Ilias", "middle": [], "last": "Chalkidis", "suffix": "" }, { "first": "Manos", "middle": [], "last": "Fergadiotis", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.02559" ] }, "num": null, "urls": [], "raw_text": "Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion An droutsopoulos. 2020. Legalbert: The mup pets straight out of law school. arXiv preprint arXiv:2010.02559.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Support vector networks", "authors": [ { "first": "Corinna", "middle": [], "last": "Cortes", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "Machine learning", "volume": "20", "issue": "3", "pages": "273--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support vector networks. Machine learning, 20(3):273 297.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Law and word order: Nlp in le gal tech", "authors": [ { "first": "Robert", "middle": [], "last": "Dale", "suffix": "" } ], "year": 2019, "venue": "Natural Language Engineering", "volume": "25", "issue": "1", "pages": "211--217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Dale. 2019. Law and word order: Nlp in le gal tech. Natural Language Engineering, 25(1):211 217.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Measuring nominal scale agree ment among many raters", "authors": [ { "first": "L", "middle": [], "last": "Joseph", "suffix": "" }, { "first": "", "middle": [], "last": "Fleiss", "suffix": "" } ], "year": 1971, "venue": "Psychological bulletin", "volume": "76", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph L Fleiss. 1971. Measuring nominal scale agree ment among many raters. Psychological bulletin, 76(5):378.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Random decision forests", "authors": [ { "first": "Kam", "middle": [], "last": "Tin", "suffix": "" }, { "first": "", "middle": [], "last": "Ho", "suffix": "" } ], "year": 1995, "venue": "Pro ceedings of 3rd international conference on docu ment analysis and recognition", "volume": "1", "issue": "", "pages": "278--282", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tin Kam Ho. 1995. Random decision forests. In Pro ceedings of 3rd international conference on docu ment analysis and recognition, volume 1, pages 278 282. IEEE.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Comparison of nat ural language processing techniques in analysis of sparse clinical data: insulin decline by patients", "authors": [ { "first": "Shervin", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "Wendong", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Naoshi", "middle": [], "last": "Hosomura", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Turchin", "suffix": "" } ], "year": 2019, "venue": "AMIA Summits on Translational Science Proceed ings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shervin Malmasi, Wendong Ge, Naoshi Hosomura, and Alexander Turchin. 2019. 
Comparison of nat ural language processing techniques in analysis of sparse clinical data: insulin decline by patients. AMIA Summits on Translational Science Proceed ings, 2019:610.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Accessing legal informa tion across boundaries: a new challenge", "authors": [ { "first": "Ginevra", "middle": [], "last": "Peruginelli", "suffix": "" } ], "year": 2009, "venue": "Inter national Journal of Legal Information", "volume": "37", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ginevra Peruginelli. 2009. Accessing legal informa tion across boundaries: a new challenge. Inter national Journal of Legal Information, 37(3):276 305.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "six months is a joke\": Carceral feminism and penal pop ulism in the wake of the stanford sexual assault case", "authors": [ { "first": "D", "middle": [], "last": "Nickie", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Phillips", "suffix": "" }, { "first": "", "middle": [], "last": "Chagnon", "suffix": "" } ], "year": 2020, "venue": "Feminist criminology", "volume": "15", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nickie D Phillips and Nicholas Chagnon. 2020. \"six months is a joke\": Carceral feminism and penal pop ulism in the wake of the stanford sexual assault case. Feminist criminology, 15(1):4769.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Is abel Margarida Duarte, Catarina Vaz Warrot, and Rui SousaSilva. 2020. Biased language detection in court decisions", "authors": [ { "first": "Alexandra", "middle": [ "Guedes" ], "last": "Pinto", "suffix": "" }, { "first": "Henrique", "middle": [ "Lopes" ], "last": "Cardoso", "suffix": "" } ], "year": null, "venue": "Intelligent Data Engineering and Automated Learning IDEAL 2020", "volume": "", "issue": "", "pages": "402--410", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandra Guedes Pinto, Henrique Lopes Cardoso, Is abel Margarida Duarte, Catarina Vaz Warrot, and Rui SousaSilva. 2020. Biased language detection in court decisions. In Intelligent Data Engineering and Automated Learning IDEAL 2020, pages 402 410, Cham. Springer International Publishing.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The vowel path: Learning about vowel representation in writ ten hebrew", "authors": [ { "first": "Dorit", "middle": [], "last": "Ravid", "suffix": "" }, { "first": "Sarit", "middle": [], "last": "Haimowitz", "suffix": "" } ], "year": 2006, "venue": "Written Language & Literacy", "volume": "9", "issue": "1", "pages": "67--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorit Ravid and Sarit Haimowitz. 2006. The vowel path: Learning about vowel representation in writ ten hebrew. Written Language & Literacy, 9(1):67 93.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Sentence boundary detection in legal text", "authors": [ { "first": "George", "middle": [], "last": "Sanchez", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Natural Legal Language Processing Workshop", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/W19-2204" ] }, "num": null, "urls": [], "raw_text": "George Sanchez. 2019. Sentence boundary detection in legal text. In Proceedings of the Natural Legal Language Processing Workshop 2019, pages 3138, Minneapolis, Minnesota. 
Association for Computa tional Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Alephbert:a hebrew large pre trained language model to startoff your hebrew nlp application with", "authors": [ { "first": "Amit", "middle": [], "last": "Seker", "suffix": "" }, { "first": "Elron", "middle": [], "last": "Bandel", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Bareket", "suffix": "" }, { "first": "Idan", "middle": [], "last": "Brusilovsky", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Seker, Elron Bandel, Dan Bareket, Idan Brusilovsky, Refael Shaked Greenfeld, and Reut Tsarfaty. 2021. Alephbert:a hebrew large pre trained language model to startoff your hebrew nlp application with.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Legal area classification: A comparative study of text classifiers on Singapore Supreme Court judg ments", "authors": [ { "first": "Jerrold", "middle": [], "last": "Soh", "suffix": "" }, { "first": "Khang", "middle": [], "last": "How", "suffix": "" }, { "first": "Ian", "middle": [ "Ernst" ], "last": "Lim", "suffix": "" }, { "first": "", "middle": [], "last": "Chai", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Natural Legal Lan guage Processing Workshop", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/W19-2208" ] }, "num": null, "urls": [], "raw_text": "Jerrold Soh, How Khang Lim, and Ian Ernst Chai. 2019. Legal area classification: A comparative study of text classifiers on Singapore Supreme Court judg ments. In Proceedings of the Natural Legal Lan guage Processing Workshop 2019, pages 6777, Minneapolis, Minnesota. Association for Computa tional Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Modeling leg islation using natural language processing", "authors": [ { "first": "R", "middle": [], "last": "Van Gog", "suffix": "" }, { "first": "T", "middle": [ "M" ], "last": "Van Engers", "suffix": "" } ], "year": 2001, "venue": "2001 IEEE International Conference on Systems, Man and Cybernetics. eSystems and eMan for Cybernet ics in Cyberspace (Cat.No.01CH37236)", "volume": "1", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1109/ICSMC.2001.969873" ] }, "num": null, "urls": [], "raw_text": "R. Van Gog and T.M. Van Engers. 2001. Modeling leg islation using natural language processing. In 2001 IEEE International Conference on Systems, Man and Cybernetics. eSystems and eMan for Cybernet ics in Cyberspace (Cat.No.01CH37236), volume 1, pages 561566 vol.1.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Under standing interobserver agreement: the kappa statis tic", "authors": [ { "first": "J", "middle": [], "last": "Anthony", "suffix": "" }, { "first": "Joanne", "middle": [ "M" ], "last": "Viera", "suffix": "" }, { "first": "", "middle": [], "last": "Garrett", "suffix": "" } ], "year": 2005, "venue": "Fam med", "volume": "37", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony J Viera, Joanne M Garrett, et al. 2005. Under standing interobserver agreement: the kappa statis tic. 
Fam med, 37(5):360363.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "How does nlp benefit legal system: A summary of legal artificial intelligence", "authors": [ { "first": "Haoxi", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Chaojun", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Cunchao", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Tianyang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.12158" ] }, "num": null, "urls": [], "raw_text": "Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020. How does nlp benefit legal system: A summary of legal artificial intelligence. arXiv preprint arXiv:2004.12158.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Distribution of legal cases in our corpus by year. The small number of cases before year 2000 is probably due to changes in digitization of legal documents." }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Figure 3: The distribution of predicted punishments in our corpus. These were extracted using our rule-based model which performed best on this task. Median predicted punishment is 3 years, while more than a third of the punishments are below 15 months." }, "TABREF1": { "html": null, "type_str": "table", "num": null, "text": "Statistics of our annotated data, referring to the full corpus as well as to the annotated subset. In both cases, each legal decision contains many sentences and the length of the decisions varies considerably.", "content": "
Hebrew example | English translation | Comments | Punishment |
\u202b\u05d4\u05ea\u05d5\u05d1\u05e2\u05ea\u202c \u202b\u05e2\u05ea\u05e8\u05d4\u202c \u202b\u05db\u05df,\u202c \u202b\u05e2\u05dc\u202c \u202b\u05d0\u05e9\u05e8\u202c \u202b\u05e2\u05dc\u202c \u202b\u05d1\u05e4\u05d5\u05e2\u05dc\u202c \u202b\u05de\u05d0\u05e1\u05e8\u202c \u202b\u05e2\u05d5\u05e0\u05e9\u202c \u202b\u05dc\u05d4\u05d8\u05dc\u05ea\u202c \u202b\u05dc\u05de\u05d0\u05e1\u05e8\u202c \u202b\u05de\u05de\u05d5\u05e9\u05db\u05ea,\u202c \u202b\u05dc\u05ea\u05e7\u05d5\u05e4\u05d4\u202c \u202b\u05d4\u05e0\u05d0\u05e9\u05dd\u202c \u202b\u05de\u05e9\u05de\u05e2\u05d5\u05ea\u05d9\u202c \u202b\u05d5\u05dc\u05e4\u05d9\u05e6\u05d5\u05d9\u202c \u202b\u05ea\u05e0\u05d0\u05d9\u202c \u202b\u05e2\u05dc\u202c \u202b\u05dc\u05de\u05ea\u05dc\u05d5\u05e0\u05e0\u05ea.\u202c | Therefore, the prosecution requested to impose a pun-ishment of lengthy actual imprisonment, conditional imprisonment, and signifi-cant compensation to the | ||
victim. |