{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:46:48.612632Z" }, "title": "JuriBERT: A Masked-Language Model Adaptation for French Legal Text", "authors": [ { "first": "Stella", "middle": [], "last": "Douka", "suffix": "", "affiliation": { "laboratory": "", "institution": "HEC Paris", "location": {} }, "email": "" }, { "first": "Hadi", "middle": [], "last": "Abdine", "suffix": "", "affiliation": { "laboratory": "", "institution": "HEC Paris", "location": {} }, "email": "" }, { "first": "\u00c9cole", "middle": [], "last": "Polytechnique", "suffix": "", "affiliation": { "laboratory": "", "institution": "HEC Paris", "location": {} }, "email": "" }, { "first": "Michalis", "middle": [], "last": "Vazirgiannis", "suffix": "", "affiliation": { "laboratory": "", "institution": "HEC Paris", "location": {} }, "email": "" }, { "first": "Rajaa", "middle": [], "last": "El", "suffix": "", "affiliation": { "laboratory": "", "institution": "HEC Paris", "location": {} }, "email": "" }, { "first": "David", "middle": [], "last": "Restrepo", "suffix": "", "affiliation": { "laboratory": "", "institution": "HEC Paris", "location": {} }, "email": "" }, { "first": "Amariles", "middle": [ "Hec" ], "last": "Paris", "suffix": "", "affiliation": { "laboratory": "", "institution": "HEC Paris", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Language models have proven to be very useful when adapted to specific domains. Nonetheless, little research has been done on the adaptation of domain-specific BERT models in the French language. In this paper, we focus on creating a language model adapted to French legal text with the goal of helping law professionals. We conclude that some specific tasks do not benefit from generic language models pre-trained on large amounts of data. We explore the use of smaller architectures in domain-specific sub-languages and their benefits for French legal text. We prove that domain-specific pre-trained models can perform better than their equivalent generalised ones in the legal domain. Finally, we release JuriBERT, a new set of BERT models adapted to the French legal domain.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Language models have proven to be very useful when adapted to specific domains. Nonetheless, little research has been done on the adaptation of domain-specific BERT models in the French language. In this paper, we focus on creating a language model adapted to French legal text with the goal of helping law professionals. We conclude that some specific tasks do not benefit from generic language models pre-trained on large amounts of data. We explore the use of smaller architectures in domain-specific sub-languages and their benefits for French legal text. We prove that domain-specific pre-trained models can perform better than their equivalent generalised ones in the legal domain. Finally, we release JuriBERT, a new set of BERT models adapted to the French legal domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Domain-specific language models have evolved the way we learn and use text representations in natural language processing. Instead of using general purpose pre-trained models that are highly skewed towards generic language, we can now pre-train models that better meet our needs and are highly adapted to specific domains, like medicine and law. 
In order to achieve that, models are trained on large scale raw text data, which is a computationally expensive step, and then are used in many downstream evaluation tasks, achieving state-of-the-art results in multiple explored domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The majority of domain-specific language models so far are applied to the English language. Abdine et al. (2021) published French word vectors from large scale generic web content that surpassed previous non pre-trained word embeddings. Furthermore, Martin et al. (2020) introduced Camem-BERT, a monolingual language model for French, that is used for generic everyday text, and proved its superiority in comparison with other multilingual models. In the meantime, domain-specific language models for French are in lack. There is an even greater shortage when it comes to the legal field. Sulea et al. (2017) mentioned the importance of using state-of-the-art technologies to support law professionals and provide them with guidance and orientation. Given this need, we introduce Ju-riBERT, a new set of BERT models pre-trained on French legal text. We explore the use of smaller models architecturally when we are dealing with very specific sub-languages, like French legal text. Thus, we publicly release JuriBERT 1 in 4 different sizes online.", "cite_spans": [ { "start": 92, "end": 112, "text": "Abdine et al. (2021)", "ref_id": "BIBREF0" }, { "start": 250, "end": 270, "text": "Martin et al. (2020)", "ref_id": "BIBREF9" }, { "start": 589, "end": 608, "text": "Sulea et al. (2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work on domain-specific text data has indicated the importance of creating domain-specific language models. These models are either adaptations of existing generalised models, for example Bert Base by Devlin et al. (2019) trained on general purpose English corpora, or pre-trained from scratch on new data. In both cases, domain-specific text corpora are used to adjust the model to the peculiarities of each domain.", "cite_spans": [ { "start": 210, "end": 230, "text": "Devlin et al. (2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A remarkable example of adapting language models is the research done by who introduced BioBERT, a domain-specific language representation model pre-trained on large scale biomedical corpora. BioBERT outperformed BERT and other previous models on many biomedical text mining tasks and showed that pre-training on specific biomedical corpora improves performance in the field. Similar results were presented by Beltagy et al. (2019) that introduced SciBERT and showed that pre-training on scientific-related corpus improves performance in multiple domains, and by Yang et al. (2020) who showed that Fin-BERT, pre-trained on financial communication cor-pora, can outperform BERT on three financial sentiment classification tasks.", "cite_spans": [ { "start": 410, "end": 431, "text": "Beltagy et al. (2019)", "ref_id": "BIBREF2" }, { "start": 563, "end": 581, "text": "Yang et al. (2020)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Moving on to the legal domain, Bambroo and Awasthi (2021) worked on LegalDB, a DistilBERT model (Sanh et al., 2019) pre-trained on English legal-domain specific corpora. 
LegalDB outperformed BERT at legal document classification. Elwany et al. (2019) also proved that pre-training BERT can improve classification tasks in the legal domain and showed that acquiring large scale English legal corpora can provide a major advantage in legal-related tasks such as contract classification. Furthermore, Chalkidis et al. (2020) introduced LegalBERT, a family of English BERT models, that outperformed BERT on a variety of datasets in text classification and sequence tagging. Their work also showed that an architecturally large model may not be necessary when dealing with domainspecific sub-languages. A representative example is Legal-BERT-Small that is highly competitive with larger versions of LegalBert. We intent to further explore this theory with even smaller models.", "cite_spans": [ { "start": 96, "end": 115, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF10" }, { "start": 230, "end": 250, "text": "Elwany et al. (2019)", "ref_id": "BIBREF6" }, { "start": 498, "end": 521, "text": "Chalkidis et al. (2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Despite the increasing use of domain-specific models, we have mainly been limited to the English language. On the contrary, in the French language, little work has been done on the application of text classification methods to support law professionals, with the exception of Sulea et al. (2017) that managed to achieve state-of-the-art results in three legal-domain classification tasks. It is also worth mentioning Garneau et al. (2021) who introduced CriminelBART, a fine-tuned version of BARThez (Eddine et al., 2020) . CriminelBART is specialised in criminal law by using French Canadian legal judgments. All in all, no previous work has adapted a BERT model in the legal domain using French legal text.", "cite_spans": [ { "start": 276, "end": 295, "text": "Sulea et al. (2017)", "ref_id": "BIBREF11" }, { "start": 417, "end": 438, "text": "Garneau et al. (2021)", "ref_id": "BIBREF7" }, { "start": 500, "end": 521, "text": "(Eddine et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In order to evaluate our models we will be using two legal text classification tasks provided by the Court of Cassation, the highest court of the French judicial order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Downstream Evaluation Tasks", "sec_num": "3" }, { "text": "The subject of the first task is assigning the Court's Claimant's pleadings, \"m\u00e8moires ampliatifs\" in French, to a chamber and a section of the Court. This leads to a multi-class classification task with 8 different imbalanced classes. In Table 1 we can see the eight classes that correspond to the different chambers and sections of the Court, as well as their support in the data. for Commercial Law, Banking and Credit Law and others. Each chamber has two or more sections dealing with different topics. The second task is to classify the Claiment's pleadings to a set of 151 subjects, \"mati\u00e8res\" as stated in French. Figure 5 in appendix shows the support of the mati\u00e8res in the data. As we can see in Figure 1 the 10 recessive mati\u00e8res have between 7 to 1 examples in our dataset. 
We decided to remove the last 3 mati\u00e8res as they have less than 3 examples and therefore it is not possible to split them in train, test and development sets.", "cite_spans": [], "ref_spans": [ { "start": 239, "end": 246, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 621, "end": 629, "text": "Figure 5", "ref_id": null }, { "start": 706, "end": 714, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Downstream Evaluation Tasks", "sec_num": "3" }, { "text": "We introduce a new set of BERT models pre-trained from scratch in legal-domain specific corpora. We train our models on the Masked Language Modeling (MLM) task. This means that given an input text sequence we mask tokens with 15% prob-ability and the model is then trained to predict these masked tokens. We follow the example of Chalkidis et al. (2020) and choose to train significantly even smaller models, including Bert-Tiny and Bert-Mini. The architectural details of the models we pre-trained are presented in Table 2 . We also choose to further pre-train CamemBERT Base on French legal text in order to better explore the impact of using domain-specific corpora in pretraining.", "cite_spans": [ { "start": 330, "end": 353, "text": "Chalkidis et al. (2020)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 516, "end": 523, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "JuriBERT", "sec_num": "4" }, { "text": "For the pre-training we used two different French legal text datasets. The first dataset contains data crawled 2 from the L\u00e9gifrance 3 website and consists of raw French Legal text. The L\u00e9gifrance text is then cleaned from non French characters. We also use the Court's decisions and the Claimant's pleadings from the Court of Cassation that consists of 123361 long documents from different court cases. All personal and private information, including names and organizations, has been removed from the documents for the privacy of the stakeholders. The combined datasets provide us with a collection of raw French legal text of size 6.3 GB that we will use to pre-train our models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": null }, { "text": "In order to pre-train a new BERT model from scratch we need a new Tokenizer. We trained a ByteLevelBPE Tokenizer with newly created vocabulary from the training corpus. The vocabulary is restricted to 32,000 tokens in order to be comparable to the CamemBERT model from Martin et al. (2020) and minimum token frequency of 2. We used a RobertaTokenizer as a template to include all the necessary special tokens for a Masked Language Model. Our new Legal Tokenizer encodes the data using 512-sized embeddings.", "cite_spans": [ { "start": 269, "end": 289, "text": "Martin et al. (2020)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Legal Tokenizer", "sec_num": null }, { "text": "For the pre-training of the JuriBERT Model we used both the crawled L\u00e9gifrance data and the Pleadings Dataset, thus creating a 6.3GB collection of legal texts. The encoded corpus was then used to pre-train a BERT model from scratch. Our model was pre-trained in 4 different architectures. 
As a result we have JuriBERT Tiny with 2 layers,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "JuriBERT", "sec_num": null }, { "text": "Architecture Params JuriBERT Tiny L=2, H=128, A=2 6M JuriBERT Mini L=4, H=256, A=4 15M JuriBERT Small L=6, H=512, A=8 42M JuriBERT Base L=12, H=768, A=12 110M JuriBERT-FP L=12, H=768, A=12 110M ", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 163, "text": "Params JuriBERT Tiny L=2, H=128, A=2 6M JuriBERT Mini L=4, H=256, A=4 15M JuriBERT Small L=6, H=512, A=8 42M JuriBERT Base L=12, H=768, A=12", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "We also pre-trained a task-specific model that is expected to perform better in the classification of the Claiment's pleadings. For that we used only the Pleadings Dataset for the pre-training that is a 4GB collection of legal documents. The taskspecific JuriBERT model for the Cour de Cassation task was pre-trained in 2 architectures, JuriBERT Tiny (L=2, H=128, A=2) and JuriBERT Mini (L=4, H=256, A=4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Specific JuriBERT", "sec_num": null }, { "text": "Apart from pre-training from scratch we decided to also further pre-train CamemBERT Base on the training data. Our goal is to compare its performance with the original JuriBERT model to further explore the impact of using specific-domain corpora during pre-training. JuriBERT-FP uses the same architecture as CamemBERT Base and JuriB-ERT Base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "JuriBERT-FP", "sec_num": null }, { "text": "Pre-training Details All the models were pretrained for 1M steps. A learning rate of 1e \u2212 4 was used along with an Adam optimizer (\u03b2 1 =0.9, \u03b2 2 =0.999) with weight decay of 0.1 and a linear scheduler with 10,000 warm-up steps. All the models were pre-trained with batch size of 8 expect for JuriBERT Base and JuriBERT-FP that used batches of size 4. For the pre-training we used an Nvidia GTX 1080Ti GPU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "5" }, { "text": "Pre-training Corpora CamemBERT 138GB BARThez 66GB JuriBERT 6.3GB JuriBERT-FP 6.3GB Task JuriBERT 4GB Fine-tuning Details Our models were fine-tuned on the downstream evaluation task using the same classification head as Devlin et al. (2019) that consists of a Dense layer with tanh function followed by a Dense layer with softmax activation function and Dropout layers with fixed dropout rate of 0.1. We applied grid-search to the learning rate on a range of {2e \u2212 5, 3e \u2212 5, 4e \u2212 5, 5e \u2212 5}. We used an Adam optimizer along with a linear scheduler that provided the training with 100k warm-up steps. We train for a maximum of 30 epochs with patience of 2 epochs on the early stopping callback and checkpoints for the best model. For the classification we use only the paragraphs starting with 'ALORS QUE' from the Pleadings Dataset, as they include all the important information for the correct chamber and section. This was suggested by a lawyer from the Court of Cassation as the average size of a m\u00e8moire ampliatif is extremely big, from 10 to 30 pages long. By using the 'ALORS QUE' paragraphs we have text sequences with average size of 800 tokens. For the chambers and sections classification task we split the data in 14% development and 16% test data. 
For the mati\u00e8res classification we split the data in 17% development and 14% test data and stratify in order to have all classes represented in each subset. Both tasks use a fixed batch size of 4. For the fine-tuning we used an Nvidia GTX 1080Ti GPU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "The results on the downstream evaluation tasks are presented in Tables 4 and 5 . We compare our models with two CamemBERT versions, Base and Large, and with BARThez, a sequence-to-sequence model dedicated to the French language. Camem-BERT has been pre-trained on 138GB of French raw text from the OSCAR corpus. Despite the difference in pre-training corpora size, with our model using only 6.3GB of legal text, JuriBERT Small managed to outperform both CamemBERT Base and CamemBERT Large. This further proves the importance of domain-specific language models in natural language processing and transfer learning. Despite our expectations, the performance of JuriBERT Base does not exceed the performance of its smaller equivalent models. We attribute this peculiarity in the usage of smaller batch sizes when pre-training JuriBERT Base and also the fact that larger models usually need more computational resources and more time and data in order to converge.", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 78, "text": "Tables 4 and 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "JuriBERT Small also outperforms BARThez on the chambers and sections evaluation task, which is pre-trained on 66GB of French raw text and usually used for generative tasks. On the mati\u00e8res classification task BARThez is the dominant model with JuriBERT Small being second. We infer that the complexity of the second task benefits more from the robustness and size of BARThez than from the specific-domain nature of JuriBERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "Comparing our models with the same architectures, it becomes apparent that all task-specific Ju-riBERT models perform better than their equivalent domain-specific JuriBERT models besides using only 4GB of pre-training data. The results confirm that a BERT model pre-trained from scratch only on the corpus that is then used for fine-tuning can perform better than a domain-specific one on the same task as we expected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "JuriBERT-FP outperforms JuriBERT Base and achieves similar results with CamemBERT Base on the chambers and sections classification task. This shows that further pre-training a general purpose language model can have better results than training from scratch. However, it did not manage to outperform JuriBERT Small in both tasks, which can be attributed to the smaller batch size used during pre-training and to the size of the model as mentioned before for JuriBERT Base. Unfortunately, there are no smaller versions of Camem-BERT available to further test this theory. On the mati\u00e8res classification task, JuriBERT-FP still outperforms JuriBERT Base. On the contrary, it performs worse than CamemBERT Base. 
Along with the state-of-the-art results of BARThez, this leads us to believe that in order to achieve better results in more complex tasks JuriBERT models require more pre-training corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "All in all, JuriBERT Small achieves equivalent results with previous larger generic language models with an accuracy of 83.95% on the first task and 71.80% on the second task on the test data. Ju-riBERT Small, JuriBERT Mini and even JuriBERT Tiny all outperform JuriBERT Base, proving that smaller models architecturally can achieve comparable, if not better, results when we are training on very domain-specific data. A larger model, not only requires more resources to be trained, but is also not as efficient as its smaller equivalents. This is of major importance for researchers with limited resources available. Furthermore, JuriBERT-FP achieves better results than JuriBERT Base in both tasks. This leads us to infer that pre-training from an existing language model can be a major advantage, as opposed to randomly initialising our weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "As we mentioned before both JuriBERT Base and JuriBERT-FP have been pre-trained using smaller batch sizes than the other models due to limited resources. We acknowledge that this may have affected their performance compared to the other models. However, we believe that their lower performance can also be attributed to their size as larger models are computationally heavier and thus require more resources to converge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations", "sec_num": "7" }, { "text": "Acquiring large scale legal corpora, especially for a language other than English, has proven to be challenging due to their confidential nature. For this reason, JuriBERT models were fine-tuned on two downstream evaluation tasks that contain data from the pre-training dataset collection. Further testing shall be required in order to validate the performance of our models on different tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations", "sec_num": "7" }, { "text": "The differences in performance between the generic language models and the newly created JuriBERT models are very small. More specifically, only JuriBERT Small manages to outperform CamemBERT Base and Barthez with a difference in accuracy of 0.73%. We attribute this limitation in the use of much less pre-training data. However we emphasize that JuriBERT manages to achieve similar results despite the difference in pre-training corpora size. Thus, we expect JuriBERT to achieve better results in the future provided that we further pre-train with more data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations", "sec_num": "7" }, { "text": "We introduce a new set of domain-specific Bert Models pre-trained from scratch on French legal text. We conclude that our task is very specific and as a result it does not benefit from general purpose models like CamemBERT. We also show the superiority of much smaller models when training on very specific sub-languages like legal text. It becomes apparent that large architectures may in fact not be necessary when the targeted sub-language is very specific. This is important for researchers with lower resources available, as smaller models are fine-tuned a lot faster on the downstream tasks. 
Furthermore, we show that a BERT model pre-trained from scratch on task-specific data and then finetuned on this very task can perform better than a domain-specific model that has been pre-trained on a lot more data. We point out of course that a domain-specific model can outperform a taskspecific one on other tasks and is generally preferred when we need a multi-purpose BERT model with many applications in the French legal domain. In future work, we plan to further explore the potential of JuriBERT in other tasks and as a result prove its superiority over the task-specific one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "8" }, { "text": "You can find the models in http:// master2-bigdata.polytechnique.fr/ FrenchLinguisticResources/resources# juribert", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used Heritrix, a crawler that respects the robots.txt exclusion directives and META nofollow tags. See https: //github.com/internetarchive/heritrix33 https://www.legifrance.gouv.fr/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Evaluation of word embeddings from large-scale french web content", "authors": [ { "first": "Christos", "middle": [], "last": "Hadi Abdine", "suffix": "" }, { "first": "", "middle": [], "last": "Xypolopoulos", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hadi Abdine, Christos Xypolopoulos, Moussa Kamal Eddine, and Michalis Vazirgiannis. 2021. Evalu- ation of word embeddings from large-scale french web content. CoRR, abs/2105.01990.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "LegalDB: Long DistilBERT for legal document classification", "authors": [ { "first": "Purbid", "middle": [], "last": "Bambroo", "suffix": "" }, { "first": "Aditi", "middle": [], "last": "Awasthi", "suffix": "" } ], "year": 2021, "venue": "2021 International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT)", "volume": "", "issue": "", "pages": "1--4", "other_ids": { "DOI": [ "10.1109/ICAECT49130.2021.9392558" ] }, "num": null, "urls": [], "raw_text": "Purbid Bambroo and Aditi Awasthi. 2021. LegalDB: Long DistilBERT for legal document classification. In 2021 International Conference on Advances in Electrical, Computing, Communication and Sustain- able Technologies (ICAECT), pages 1-4.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "SciB-ERT: A pretrained language model for scientific text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3615--3620", "other_ids": { "DOI": [ "10.18653/v1/D19-1371" ] }, "num": null, "urls": [], "raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A pretrained language model for scientific text. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3615- 3620, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos", "authors": [ { "first": "Ilias", "middle": [], "last": "Chalkidis", "suffix": "" }, { "first": "Manos", "middle": [], "last": "Fergadiotis", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "2898--2904", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.261" ] }, "num": null, "urls": [], "raw_text": "Ilias Chalkidis, Manos Fergadiotis, Prodromos Malaka- siotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: The muppets straight out of law school. In Findings of the Association for Com- putational Linguistics: EMNLP 2020, pages 2898- 2904, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BARThez: a skilled pretrained French sequence-to-sequence model", "authors": [ { "first": "Antoine J.-P", "middle": [], "last": "Moussa Kamal Eddine", "suffix": "" }, { "first": "Michalis", "middle": [], "last": "Tixier", "suffix": "" }, { "first": "", "middle": [], "last": "Vazirgiannis", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moussa Kamal Eddine, Antoine J.-P. Tixier, and Michalis Vazirgiannis. 2020. BARThez: a skilled pretrained French sequence-to-sequence model. CoRR, abs/2010.12321.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT goes to law school: Quantifying the competitive advantage of access to large legal corpora in contract understanding", "authors": [ { "first": "Emad", "middle": [], "last": "Elwany", "suffix": "" }, { "first": "Dave", "middle": [], "last": "Moore", "suffix": "" }, { "first": "Gaurav", "middle": [], "last": "Oberoi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emad Elwany, Dave Moore, and Gaurav Oberoi. 2019. 
BERT goes to law school: Quantifying the compet- itive advantage of access to large legal corpora in contract understanding. CoRR, abs/1911.00473.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "CriminelBart: A French Canadian legal language model specialized in criminal law", "authors": [ { "first": "Nicolas", "middle": [], "last": "Garneau", "suffix": "" }, { "first": "Eve", "middle": [], "last": "Gaumond", "suffix": "" }, { "first": "Luc", "middle": [], "last": "Lamontagne", "suffix": "" }, { "first": "Pierre-Luc", "middle": [], "last": "D\u00e9ziel", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, ICAIL '21", "volume": "", "issue": "", "pages": "256--257", "other_ids": { "DOI": [ "10.1145/3462757.3466147" ] }, "num": null, "urls": [], "raw_text": "Nicolas Garneau, Eve Gaumond, Luc Lamontagne, and Pierre-Luc D\u00e9ziel. 2021. CriminelBart: A French Canadian legal language model specialized in crim- inal law. In Proceedings of the Eighteenth Interna- tional Conference on Artificial Intelligence and Law, ICAIL '21, page 256-257, New York, NY, USA. As- sociation for Computing Machinery.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonjin", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sunkyu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chan", "middle": [], "last": "Ho So", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: a pre-trained biomedical language representation model for biomedical text mining. CoRR, abs/1901.08746.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "\u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot", "authors": [ { "first": "Louis", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Muller", "suffix": "" }, { "first": "Pedro Javier Ortiz", "middle": [], "last": "Su\u00e1rez", "suffix": "" }, { "first": "Yoann", "middle": [], "last": "Dupont", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Romary", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7203--7219", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.645" ] }, "num": null, "urls": [], "raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Or- tiz Su\u00e1rez, Yoann Dupont, Laurent Romary, \u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203-7219, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Exploring the use of text classification in the legal domain", "authors": [ { "first": "Octavia-Maria", "middle": [], "last": "Sulea", "suffix": "" }, { "first": "Marcos", "middle": [], "last": "Zampieri", "suffix": "" }, { "first": "Shervin", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "Mihaela", "middle": [], "last": "Vela", "suffix": "" }, { "first": "P", "middle": [], "last": "Liviu", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Octavia-Maria Sulea, Marcos Zampieri, Shervin Mal- masi, Mihaela Vela, Liviu P. Dinu, and Josef van Genabith. 2017. Exploring the use of text classifi- cation in the legal domain. CoRR, abs/1710.09306.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Finbert: A pretrained language model for financial communications", "authors": [ { "first": "Yi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Christopher Siy", "suffix": "" }, { "first": "Allen", "middle": [], "last": "Uy", "suffix": "" }, { "first": "", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Yang, Mark Christopher Siy Uy, and Allen Huang. 2020. Finbert: A pretrained language model for fi- nancial communications. CoRR, abs/2006.08097.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
Class         Support
CO            28 198
C1_Section1   14 650
C1_Section2   16 730
C2_Section1   11 525
C2_Section2    9 975
C2_Section3   13 736
C3_Section1   16 176
C3_Section2   12 282
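The class distribution above is heavily imbalanced, and the paper reports splitting this task into 14% development and 16% test data, with stratification described for the matières task so that every class is represented in each subset. Below is a minimal sketch of such a stratified split; scikit-learn is an assumption (the paper does not name the tooling), and the function name is hypothetical.

```python
from sklearn.model_selection import train_test_split

# texts: the 'ALORS QUE' paragraphs, labels: their chamber/section (or matiere) ids.
# Classes with fewer than 3 examples are assumed to be dropped beforehand, as the
# paper does for the rarest matieres, so every class can appear in all three subsets.
def stratified_split(texts, labels, dev_frac=0.14, test_frac=0.16, seed=42):
    x_rest, x_test, y_rest, y_test = train_test_split(
        texts, labels, test_size=test_frac, stratify=labels, random_state=seed)
    # Rescale the development fraction so it is a share of the full dataset.
    dev_share = dev_frac / (1.0 - test_frac)
    x_train, x_dev, y_train, y_dev = train_test_split(
        x_rest, y_rest, test_size=dev_share, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_dev, y_dev), (x_test, y_test)
```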
", "type_str": "table", "html": null, "num": null, "text": "The classes represent" }, "TABREF1": { "content": "
Figure 1: 10 Recessive Matieres (bar chart; y-axis: class support, from 0 to 7 examples; x-axis: the matieres MESINS, EPHP, COMPUB, FONDA, JURID, FONCT, ABSEN, EPHS, PRISE, ECAP)
", "type_str": "table", "html": null, "num": null, "text": "Chambers and Sections of the Court of Cassation and data support" }, "TABREF2": { "content": "
Table 2: Architectural comparison of JuriBERT models

128 hidden units and 2 attention heads (6M parameters), JuriBERT Mini with 4 layers, 256 hidden units and 4 attention heads (15M parameters), JuriBERT Small with 6 layers, 512 hidden units and 8 attention heads (42M parameters) and JuriBERT Base with 12 layers, 768 hidden units and 12 attention heads (110M parameters). JuriBERT Base uses the exact same architecture as CamemBERT Base.
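To make the four sizes concrete, the following is a minimal sketch of how such configurations could be built with the Hugging Face transformers library. The paper does not state which implementation class was used; RobertaConfig is assumed here because the tokenizer follows the RoBERTa template, and the 4x feed-forward width is a conventional default rather than a reported value.

```python
from transformers import RobertaConfig, RobertaForMaskedLM

# Hypothetical helper mapping the reported layer count (L), hidden size (H) and
# attention heads (A) of each JuriBERT variant to a masked-language-model config.
def make_juribert_config(num_layers, hidden_size, num_heads, vocab_size=32000):
    return RobertaConfig(
        vocab_size=vocab_size,              # tokenizer vocabulary restricted to 32,000 tokens
        num_hidden_layers=num_layers,       # L
        hidden_size=hidden_size,            # H
        num_attention_heads=num_heads,      # A
        intermediate_size=4 * hidden_size,  # assumption: standard 4x feed-forward width
        max_position_embeddings=514,        # RoBERTa-style default for 512-token inputs
    )

SIZES = {'tiny': (2, 128, 2), 'mini': (4, 256, 4), 'small': (6, 512, 8), 'base': (12, 768, 12)}

config = make_juribert_config(*SIZES['small'])
model = RobertaForMaskedLM(config)  # randomly initialised, ready for MLM pre-training
```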
", "type_str": "table", "html": null, "num": null, "text": "" }, "TABREF3": { "content": "", "type_str": "table", "html": null, "num": null, "text": "Size of pre-training corpora used by different models" }, "TABREF5": { "content": "
Table 4: Accuracy of models on the chambers and sections classification task

Table 5: Accuracy of models on the matieres classification task
Model            Lrate  Dev    Test
CamemBERT Base   3e-5   71.64  71.66
BARThez          2e-5   72.17  72.09
JuriBERT Small   2e-5   71.67  71.80
JuriBERT Base    3e-5   70.28  70.38
JuriBERT-FP      2e-5   70.99  71.21
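The learning rates above were chosen by grid search over {2e-5, 3e-5, 4e-5, 5e-5}. As a minimal sketch of the fine-tuning classifier described in the paper, a dense layer with tanh, dropout fixed at 0.1 and an output projection over the first-token representation, assuming PyTorch and the Hugging Face transformers API; the class and checkpoint names are hypothetical, and softmax is left to the cross-entropy loss rather than applied as an explicit layer.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class ClassificationHead(nn.Module):
    # Dropout -> dense + tanh -> dropout -> linear projection to class logits.
    def __init__(self, hidden_size, num_classes, dropout=0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.out_proj = nn.Linear(hidden_size, num_classes)

    def forward(self, cls_embedding):
        x = torch.tanh(self.dense(self.dropout(cls_embedding)))
        return self.out_proj(self.dropout(x))

class PleadingsClassifier(nn.Module):
    # Encoder (e.g. a pre-trained JuriBERT checkpoint) plus the head above.
    def __init__(self, model_name_or_path, num_classes):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name_or_path)
        self.head = ClassificationHead(self.encoder.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        return self.head(hidden[:, 0])  # representation of the first ([CLS]/<s>) token
```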
", "type_str": "table", "html": null, "num": null, "text": "" }, "TABREF6": { "content": "", "type_str": "table", "html": null, "num": null, "text": "Accuracy of models on the matieres classification task" } } } }