{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:47:00.609033Z" }, "title": "GerDaLIR: A German Dataset for Legal Information Retrieval", "authors": [ { "first": "Marco", "middle": [], "last": "Wrzalik", "suffix": "", "affiliation": { "laboratory": "", "institution": "RheinMain University of Applied Sciences", "location": { "country": "Germany" } }, "email": "" }, { "first": "Dirk", "middle": [], "last": "Krechel", "suffix": "", "affiliation": { "laboratory": "", "institution": "RheinMain University of Applied Sciences", "location": { "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present GerDaLIR, a German Dataset for Legal Information Retrieval based on case documents from the open legal information platform Open Legal Data. The dataset consists of 123K queries, each labelled with at least one relevant document in a collection of 131K case documents. We conduct several baseline experiments including BM25 and a state-of-the-art neural re-ranker. With our dataset, we aim to provide a standardized benchmark for German LIR and promote open research in this area. Beyond that, our dataset comprises sufficient training data to be used as a downstream task for German or multilingual language models.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We present GerDaLIR, a German Dataset for Legal Information Retrieval based on case documents from the open legal information platform Open Legal Data. The dataset consists of 123K queries, each labelled with at least one relevant document in a collection of 131K case documents. We conduct several baseline experiments including BM25 and a state-of-the-art neural re-ranker. With our dataset, we aim to provide a standardized benchmark for German LIR and promote open research in this area. Beyond that, our dataset comprises sufficient training data to be used as a downstream task for German or multilingual language models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "There are few non-English datasets dedicated to Natural Legal Language Processing (NLLP) or Legal Information Retrieval (LIR). To our knowledge, not a single dataset exists that provides a standardized benchmark for LIR models on the German language. To this end we contribute GerDaLIR, a legal document retrieval dataset comprising a large document collection and corresponding queries forming a document ranking task. We provide a large amount of training data such that both unsupervised and supervised methods can be benchmarked. This also enables GerDaLIR to be used as a downstream task for German or multilingual language models. The task provided is a precedent retrieval task. As illustrated in Figure 1 , we build GerDaLIR by extracting passages that reference other cases. For that we utilize 201,825 cases from the open legal information platform Open Legal Data (Ostendorff et al., 2020) . We present baseline experiments on classic term-based retrieval methods, a semantic search approach based on word embeddings and a transformer-based re-ranker giving an orientation to other researchers using our dataset. In contrast to other LIR datasets based on precedent case re- trieval, GerDaLIR offers the following unique characteristics: Large Corpus Size. 
With a total of 144K relevance labels for 123K query passages and a collection of 131K documents comprising over 3M passages, GerDaLIR is -to our knowledge -bigger than any other LIR dataset. Its size enables GerDaLIR to be used for full-ranking evaluation. German Language. The German language is GerDaLIR's most prominent feature and fills a gap in the community. Furthermore, in combination with the large training set, GerDaLIR can be used as a downstream task for German or multilingual language models, of which there are quite few. Query Passages. Most other LIR datasets based on precedent retrieval provide entire case documents to be used as queries. It may be unclear to which part of a given case relevant cases should be retrieved. As GerDaLIR provides query passages rather than whole documents, it better reflects a practical use case.", "cite_spans": [ { "start": 875, "end": 900, "text": "(Ostendorff et al., 2020)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 704, "end": 712, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The download links and descriptions to the format of GerDaLIR can be accessed via GitHub 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are several sources of LIR datasets and tasks. In the following, we outline those based on precedent retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Datasets", "sec_num": "2" }, { "text": "COLIEE 2020 Task 1. The Competition on Legal Information Extraction/Entailment (COLIEE) is a workshop that annually provides a number of tasks in the areas of legal document retrieval, question answering and entailment. COLIEE 2020 Task 1 (Rabelo et al., 2020 ) is a re-ranking task that provides for training: \"520 base cases, each with 200 candidate cases from which the participants must identify those that should be noticed with respect to the base case\". From the total of 104,000 candidates, 2,680 are labelled as positive. For testing they provide 130 base cases, a total of 26,000 candidates and 646 positive labels. The case documents are written in English and originate from the Federal Court of Canada. The workshop also provides tasks derived from the Japanese jurisdiction including original Japanese texts as well as English translations.", "cite_spans": [ { "start": 239, "end": 259, "text": "(Rabelo et al., 2020", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Other Datasets", "sec_num": "2" }, { "text": "SigmaLaw. This dataset originates from the paper Legal Document Retrieval using Document Vector Embeddings and Deep Learning (Sugathadasa et al., 2018) . It comprises 2,500 case documents and a citation graph indicating for each case which of the other cases are considered relevant. FIRE 2017 IRLED. The FIRE 2017 IRLED Track presents a precedence retrieval task comprising 200 query cases, 2000 collection cases and 1000 positive relevance labels. The provided case documents originate from the Indian Supreme Court, which uses the English language in their proceedings.", "cite_spans": [ { "start": 125, "end": 151, "text": "(Sugathadasa et al., 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Other Datasets", "sec_num": "2" }, { "text": "GerDaLIR is based on parsed references in German case documents taken from Open Legal Data. 
Although precedent cases have no binding effect in the German legal system, references to prior cases are very common and arguably play an important role in supporting a line of argument. By definition, such referenced cases are relevant to the case at hand. With this in mind, the idea behind GerDaLIR's task is simple: passages containing one or more references to known cases become queries, while the referenced cases are labelled as relevant. However, if a passage is used as a query, the document the passage originates from should not be available as a retrievable collection document. To achieve that, we classify case documents -as depicted in Figure 1 -into the following classes: Unknown documents belong to cases that we have seen references to, but that are not part of Open Legal Data. Collection documents comprise the cases that will be indexed for the retrieval task, mainly those that only refer to unknown documents. Query documents are the documents from which queries are sampled; assigned to this class are cases that contain references to collection documents. If a case document refers to other query cases, but not to collection cases, it is also classified as a collection document. The case documents are divided into passages along margin numbers. The references belonging to a passage regularly appear in the following margin number; such passages typically start with Vgl. (\"compare\") or Siehe (\"see\"). We use these and further indicator words at the beginning of a passage to detect such referential passages and assign their references to the previous passage, which is assumed to contain the corresponding statement or line of argument. The text describing the references, however, is not added to the passage, since we want models to rely on natural language rather than exploiting references or parts of them. For this reason, we attempt to replace every reference, including references to statutes, with a [REF] token. However, a small portion of references that we were unable to parse remains in the text. From the final text, we also remove any braced content, since it mostly contains verbosely written references that are difficult to sanitize otherwise. After that, we collect all passages from the query documents that are marked with references to one or more collection documents. These form a set of multi-sentence queries, each labelled with at least one relevant collection document. Finally, we perform a 0.8/0.1/0.1 split on the queries for training, development and testing, respectively. The resulting sizes of GerDaLIR's collection, queries and labels are summarized in Table 1.", "cite_spans": [ { "start": 2012, "end": 2017, "text": "[REF]", "ref_id": null } ], "ref_spans": [ { "start": 731, "end": 739, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 2708, "end": 2716, "text": "Table 1.", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Dataset Generation", "sec_num": "3" }, { "text": "We conduct a series of baseline experiments demonstrating that GerDaLIR can be used to benchmark retrieval methods or to evaluate the language models used by them. The resulting measures also serve as an orientation to other researchers using the dataset. The methods considered are described below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Methods", "sec_num": "4" }, { "text": "TF-IDF and BM25 are known as term-based or sparse retrieval methods. They are efficiently realized using inverted indexing. With that, they rely on exact term matches, resulting in a tendency to miss relevant items. This tendency is often mitigated by employing a stemmer or lemmatizer that normalizes each word to its base form. More detailed information can be found in Introduction to Information Retrieval by Manning et al.
(2008).", "cite_spans": [ { "start": 413, "end": 434, "text": "Manning et al. (2008)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "BM25 and TF-IDF", "sec_num": "4.1" }, { "text": "We introduce the Word Centroid Similarity (WCS), an unsupervised semantic textual similarity measure based on word embeddings (Mikolov et al., 2013). Retrieval with WCS can be described as a dense retrieval method, since a dense representation is assigned to each query, document or passage. This vector is the centroid, or mean vector, of the embeddings of the words that occur in the given text. Based on the centroids, we calculate the relevance score using the cosine similarity measure. Aggregating word embeddings for the measurement of textual similarity has been studied in the past with various aggregation methods, word embedding models and vector similarity or distance measures (Kusner et al., 2015; Glasgow et al., 2016; R\u00fcckl\u00e9 et al., 2018; Landthaler et al., 2018). We include WCS as a semantic search counterpart to the term-based retrieval methods. With that, we also demonstrate how GerDaLIR can be used to evaluate word embeddings in terms of their utility for information retrieval.", "cite_spans": [ { "start": 126, "end": 148, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF10" }, { "start": 689, "end": 710, "text": "(Kusner et al., 2015;", "ref_id": "BIBREF5" }, { "start": 711, "end": 732, "text": "Glasgow et al., 2016;", "ref_id": "BIBREF4" }, { "start": 733, "end": 753, "text": "R\u00fcckl\u00e9 et al., 2018;", "ref_id": "BIBREF15" }, { "start": 754, "end": 778, "text": "Landthaler et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Word Centroid Similarity", "sec_num": "4.2" }, { "text": "We conduct neural re-ranking experiments with a simple binary relevance classifier based on Transformer Encoders (Vaswani et al., 2017) that follows BERT's cross-encoding design for sentence pair classification (Devlin et al., 2019). Rankings result from the order of confidence with which given query-passage pairs are classified as relevant. Nogueira and Cho (2019) provide a more detailed description of the implementation of this model.", "cite_spans": [ { "start": 113, "end": 134, "text": "(Vaswani et al., 2017", "ref_id": "BIBREF17" }, { "start": 212, "end": 233, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Re-ranking", "sec_num": "4.3" }, { "text": "In this section, we outline the experimental setup, describe the implementation of the methods described above, and briefly discuss the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We measure standard information retrieval metrics for the evaluation of our baseline models. With the mean reciprocal rank cut off at the tenth position (MRR@10) and the normalized discounted cumulative gain cut off at the twentieth position (nDCG@20), we measure the ranking quality at the top positions. MRR@10 only considers the first hit and penalizes strongly for each rank below rank one, while nDCG@20 takes all positively labelled documents into account and penalizes more softly; a minimal sketch of these metrics is given below.
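To make these measures concrete, the following minimal sketch (plain Python with hypothetical ranking and label variables, not the evaluation code behind the reported numbers) computes MRR@10, nDCG@20 with binary gains, and recall@k for a single query; corpus-level scores are the means over all queries.

```python
import math

def mrr_at_k(ranking, relevant, k=10):
    """Reciprocal rank of the first relevant document within the top k, else 0."""
    for rank, doc_id in enumerate(ranking[:k], start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0

def ndcg_at_k(ranking, relevant, k=20):
    """nDCG with binary gains: every relevant document counts with gain 1."""
    dcg = sum(1.0 / math.log2(rank + 1)
              for rank, doc_id in enumerate(ranking[:k], start=1)
              if doc_id in relevant)
    ideal = sum(1.0 / math.log2(rank + 1)
                for rank in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal > 0 else 0.0

def recall_at_k(ranking, relevant, k=1000):
    """Fraction of the relevant documents that appear in the top k."""
    return len(set(ranking[:k]) & set(relevant)) / len(relevant)

# Hypothetical example: one ranked list of document ids and three relevant documents.
ranking = ["d7", "d3", "d9", "d1", "d4"]
relevant = {"d3", "d4", "d8"}
print(mrr_at_k(ranking, relevant), ndcg_at_k(ranking, relevant), recall_at_k(ranking, relevant, k=5))
```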
Complementary to the ranking quality measures, we measure recall cut off at positions 100 and 1000 to coarsely illustrate the distribution of positive documents and the portion of documents that were missed completely.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "5.1" }, { "text": "There are various good reasons to perform passage retrieval although the actual targets are documents. In our work, the reason is two-fold: First, depending on the model, it can result in better rankings. Second, the model at hand might not be able to process whole documents; the neural re-ranker we use is limited to input sequences of 512 tokens. To cast passage rankings to document rankings, we map passages back to the documents they originate from and perform max-pooling on the scores along documents. However, since many documents are represented by multiple passages, the document rankings resulting from pooling are shorter than the initial passage rankings. For this reason, we retrieve 2000 passages although we only analyze top-1000 document rankings. For the re-ranking, however, we utilize only the first 1000 passages as candidates (including multiple occurrences of the same document) and cast them to document rankings afterwards.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "With Passages to Document Rankings", "sec_num": "5.2" }, { "text": "We use Elasticsearch 2 to perform TF-IDF and BM25 retrieval. Its German analyzer includes a pre-processing pipeline that removes stop-words and performs stemming in accordance with the German language. BM25's parameters k1 and b are tuned within fixed ranges, with b constrained to [0.3, 1]. For that, we employ the Bayesian optimization algorithm provided by Optuna 3 , with 100 trials and nDCG@20 as the metric being optimized. The default parameters and those resulting from the tuning are listed in Table 2. It is worth noting that for term-based retrieval methods, document-wise retrieval (D) outperforms passage-wise retrieval (P), as shown in Table 2. We hypothesize that in relevant documents, the important keywords occur more frequently throughout the entire document, while in other, non-relevant documents, they occur only marginally in a few passages. This can be exploited through the term frequency in document-wise retrieval.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 281, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 428, "end": 435, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "TF-IDF and BM25", "sec_num": "5.3" }, { "text": "We employ GloVe (Pennington et al., 2014) and fastText (Bojanowski et al., 2017) word embeddings in our WCS retrieval experiments. For that, we train the embeddings on the entire lowercased text of the cases in Open Legal Data, which comprises more than 465 million words. The training is performed using the original implementations of GloVe 4 and fastText 5 . For inference, we filter stopwords and normalize each word centroid to unit L2 norm before indexing it in a faiss 6 inner-product index, which thereby realizes a cosine similarity search. As shown in Table 2, fastText slightly outperforms GloVe.
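As a rough illustration of this WCS setup, the sketch below builds centroid vectors from fastText embeddings, L2-normalizes them, and searches a faiss inner-product index so that the scores correspond to cosine similarity. It is a minimal sketch under stated assumptions (the model file name, the example passages and the tiny stopword list are hypothetical), not the exact experimental code.

```python
import numpy as np
import faiss      # https://faiss.ai/
import fasttext   # https://fasttext.cc/

model = fasttext.load_model("legal_de_fasttext.bin")   # hypothetical model trained on the case texts
STOPWORDS = {"der", "die", "das", "und", "in"}          # toy list; a full German stopword list is assumed

def centroid(text):
    """Word centroid: mean embedding of all non-stopword tokens, normalized to unit L2 norm."""
    words = [w for w in text.lower().split() if w not in STOPWORDS]
    vecs = np.array([model.get_word_vector(w) for w in words], dtype="float32")
    c = vecs.mean(axis=0)
    return (c / (np.linalg.norm(c) + 1e-9)).astype("float32")

# Index the collection passages; inner product on unit vectors equals cosine similarity.
passages = ["das gericht wies die klage ab ...", "der antrag auf einstweilige anordnung ..."]
index = faiss.IndexFlatIP(model.get_dimension())
index.add(np.stack([centroid(p) for p in passages]))

# Retrieve the most similar passages for a query passage.
query = centroid("die klage wurde abgewiesen").reshape(1, -1)
scores, ids = index.search(query, 2)
print(list(zip(ids[0].tolist(), scores[0].tolist())))
```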
We hypothesize that fastText is favorable for the German language, since it virtually realizes a compound splitter: fastText expands each word by character n-grams and calculates an aggregated representation using the n-grams and a representation for the entire word. Furthermore, if a word is out of vocabulary, there is a good chance of generating a meaningful representation using the character n-grams. The minimum and maximum size of those character n-grams are hyperparameters. We found that a value of 5 for both the minimum and maximum n-gram size performs best among the tested settings.", "cite_spans": [ { "start": 16, "end": 41, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF13" }, { "start": 55, "end": 80, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 568, "end": 575, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Word Centroid Similarity", "sec_num": "5.4" }, { "text": "In recent years, many modifications to the BERT model and its training procedure have been proposed, such as ALBERT (Lan et al., 2020) or RoBERTa (Liu et al., 2019). The effectiveness gains of those modifications are often demonstrated on downstream tasks. In our neural re-ranking experiment, we compare a pre-trained BERT model with a pre-trained ELECTRA discriminator (Clark et al., 2020). We use the \"base\" variants of BERT and ELECTRA trained by Chan et al. (2020), which can be accessed via Hugging Face 7 with the identifiers deepset/gbert-base and deepset/gelectra-base. During fine-tuning, we use top-100 BM25 rankings from which we randomly sample one negative candidate for each positive example. The pre-trained models are fine-tuned for 100 epochs on GerDaLIR's training data with a learning rate of 1e-4 and an effective batch size of 768 samples (e.g. batches of 16 samples with 48 gradient accumulation steps per update). The final models are tested on top-1000 BM25 passage rankings as candidates. Score max-pooling is not applied to these candidate lists but only to the final re-ranked lists. Since many documents are represented by multiple passages, the final document rankings are considerably shorter than 1000 documents, which negatively affects recall@1000. Due to the sequence length limitation, document-wise retrieval cannot be performed directly with BERT or ELECTRA. Passages that exceed this limitation are divided along sentence boundaries, and the maximum score over the resulting segments is applied to the passage. As shown in Table 2, the ELECTRA model achieves higher re-ranking quality in terms of MRR@10 and nDCG@20 than the BERT model, which is consistent with external experiments on other downstream tasks (Clark et al., 2020; Chan et al., 2020). A schematic sketch of this scoring and pooling scheme is shown below.", "cite_spans": [ { "start": 114, "end": 132, "text": "(Lan et al., 2020)", "ref_id": "BIBREF6" }, { "start": 144, "end": 162, "text": "(Liu et al., 2019)", "ref_id": "BIBREF8" }, { "start": 367, "end": 387, "text": "(Clark et al., 2020)", "ref_id": "BIBREF2" }, { "start": 448, "end": 466, "text": "Chan et al. (2020)", "ref_id": "BIBREF1" }, { "start": 1734, "end": 1754, "text": "(Clark et al., 2020;", "ref_id": "BIBREF2" }, { "start": 1755, "end": 1773, "text": "Chan et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 1527, "end": 1534, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Neural Re-ranking", "sec_num": "5.5" }, { "text": "We present GerDaLIR, a dataset that fills the gap of a standardized IR benchmark for the German legal domain.
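To make the re-ranking scheme of Section 5.5 concrete, the following sketch scores query-passage candidate pairs with a cross-encoder via the Hugging Face transformers sequence-classification API and max-pools the passage scores into document scores. The model identifier is a placeholder for a cross-encoder fine-tuned on GerDaLIR (e.g. starting from deepset/gelectra-base); this is an illustrative sketch, not the training or evaluation code used for the reported results.

```python
from collections import defaultdict
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder identifier for a binary relevance cross-encoder fine-tuned on GerDaLIR.
MODEL_NAME = "my-gerdalir-reranker"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def rerank(query, candidates):
    """candidates: list of (doc_id, passage_text) pairs, e.g. the top-1000 BM25 passages."""
    doc_scores = defaultdict(lambda: float("-inf"))
    with torch.no_grad():
        for doc_id, passage in candidates:
            enc = tokenizer(query, passage, truncation=True, max_length=512, return_tensors="pt")
            logits = model(**enc).logits
            score = logits.softmax(-1)[0, 1].item()               # confidence for the "relevant" class
            doc_scores[doc_id] = max(doc_scores[doc_id], score)   # max-pool passage scores per document
    return sorted(doc_scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical usage with one query passage and two candidate passages from BM25.
print(rerank("der beklagte haftet nach [REF] ...",
             [("case_42", "die haftung setzt voraus ..."), ("case_7", "der anspruch ist verjaehrt ...")]))
```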
We provide several baselines with which other researchers can compare their results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "6" }, { "text": "Our experiments demonstrate the use of GerDaLIR as a downstream task for German or multilingual language models. In future work, we plan to investigate the importance of in-domain pre-training for neural LIR models. We also intend to explore unsupervised methods that effectively leverage language models for domain-specific information retrieval, as well as approaches combining these with traditional term-based retrieval methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "6" }, { "text": "https://github.com/lavis-nlp/GerDaLIR", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.elastic.co/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://optuna.org/ 4 https://github.com/stanfordnlp/GloVe 5 https://fasttext.cc/ 6 https://faiss.ai/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/deepset/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tom\u00e1s", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Trans. Assoc. Comput. Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tom\u00e1s Mikolov. 2017. Enriching word vectors with subword information. Trans. Assoc. Comput. Linguistics, 5:135-146.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "German's next language model", "authors": [ { "first": "Branden", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Schweter", "suffix": "" }, { "first": "Timo", "middle": [], "last": "M\u00f6ller", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "2020", "issue": "", "pages": "6788--6796", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.598" ] }, "num": null, "urls": [], "raw_text": "Branden Chan, Stefan Schweter, and Timo M\u00f6ller. 2020. German's next language model. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 6788-6796.
International Committee on Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "ELECTRA: pretraining text encoders as discriminators rather than generators", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "8th International Conference on Learning Representations", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pre- training text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Evaluating semantic models with word-sentence relatedness", "authors": [ { "first": "Kimberly", "middle": [], "last": "Glasgow", "suffix": "" }, { "first": "Matthew", "middle": [ "J" ], "last": "Roos", "suffix": "" }, { "first": "Amy", "middle": [ "J" ], "last": "Haufler", "suffix": "" }, { "first": "Mark", "middle": [ "A" ], "last": "Chevillet", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wolmetz", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kimberly Glasgow, Matthew J. Roos, Amy J. Hau- fler, Mark A. Chevillet, and Michael Wolmetz. 2016. Evaluating semantic models with word-sentence re- latedness. CoRR, abs/1603.07253.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "From word embeddings to document distances", "authors": [ { "first": "Matt", "middle": [ "J" ], "last": "Kusner", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Nicholas", "middle": [ "I" ], "last": "Kolkin", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on Machine Learning", "volume": "37", "issue": "", "pages": "957--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt J. 
Kusner, Yu Sun, Nicholas I. Kolkin, and Kil- ian Q. Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32nd In- ternational Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 957-966. JMLR.org.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "ALBERT: A lite BERT for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2020, "venue": "8th International Conference on Learning Representations", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th Inter- national Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Semantic text matching of contract clauses and legal comments in tenancy law", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Landthaler", "suffix": "" }, { "first": "E", "middle": [], "last": "Glaser", "suffix": "" }, { "first": "F", "middle": [], "last": "Scepankova", "suffix": "" }, { "first": "", "middle": [], "last": "Matthes", "suffix": "" } ], "year": 2018, "venue": "Tagunsband IRIS: Internationales Rechtsinformatik Symposium", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.3233/978-1-61499-935-5-200" ] }, "num": null, "urls": [], "raw_text": "J\u00f6rg Landthaler, I Glaser, E Scepankova, and F Matthes. 2018. Semantic text matching of contract clauses and legal comments in tenancy law. In Tagunsband IRIS: Internationales Rechtsinformatik Symposium.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Roberta: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. 
CoRR, abs/1907.11692.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Introduction to information retrieval", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Prabhakar", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1017/CBO9780511809071" ] }, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to information retrieval. Cambridge University Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Linguistic regularities in continuous space word representations", "authors": [ { "first": "Tom\u00e1s", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2013, "venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings", "volume": "", "issue": "", "pages": "746--751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1s Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Human Language Tech- nologies: Conference of the North American Chap- ter of the Association of Computational Linguis- tics, Proceedings, June 9-14, 2013, Atlanta, Georgia, USA, pages 746-751. The Association for Computa- tional Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Passage re-ranking with BERT. CoRR", "authors": [ { "first": "Rodrigo", "middle": [], "last": "Nogueira", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. CoRR, abs/1901.04085.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Towards an open platform for legal information", "authors": [ { "first": "Malte", "middle": [], "last": "Ostendorff", "suffix": "" }, { "first": "Till", "middle": [], "last": "Blume", "suffix": "" }, { "first": "Saskia", "middle": [], "last": "Ostendorff", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020, JCDL '20", "volume": "", "issue": "", "pages": "385--388", "other_ids": { "DOI": [ "10.1145/3383583.3398616" ] }, "num": null, "urls": [], "raw_text": "Malte Ostendorff, Till Blume, and Saskia Ostendorff. 2020. Towards an open platform for legal informa- tion. In Proceedings of the ACM/IEEE Joint Con- ference on Digital Libraries in 2020, JCDL '20, page 385-388, New York, NY, USA. 
Association for Computing Machinery.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/d14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Inter- est Group of the ACL, pages 1532-1543. ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "COLIEE 2020: Methods for legal document retrieval and entailment", "authors": [ { "first": "Juliano", "middle": [], "last": "Rabelo", "suffix": "" }, { "first": "Mi-Young", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Randy", "middle": [], "last": "Goebel", "suffix": "" }, { "first": "Masaharu", "middle": [], "last": "Yoshioka", "suffix": "" }, { "first": "Yoshinobu", "middle": [], "last": "Kano", "suffix": "" }, { "first": "Ken", "middle": [], "last": "Satoh", "suffix": "" } ], "year": 2020, "venue": "New Frontiers in Artificial Intelligence -JSAI-isAI 2020 Workshops, JURISIN, LENLS 2020 Workshops, Virtual Event", "volume": "12758", "issue": "", "pages": "196--210", "other_ids": { "DOI": [ "10.1007/978-3-030-79942-7_13" ] }, "num": null, "urls": [], "raw_text": "Juliano Rabelo, Mi-Young Kim, Randy Goebel, Masa- haru Yoshioka, Yoshinobu Kano, and Ken Satoh. 2020. COLIEE 2020: Methods for legal docu- ment retrieval and entailment. In New Frontiers in Artificial Intelligence -JSAI-isAI 2020 Workshops, JURISIN, LENLS 2020 Workshops, Virtual Event, November 15-17, 2020, Revised Selected Papers, volume 12758 of Lecture Notes in Computer Sci- ence, pages 196-210. Springer.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Concatenated p-mean word embeddings as universal cross-lingual sentence representations", "authors": [ { "first": "Andreas", "middle": [], "last": "R\u00fcckl\u00e9", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" }, { "first": "Maxime", "middle": [], "last": "Peyrard", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas R\u00fcckl\u00e9, Steffen Eger, Maxime Peyrard, and Iryna Gurevych. 2018. Concatenated p-mean word embeddings as universal cross-lingual sentence rep- resentations. 
CoRR, abs/1803.01400.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Legal document retrieval using document vector embeddings and deep learning", "authors": [ { "first": "Keet", "middle": [], "last": "Sugathadasa", "suffix": "" }, { "first": "Buddhi", "middle": [], "last": "Ayesha", "suffix": "" }, { "first": "Amal", "middle": [ "Shehan" ], "last": "Nisansa De Silva", "suffix": "" }, { "first": "Vindula", "middle": [], "last": "Perera", "suffix": "" }, { "first": "Dimuthu", "middle": [], "last": "Jayawardana", "suffix": "" }, { "first": "Madhavi", "middle": [], "last": "Lakmal", "suffix": "" }, { "first": "", "middle": [], "last": "Perera", "suffix": "" } ], "year": 2018, "venue": "Science and information conference", "volume": "", "issue": "", "pages": "160--175", "other_ids": { "DOI": [ "10.1007/978-3-030-01177-2_12" ] }, "num": null, "urls": [], "raw_text": "Keet Sugathadasa, Buddhi Ayesha, Nisansa de Silva, Amal Shehan Perera, Vindula Jayawardana, Dimuthu Lakmal, and Madhavi Perera. 2018. Legal document retrieval using document vector embed- dings and deep learning. In Science and information conference, pages 160-175. Springer.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4- 9, 2017, Long Beach, CA, USA, pages 5998-6008.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "GerDaLIR comprises query-document pairs from passages that cite known collection documents.", "type_str": "figure", "num": null }, "TABREF0": { "text": "GerDaLIR's dataset size", "content": "
             Documents   Passages
Collection   131,446     3,095,383
             Train       Dev       Test
Queries      98,380      12,297    12,298
Pos. Labels  115,360     14,570    14,394
", "type_str": "table", "html": null, "num": null }, "TABREF1": { "text": "Baseline measures. Mode P and D denote passage-wise and document-wise retrieval (Section 5.2).", "content": "", "type_str": "table", "html": null, "num": null } } } }