parquet-converter committed
Commit f95a696
1 parent: fbf9bb8

Update parquet files
.gitattributes CHANGED
@@ -14,3 +14,18 @@
  *.pb filter=lfs diff=lfs merge=lfs -text
  *.pt filter=lfs diff=lfs merge=lfs -text
  *.pth filter=lfs diff=lfs merge=lfs -text
+ pairs/german_legal_sentences-train-00001-of-00002.parquet filter=lfs diff=lfs merge=lfs -text
+ pairs/german_legal_sentences-train-00000-of-00002.parquet filter=lfs diff=lfs merge=lfs -text
+ pairs/german_legal_sentences-validation.parquet filter=lfs diff=lfs merge=lfs -text
+ pairs/german_legal_sentences-test.parquet filter=lfs diff=lfs merge=lfs -text
+ pairs+es/german_legal_sentences-train-00003-of-00006.parquet filter=lfs diff=lfs merge=lfs -text
+ pairs+es/german_legal_sentences-train-00005-of-00006.parquet filter=lfs diff=lfs merge=lfs -text
+ pairs+es/german_legal_sentences-train-00002-of-00006.parquet filter=lfs diff=lfs merge=lfs -text
+ pairs+es/german_legal_sentences-train-00000-of-00006.parquet filter=lfs diff=lfs merge=lfs -text
+ pairs+es/german_legal_sentences-train-00004-of-00006.parquet filter=lfs diff=lfs merge=lfs -text
+ pairs+es/german_legal_sentences-train-00001-of-00006.parquet filter=lfs diff=lfs merge=lfs -text
+ pairs+es/german_legal_sentences-validation.parquet filter=lfs diff=lfs merge=lfs -text
+ pairs+es/german_legal_sentences-test.parquet filter=lfs diff=lfs merge=lfs -text
+ sentences/german_legal_sentences-train.parquet filter=lfs diff=lfs merge=lfs -text
+ sentences/german_legal_sentences-validation.parquet filter=lfs diff=lfs merge=lfs -text
+ sentences/german_legal_sentences-test.parquet filter=lfs diff=lfs merge=lfs -text
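These `.gitattributes` entries route every matching path through the Git LFS filter. The patterns use gitignore-style matching; for simple patterns like the ones above, Python's `fnmatch` behaves the same way, which the sketch below uses purely for illustration (`LFS_PATTERNS` is a hand-picked subset of the file, and `is_lfs_tracked` is our own helper name, not a git API):

```python
from fnmatch import fnmatch

# Illustrative subset of the patterns above. Real .gitattributes matching is
# gitignore-style; fnmatch is only an approximation that works for these cases.
LFS_PATTERNS = ["*.pb", "*.pt", "*.pth", "pairs/*.parquet"]

def is_lfs_tracked(path: str) -> bool:
    """Return True if any LFS pattern matches the path or its basename."""
    basename = path.rsplit("/", 1)[-1]
    return any(fnmatch(path, p) or fnmatch(basename, p) for p in LFS_PATTERNS)

print(is_lfs_tracked("pairs/german_legal_sentences-test.parquet"))  # True
print(is_lfs_tracked("README.md"))                                  # False
```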
README.md DELETED
@@ -1,186 +0,0 @@
- ---
- annotations_creators:
- - machine-generated
- language_creators:
- - found
- language:
- - de
- license:
- - unknown
- multilinguality:
- - monolingual
- size_categories:
- - n>1M
- source_datasets:
- - original
- task_categories:
- - text-retrieval
- - text-scoring
- task_ids:
- - semantic-similarity-scoring
- - text-retrieval-other-example-based-retrieval
- ---
-
- # Dataset Card for German Legal Sentences
-
- ## Table of Contents
- - [Dataset Card for German Legal Sentences](#dataset-card-for-german-legal-sentences)
-   - [Table of Contents](#table-of-contents)
-   - [Dataset Description](#dataset-description)
-     - [Dataset Summary](#dataset-summary)
-     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-     - [Languages](#languages)
-   - [Dataset Structure](#dataset-structure)
-     - [Data Instances](#data-instances)
-     - [Data Fields](#data-fields)
-     - [Data Splits](#data-splits)
-   - [Dataset Creation](#dataset-creation)
-     - [Curation Rationale](#curation-rationale)
-     - [Source Data](#source-data)
-       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
-       - [Who are the source language producers?](#who-are-the-source-language-producers)
-     - [Annotations](#annotations)
-       - [Annotation process](#annotation-process)
-       - [Who are the annotators?](#who-are-the-annotators)
-     - [Personal and Sensitive Information](#personal-and-sensitive-information)
-   - [Considerations for Using the Data](#considerations-for-using-the-data)
-     - [Social Impact of Dataset](#social-impact-of-dataset)
-     - [Discussion of Biases](#discussion-of-biases)
-     - [Other Known Limitations](#other-known-limitations)
-   - [Additional Information](#additional-information)
-     - [Dataset Curators](#dataset-curators)
-     - [Licensing Information](#licensing-information)
-     - [Citation Information](#citation-information)
-     - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://lavis-nlp.github.io/german_legal_sentences/
- - **Repository:** https://github.com/lavis-nlp/german_legal_sentences
- - **Paper:** coming soon
- - **Leaderboard:**
- - **Point of Contact:** [Marco Wrzalik](mailto:marco.wrzalik@hs-rm.de)
-
- ### Dataset Summary
-
- German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence matching and citation recommendation in the domain of German legal documents. It follows the concept of weak supervision, where imperfect labels are generated using multiple heuristics. For this purpose we use a combination of legal citation matching and BM25 similarity. The contained sentences and their citations are parsed from real judicial decisions provided by [Open Legal Data](http://openlegaldata.io/) (https://arxiv.org/abs/2005.13342).
-
- ### Supported Tasks and Leaderboards
-
- The main associated task is *Semantic Similarity Ranking*. We propose to use *Mean Reciprocal Rank* (MRR) cut off at rank 10, as well as MAP and Recall on rankings of size 200. As baselines we provide the following:
-
- | Method | MRR@10 | MAP@200 | Recall@200 |
- |-----------------------------------|---------:|-----------:|------------:|
- | BM25 - default `(k1=1.2; b=0.75)` | 25.7 | 17.6 | 42.9 |
- | BM25 - tuned `(k1=0.47; b=0.97)` | 26.2 | 18.1 | 43.3 |
- | [CoRT](https://arxiv.org/abs/2010.10252) | 31.2 | 21.4 | 56.2 |
- | [CoRT + BM25](https://arxiv.org/abs/2010.10252) | 32.1 | 22.1 | 67.1 |
-
- In addition, we want to support a *Citation Recommendation* task in the future.
-
- If you wish to contribute evaluation measures or have suggestions or critique, please write an [e-mail](mailto:marco.wrzalik@hs-rm.de).
-
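The metrics in the table above can be computed directly from ranked lists of sentence IDs; a minimal pure-Python sketch (the cutoffs 10 and 200 follow the table; function and argument names are ours, not from the dataset's evaluation code):

```python
def mrr_at_k(rankings, relevant, k=10):
    """Mean Reciprocal Rank, considering only the top-k positions.

    rankings: {query_id: [sent_id, ...]} ranked best-first
    relevant: {query_id: set of relevant sent_ids}
    """
    total = 0.0
    for qid, ranked in rankings.items():
        for pos, sid in enumerate(ranked[:k], start=1):
            if sid in relevant.get(qid, set()):
                total += 1.0 / pos  # reciprocal rank of the first hit
                break
    return total / len(rankings)

def recall_at_k(rankings, relevant, k=200):
    """Fraction of relevant items retrieved in the top-k, averaged over queries."""
    total = 0.0
    for qid, ranked in rankings.items():
        rel = relevant.get(qid, set())
        if rel:
            total += len(rel & set(ranked[:k])) / len(rel)
    return total / len(rankings)
```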
- ### Languages
-
- This dataset contains texts from the specific domain of German court decisions.
-
- ## Dataset Structure
-
- ### Data Instances
-
- ```
- {'query.doc_id': 28860,
-  'query.ref_ids': [6215, 248, 248],
-  'query.sent_id': 304863,
-  'query.text': 'Zudem ist zu berücksichtigen , dass die Vollverzinsung nach '
-                '[REF] i. V. m. [REF] gleichermaßen zugunsten wie zulasten des '
-                'Steuerpflichtigen wirkt , sodass bei einer Überzahlung durch '
-                'den Steuerpflichtigen der Staat dem Steuerpflichtigen neben '
-                'der Erstattung ebenfalls den entstandenen potentiellen Zins- '
-                'und Liquiditätsnachteil in der pauschalierten Höhe des [REF] '
-                'zu ersetzen hat , unabhängig davon , in welcher Höhe dem '
-                'Berechtigten tatsächlich Zinsen entgangen sind .',
-  'related.doc_id': 56348,
-  'related.ref_ids': [248, 6215, 62375],
-  'related.sent_id': 558646,
-  'related.text': 'Ferner ist zu berücksichtigen , dass der Zinssatz des [REF] '
-                  'im Rahmen des [REF] sowohl für Steuernachforderung wie auch '
-                  'für Steuererstattungen und damit gleichermaßen zugunsten wie '
-                  'zulasten des Steuerpflichtigen wirkt , Vgl. BVerfG , '
-                  'Nichtannahmebeschluss vom [DATE] [REF] , juris , mit der '
-                  'Folge , dass auch Erstattungsansprüche unabhängig davon , ob '
-                  'und in welcher Höhe dem Berechtigten tatsächlich Zinsen '
-                  'entgangen sind , mit monatlich 0,0 % verzinst werden .'}
- ```
-
- ### Data Fields
-
- [More Information Needed]
-
- ### Data Splits
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- The documents we take from [Open Legal Data](http://openlegaldata.io/) (https://arxiv.org/abs/2005.13342) are first preprocessed by removing line breaks, enumeration characters and headings. Afterwards we parse legal citations using hand-crafted regular expressions. Each citation is split into its components and normalized, so that different variants of the same citation are matched together. For instance, "§211 Absatz 1 des Strafgesetzbuches" is normalized to "§ 211 Abs. 1 StGB". Every time we discover an unknown citation, we assign a unique ID to it. We use these IDs to replace parsed citations in the document text with a simple reference tag containing this ID (e.g. `[REF321]`). At the same time we parse dates and replace them with the date tag `[DATE]`. Both replacements remove dots that may be confused with the end of a sentence, which makes the next stage easier.
-
- We use [SoMaJo](https://github.com/tsproisl/SoMaJo) to perform sentence tokenization on the preprocessed documents. Each sentence that does not contain at least one legal citation is discarded. For the rest we assign sentence IDs, remove all reference IDs from them as well as any contents in braces (braces often contain large enumerations of citations and their sources). At the same time we keep track of the corresponding document from which a sentence originates and which references occur in it.
-
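The hand-crafted expressions themselves are not published here; the following toy sketch only illustrates the normalization idea on the "§211 Absatz 1 des Strafgesetzbuches" example above (the pattern and the one-entry abbreviation table are our inventions, not the project's actual rules):

```python
import re

# Illustrative pattern for one citation form: "§<num> Absatz <num> des <code>".
# The real dataset uses a much larger set of hand-crafted expressions.
CITATION_RE = re.compile(r"§\s*(\d+)\s+Absatz\s+(\d+)\s+des\s+(\w+)")
CODE_ABBREV = {"Strafgesetzbuches": "StGB"}  # toy abbreviation table

def normalize_citation(text: str) -> str:
    """Rewrite a long citation form into its normalized short form."""
    def repl(m: "re.Match") -> str:
        code = CODE_ABBREV.get(m.group(3), m.group(3))
        return f"§ {m.group(1)} Abs. {m.group(2)} {code}"
    return CITATION_RE.sub(repl, text)

print(normalize_citation("§211 Absatz 1 des Strafgesetzbuches"))
# § 211 Abs. 1 StGB
```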
- #### Who are the source language producers?
-
- The source language originates in the context of German court proceedings.
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- The annotations are machine-generated.
-
- ### Personal and Sensitive Information
-
- The source documents are already public and anonymized.
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- With this dataset, we strive towards better accessibility of court decisions to the general public by accelerating research on semantic search technologies. We hope that emerging search technologies will enable the layperson to find relevant information without knowing the specific terms used by lawyers.
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- Coming soon!
-
- ### Contributions
-
- Thanks to [@mwrzalik](https://github.com/mwrzalik) for adding this dataset.
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"sentences": {"description": "German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence \nmatching in the domain in german legal documents. It follows the concept of weak supervision, where \nimperfect labels are generated using multiple heuristics. For this purpose we use a combination of \nlegal citation matching and BM25 similarity. The contained sentences and their citations are parsed \nfrom real judicial decisions provided by [Open Legal Data](http://openlegaldata.io/)\n", "citation": "coming soon\n", "homepage": "", "license": "", "features": {"sent_id": {"dtype": "uint32", "id": null, "_type": "Value"}, "doc_id": {"dtype": "uint32", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "references": {"feature": {"ref_id": {"dtype": "uint32", "id": null, "_type": "Value"}, "name": {"dtype": "string", "id": null, "_type": "Value"}, "type": {"num_classes": 2, "names": ["AZ", "LAW"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "german_legal_sentences", "config_name": "sentences", "version": {"version_str": "0.0.2", "description": "", "major": 0, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 470336071, "num_examples": 1542499, "dataset_name": "german_legal_sentences"}, "validation": {"name": "validation", "num_bytes": 26119884, "num_examples": 85375, "dataset_name": "german_legal_sentences"}, "test": {"name": "test", "num_bytes": 26082080, "num_examples": 85405, "dataset_name": "german_legal_sentences"}}, "download_checksums": {"http://lavis.cs.hs-rm.de/storage/german-legal-sentences/GermanLegalSentences_v0.0.2.zip": {"num_bytes": 289263658, "checksum": "57ec7c5ba6c800383bee938cd979305d064163585a5b2fc4f46ae385e0973a1f"}}, "download_size": 289263658, "post_processing_size": null, "dataset_size": 522538035, "size_in_bytes": 811801693}, "pairs": {"description": "German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence \nmatching in the domain in german legal documents. It follows the concept of weak supervision, where \nimperfect labels are generated using multiple heuristics. For this purpose we use a combination of \nlegal citation matching and BM25 similarity. The contained sentences and their citations are parsed \nfrom real judicial decisions provided by [Open Legal Data](http://openlegaldata.io/)\n", "citation": "coming soon\n", "homepage": "", "license": "", "features": {"query.sent_id": {"dtype": "uint32", "id": null, "_type": "Value"}, "query.doc_id": {"dtype": "uint32", "id": null, "_type": "Value"}, "query.text": {"dtype": "string", "id": null, "_type": "Value"}, "query.ref_ids": {"feature": {"dtype": "uint32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "related.sent_id": {"dtype": "uint32", "id": null, "_type": "Value"}, "related.doc_id": {"dtype": "uint32", "id": null, "_type": "Value"}, "related.text": {"dtype": "string", "id": null, "_type": "Value"}, "related.ref_ids": {"feature": {"dtype": "uint32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "german_legal_sentences", "config_name": "pairs", "version": {"version_str": "0.0.2", "description": "", "major": 0, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 754039911, "num_examples": 1404271, "dataset_name": "german_legal_sentences"}, "validation": {"name": "validation", "num_bytes": 42311363, "num_examples": 78472, "dataset_name": "german_legal_sentences"}, "test": {"name": "test", "num_bytes": 41120928, "num_examples": 76626, "dataset_name": "german_legal_sentences"}}, "download_checksums": {"http://lavis.cs.hs-rm.de/storage/german-legal-sentences/GermanLegalSentences_v0.0.2.zip": {"num_bytes": 289263658, "checksum": "57ec7c5ba6c800383bee938cd979305d064163585a5b2fc4f46ae385e0973a1f"}}, "download_size": 289263658, "post_processing_size": null, "dataset_size": 837472202, "size_in_bytes": 1126735860}, "pairs+es": {"description": "German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence \nmatching in the domain in german legal documents. It follows the concept of weak supervision, where \nimperfect labels are generated using multiple heuristics. For this purpose we use a combination of \nlegal citation matching and BM25 similarity. The contained sentences and their citations are parsed \nfrom real judicial decisions provided by [Open Legal Data](http://openlegaldata.io/)\n", "citation": "coming soon\n", "homepage": "", "license": "", "features": {"query.sent_id": {"dtype": "uint32", "id": null, "_type": "Value"}, "query.doc_id": {"dtype": "uint32", "id": null, "_type": "Value"}, "query.text": {"dtype": "string", "id": null, "_type": "Value"}, "query.ref_ids": {"feature": {"dtype": "uint32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "related.sent_id": {"dtype": "uint32", "id": null, "_type": "Value"}, "related.doc_id": {"dtype": "uint32", "id": null, "_type": "Value"}, "related.text": {"dtype": "string", "id": null, "_type": "Value"}, "related.ref_ids": {"feature": {"dtype": "uint32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "es_neighbors.text": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "es_neighbors.sent_id": {"feature": {"dtype": "uint32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "es_neighbors.doc_id": {"feature": {"dtype": "uint32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "es_neighbors.ref_ids": {"feature": {"feature": {"dtype": "uint32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "german_legal_sentences", "config_name": "pairs+es", "version": {"version_str": "0.0.2", "description": "", "major": 0, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 2543172549, "num_examples": 1396670, "dataset_name": "german_legal_sentences"}, "validation": {"name": "validation", "num_bytes": 128326675, "num_examples": 69765, "dataset_name": "german_legal_sentences"}, "test": {"name": "test", "num_bytes": 123911313, "num_examples": 67569, "dataset_name": "german_legal_sentences"}}, "download_checksums": {"http://lavis.cs.hs-rm.de/storage/german-legal-sentences/GermanLegalSentences_v0.0.2.zip": {"num_bytes": 289263658, "checksum": "57ec7c5ba6c800383bee938cd979305d064163585a5b2fc4f46ae385e0973a1f"}}, "download_size": 289263658, "post_processing_size": null, "dataset_size": 2795410537, "size_in_bytes": 3084674195}}
 
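The per-split byte counts in the deleted metadata are internally consistent; a quick sanity check for its `sentences` config (numbers copied verbatim from the JSON):

```python
# Split sizes for the "sentences" config, copied from the deleted metadata.
splits = {
    "train": 470336071,
    "validation": 26119884,
    "test": 26082080,
}
dataset_size = sum(splits.values())
print(dataset_size)  # 522538035, matching "dataset_size" in the JSON
```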
 
german_legal_sentences.py DELETED
@@ -1,285 +0,0 @@
- import random
- from pathlib import Path
-
- import datasets
- from datasets import ClassLabel, Features, Sequence, Value
-
- _CITATION = """\
- coming soon
- """
-
- _DESCRIPTION = """\
- German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence
- matching in the domain of German legal documents. It follows the concept of weak supervision, where
- imperfect labels are generated using multiple heuristics. For this purpose we use a combination of
- legal citation matching and BM25 similarity. The contained sentences and their citations are parsed
- from real judicial decisions provided by [Open Legal Data](http://openlegaldata.io/)
- """
-
- _VERSION = "0.0.2"
- _DATA_URL = f"http://lavis.cs.hs-rm.de/storage/german-legal-sentences/GermanLegalSentences_v{_VERSION}.zip"
-
-
- class GLSConfig(datasets.BuilderConfig):
-     """BuilderConfig for German Legal Sentences."""
-
-     def __init__(
-         self,
-         load_collection,
-         load_es_neighbors=None,
-         n_es_neighbors=None,
-         **kwargs,
-     ):
-         """BuilderConfig.
-
-         Args:
-             load_collection: whether to load the full sentence collection.
-             load_es_neighbors: whether to load pre-computed ES neighbors.
-             n_es_neighbors: number of ES neighbors sampled per pair.
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(GLSConfig, self).__init__(**kwargs)
-         self.load_collection = load_collection
-         self.load_es_neighbors = load_es_neighbors
-         self.n_es_neighbors = n_es_neighbors
-
-
- class GermanLegalSentences(datasets.GeneratorBasedBuilder):
-     BUILDER_CONFIGS = [
-         GLSConfig(
-             name="sentences",
-             load_es_neighbors=False,
-             load_collection=False,
-             version=datasets.Version(_VERSION, ""),
-             description="Just the sentences and their masked references",
-         ),
-         GLSConfig(
-             name="pairs",
-             load_es_neighbors=False,
-             load_collection=True,
-             version=datasets.Version(_VERSION, ""),
-             description="Sentence pairs sharing references",
-         ),
-         GLSConfig(
-             name="pairs+es",
-             load_es_neighbors=True,
-             load_collection=True,
-             n_es_neighbors=5,
-             version=datasets.Version(_VERSION, ""),
-             description="Sentence pairs sharing references plus ES neighbors",
-         ),
-     ]
-
-     def _features(self):
-         if self.config.name == "sentences":
-             return Features(
-                 {
-                     "sent_id": Value("uint32"),
-                     "doc_id": Value("uint32"),
-                     "text": Value("string"),
-                     "references": Sequence(
-                         {
-                             "ref_id": Value("uint32"),
-                             "name": Value("string"),
-                             "type": ClassLabel(names=["AZ", "LAW"]),
-                         }
-                     ),
-                 }
-             )
-         elif self.config.name == "pairs":
-             return Features(
-                 {
-                     "query.sent_id": Value("uint32"),
-                     "query.doc_id": Value("uint32"),
-                     "query.text": Value("string"),
-                     "query.ref_ids": Sequence(Value("uint32")),
-                     "related.sent_id": Value("uint32"),
-                     "related.doc_id": Value("uint32"),
-                     "related.text": Value("string"),
-                     "related.ref_ids": Sequence(Value("uint32")),
-                 }
-             )
-         elif self.config.name == "pairs+es":
-             return Features(
-                 {
-                     "query.sent_id": Value("uint32"),
-                     "query.doc_id": Value("uint32"),
-                     "query.text": Value("string"),
-                     "query.ref_ids": Sequence(Value("uint32")),
-                     "related.sent_id": Value("uint32"),
-                     "related.doc_id": Value("uint32"),
-                     "related.text": Value("string"),
-                     "related.ref_ids": Sequence(Value("uint32")),
-                     "es_neighbors.text": Sequence(Value("string")),
-                     "es_neighbors.sent_id": Sequence(Value("uint32")),
-                     "es_neighbors.doc_id": Sequence(Value("uint32")),
-                     "es_neighbors.ref_ids": Sequence(Sequence(Value("uint32"))),
-                 }
-             )
-         raise ValueError(f"Unknown config name: {self.config.name}")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=self._features(),
-             supervised_keys=None,
-             homepage="",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         if dl_manager.manual_dir:
-             data_dir = Path(dl_manager.manual_dir)
-         else:
-             data_dir = Path(dl_manager.download_and_extract(_DATA_URL))
-         collection = _load_collection(data_dir) if self.config.load_collection else None
-         sent_ref_map = _load_sent_references(data_dir)
-         references = (
-             _load_reference_info(data_dir) if self.config.name == "sentences" else None
-         )
-         es_neighbors = (
-             _load_es_neighbors(data_dir) if self.config.load_es_neighbors else None
-         )
-
-         gen_kwargs = dict()
-         for split in ("train", "valid", "test"):
-             gen_kwargs[split] = {
-                 "collection": collection,
-                 "pair_id_file": data_dir / f"{split}.pairs.tsv",
-                 "sentence_file": data_dir / f"{split}.sentences.tsv",
-                 "references": references,
-                 "sent_ref_map": sent_ref_map,
-                 "es_neighbors": es_neighbors,
-             }
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN, gen_kwargs=gen_kwargs["train"]
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION, gen_kwargs=gen_kwargs["valid"]
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST, gen_kwargs=gen_kwargs["test"]
-             ),
-         ]
-
-     def _generate_examples(self, **kwargs):
-         if self.config.name.startswith("pairs"):
-             yield from self._generate_pairs(**kwargs)
-         elif self.config.name == "sentences":
-             yield from self._generate_sentences(**kwargs)
-         else:
-             raise ValueError(f"Unknown config name: {self.config.name}")
-
-     def _generate_pairs(
-         self, pair_id_file, collection, sent_ref_map, es_neighbors, **kwargs
-     ):
-         random.seed(17)
-         with open(pair_id_file, encoding="utf-8") as r:
-             idx = 0
-             for line in r:
-                 stripped = line.rstrip()
-                 if stripped:
-                     a, b = stripped.split("\t")
-                     features = {
-                         "query.sent_id": int(a),
-                         "query.doc_id": int(collection[a]["doc_id"]),
-                         "query.text": collection[a]["text"],
-                         "query.ref_ids": sent_ref_map[a],
-                         "related.sent_id": int(b),
-                         "related.doc_id": int(collection[b]["doc_id"]),
-                         "related.text": collection[b]["text"],
-                         "related.ref_ids": sent_ref_map[b],
-                     }
-                     if self.config.name == "pairs+es":
-                         curr_es_neighbors = es_neighbors.get(a) or []
-                         if len(curr_es_neighbors) < self.config.n_es_neighbors:
-                             continue
-
-                         es_sent_ids = random.sample(
-                             curr_es_neighbors, k=self.config.n_es_neighbors
-                         )
-                         additional_features = {
-                             "es_neighbors.sent_id": [int(i) for i in es_sent_ids],
-                             "es_neighbors.doc_id": [
-                                 int(collection[i]["doc_id"]) for i in es_sent_ids
-                             ],
-                             "es_neighbors.text": [
-                                 collection[i]["text"] for i in es_sent_ids
-                             ],
-                             "es_neighbors.ref_ids": [
-                                 sent_ref_map[i] for i in es_sent_ids
-                             ],
-                         }
-                         features.update(additional_features)
-                     yield idx, features
-                     idx += 1
-
-     def _generate_sentences(
-         self,
-         sentence_file,
-         references,
-         sent_ref_map,
-         **kwargs,
-     ):
-         with open(sentence_file, encoding="utf-8") as r:
-             for idx, line in enumerate(r):
-                 stripped = line.rstrip()
-                 if stripped == "":
-                     continue
-                 s_id, doc_id, text = stripped.split("\t", maxsplit=2)
-                 yield idx, {
-                     "sent_id": int(s_id),
-                     "doc_id": int(doc_id),
-                     "text": text,
-                     "references": [
-                         {
-                             "ref_id": int(r_id),
-                             "name": references[r_id][1],
-                             "type": references[r_id][0],
-                         }
-                         for r_id in sent_ref_map[s_id]
-                     ],
-                 }
-
-
- def _load_collection(data_dir):
-     collection = dict()
-     for split in ("train", "valid", "test"):
-         with open(data_dir / f"{split}.sentences.tsv", encoding="utf-8") as r:
-             for line in r:
-                 s_id, d_id, sent = line.strip().split("\t", maxsplit=2)
-                 collection[s_id] = {"doc_id": d_id, "text": sent}
-     return collection
-
-
- def _load_reference_info(data_dir):
-     with open(data_dir / "refs.tsv", encoding="utf-8") as r:
-         references = {
-             r_id: (r_type, r_name.rstrip())
-             for r_id, r_type, r_name in (
-                 line.split("\t", maxsplit=2) for line in r if len(line) > 2
-             )
-         }
-     return references
-
-
- def _load_sent_references(data_dir):
-     with open(data_dir / "sent_ref_map.tsv", encoding="utf-8") as r:
-         sent_ref_map = {
-             s_id: r_ids.rstrip().split()
-             for s_id, r_ids in (
-                 line.split("\t", maxsplit=1) for line in r if len(line) > 2
-             )
-         }
-     return sent_ref_map
-
-
- def _load_es_neighbors(data_dir):
-     with open(data_dir / "es_neighbors.tsv", encoding="utf-8") as r:
-         es_neighbors = {
-             s_id: other_s_ids.rstrip().split()
-             for s_id, other_s_ids in (
-                 line.split("\t", maxsplit=1) for line in r if len(line) > 2
-             )
-         }
-     return es_neighbors
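Stripped of the `datasets` plumbing, the deleted loader's helpers plus `_generate_pairs` amount to a join of a pair-ID file against the sentence collection. A self-contained toy run with in-memory files (the IDs and sentences are invented; the TSV layouts are inferred from the parsing code in the deleted script):

```python
import io

# Toy stand-ins for the loader's TSV inputs.
sentences_tsv = io.StringIO(
    "1\t10\tSatz eins mit [REF] .\n"
    "2\t20\tSatz zwei mit [REF] .\n"
)
sent_ref_map_tsv = io.StringIO("1\t248\n2\t248 6215\n")
pairs_tsv = io.StringIO("1\t2\n")

# Mirrors _load_collection for a single split.
collection = {}
for line in sentences_tsv:
    s_id, d_id, sent = line.strip().split("\t", maxsplit=2)
    collection[s_id] = {"doc_id": d_id, "text": sent}

# Mirrors _load_sent_references: sentence ID -> list of reference IDs.
sent_ref_map = {
    s_id: r_ids.rstrip().split()
    for s_id, r_ids in (line.split("\t", maxsplit=1) for line in sent_ref_map_tsv)
}

# Mirrors the core of _generate_pairs (without the ES-neighbor branch).
pairs = []
for line in pairs_tsv:
    a, b = line.rstrip().split("\t")
    pairs.append({
        "query.sent_id": int(a),
        "query.doc_id": int(collection[a]["doc_id"]),
        "query.text": collection[a]["text"],
        "query.ref_ids": sent_ref_map[a],
        "related.sent_id": int(b),
        "related.doc_id": int(collection[b]["doc_id"]),
        "related.text": collection[b]["text"],
        "related.ref_ids": sent_ref_map[b],
    })

print(pairs[0]["related.ref_ids"])  # ['248', '6215']
```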
pairs+es/german_legal_sentences-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:83c8c25358770c544b342e8d54ac81927baec3ae828c21c0585d933e20a4d1fc
+ size 47844504

pairs+es/german_legal_sentences-train-00000-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:53b3f1d89e13be279f4ab9f4cd8c68b8f304ccca3df28c09cf35584bf3f5f1f0
+ size 145586262

pairs+es/german_legal_sentences-train-00001-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e7879dc5984c05e6c53a367eed214ca19fe6b5415c62b6d3a74494d18dda7768
+ size 142565537

pairs+es/german_legal_sentences-train-00002-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:84c9686be6469018da4df3b862f1bd858cebf8e4fd2b4ac1c4dcf14ea4cfb081
+ size 147929304

pairs+es/german_legal_sentences-train-00003-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:57ededabde8ae4f99908d2191c6d4759118d7c98de3a775f26650103527c96b5
+ size 155381619

pairs+es/german_legal_sentences-train-00004-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:65f4bd2d620b1adf35078c24d265a77ef01a494f6ec17cec0c3f3411a9b0f365
+ size 165843896

pairs+es/german_legal_sentences-train-00005-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:505b048d68f9069032c2033661fee0d712005e9ab055ebb6dced540c72789215
+ size 16172022

pairs+es/german_legal_sentences-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d51f8433fb525afd76d4dda2aff60fa07d4a7acb55a804a6b7c20a245b5033c2
+ size 49213428

pairs/german_legal_sentences-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c474545a9a08553e021646d8e8738fbada1531643f54cdfe94f88121c107d802
+ size 11233367

pairs/german_legal_sentences-train-00000-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eeafc81e0e81fd3f73461c93fa5c09e57e43dfdaacbe41afcba98ea68cff7081
+ size 80990173

pairs/german_legal_sentences-train-00001-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:af84883aba011408b32c7b65cb88c950efdf42b6412155eb0fcf9e3d8cc24e4f
+ size 45307302

pairs/german_legal_sentences-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8223f1b831736ca8e9001bee43d6269acd2b41ce2fe7226a571421c3b4d71dd
+ size 11440806

sentences/german_legal_sentences-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6307a54ced9b9b7f1ed03417adcb45ade149b308c08befe768ea7236314a7f3
+ size 12943167

sentences/german_legal_sentences-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91bf7b312ea9843a4dd2c1651332d12194bd2ff064ee785f340e1d0898046e02
+ size 233268726

sentences/german_legal_sentences-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac592778e57cf68aae9ccc6a88cde5e3df81f1790cdc9f6de9972409bb648d76
+ size 12945125
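Each file added above is stored in the repository as a Git LFS pointer with exactly these three keys (`version`, `oid`, `size`); the actual Parquet bytes live in LFS storage. A small parser for that pointer format (the format is specified by Git LFS; the helper name and return shape are ours):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    # "oid" carries its hash algorithm as a prefix, e.g. "sha256:<hex digest>".
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }

# Pointer contents copied from sentences/german_legal_sentences-test.parquet above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e6307a54ced9b9b7f1ed03417adcb45ade149b308c08befe768ea7236314a7f3
size 12943167
"""
print(parse_lfs_pointer(pointer)["size"])  # 12943167
```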