parquet-converter committed
Commit 27e92b6 · 1 Parent(s): bbb2a01

Update parquet files
.gitattributes DELETED
@@ -1,44 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- test.jsonl filter=lfs diff=lfs merge=lfs -text
- train.jsonl filter=lfs diff=lfs merge=lfs -text
- validation.jsonl filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,349 +0,0 @@
- ---
- annotations_creators:
- - other
- language_creators:
- - found
- language:
- - multilingual
- - bg
- - cs
- - da
- - de
- - el
- - en
- - es
- - et
- - fi
- - fr
- - ga
- - hu
- - it
- - lt
- - lv
- - mt
- - nl
- - pt
- - ro
- - sk
- - sv
- license:
- - cc-by-4.0
- multilinguality:
- - multilingual
- size_categories:
- - 1K<n<10K
- source_datasets:
- - original
- task_categories:
- - token-classification
- task_ids:
- - named-entity-recognition
- pretty_name: Spanish Datasets for Sensitive Entity Detection in the Legal Domain
- tags:
- - named-entity-recognition-and-classification
- ---
-
- # Dataset Card for Multilingual European Datasets for Sensitive Entity Detection in the Legal Domain
-
- ## Table of Contents
-
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
-
- ## Dataset Description
-
- **Homepage:**
- **Repository:** [Spanish](https://elrc-share.eu/repository/browse/mapa-anonymization-package-spanish/b550e1a88a8311ec9c1a00155d026706687917f92f64482587c6382175dffd76/), [Most](https://elrc-share.eu/repository/search/?q=mfsp:3222a6048a8811ec9c1a00155d0267067eb521077db54d6684fb14ce8491a391), [German, Portuguese, Slovak, Slovenian, Swedish](https://elrc-share.eu/repository/search/?q=mfsp:833df1248a8811ec9c1a00155d0267067685dcdb77064822b51cc16ab7b81a36)
- **Paper:** de Gibert Bonet, O., García Pablos, A., Cuadros, M., & Melero, M. (2022). Spanish Datasets for Sensitive
- Entity Detection in the Legal Domain. Proceedings of the Language Resources and Evaluation Conference, June,
- 3751–3760. http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.400.pdf
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
-
- ### Dataset Summary
-
- The dataset consists of 12 documents per language (9 for Spanish due to parsing errors) taken from EUR-Lex, a multilingual
- corpus of court decisions and legal dispositions in the 24 official languages of the European Union. The documents have
- been annotated for named entities following the guidelines of the [MAPA project](https://mapa-project.eu/), which foresee
- two annotation levels: a general one and a more fine-grained one. The annotated corpus can be used for named entity recognition/classification.
-
- ### Supported Tasks and Leaderboards
-
- The dataset supports the task of Named Entity Recognition and Classification (NERC).
-
- ### Languages
-
- The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pt, ro, sk, sv
-
- ## Dataset Structure
-
- ### Data Instances
-
- The file format is jsonl and three data splits are present (train, validation and test). Named Entity annotations are
- non-overlapping.
-
- ### Data Fields
-
- For the annotation, the documents have been split into sentences. The annotation has been done on the token level.
- The files contain the following data fields:
-
- `language`: language of the sentence
- `type`: The document type of the sentence. Currently, only EUR-LEX is supported.
- `file_name`: The document file name the sentence belongs to.
- `sentence_number`: The number of the sentence inside its document.
- `tokens`: The list of tokens in the sentence.
- `coarse_grained`: The coarse-grained annotations for each token
- `fine_grained`: The fine-grained annotations for each token
-
-
- As previously stated, the annotation has been conducted on a global and a more fine-grained level.
-
- The tagset used for the global and the fine-grained named entities is the following:
-
- Address
- Building
- City
- Country
- Place
- Postcode
- Street
- Territory
- Amount
- Unit
- Value
- Date
- Year
- Standard Abbreviation
- Month
- Day of the Week
- Day
- Calender Event
- Person
- Age
- Email
- Ethnic Category
- Family Name
- Financial
- Given Name – Female
- Given Name – Male
- Health Insurance Number
- ID Document Number
- Initial Name
- Marital Status
- Medical Record Number
- Nationality
- Profession
- Role
- Social Security Number
- Title
- Url
- Organisation
- Time
- Vehicle
- Build Year
- Colour
- License Plate Number
- Model
- Type
-
- The final coarse-grained tagset (in IOB notation) is the following:
-
- `['O', 'B-ORGANISATION', 'I-ORGANISATION', 'B-ADDRESS', 'I-ADDRESS', 'B-DATE', 'I-DATE', 'B-PERSON', 'I-PERSON', 'B-AMOUNT', 'I-AMOUNT', 'B-TIME', 'I-TIME']`
-
-
- The final fine-grained tagset (in IOB notation) is the following:
-
- `[
- 'O',
- 'B-BUILDING',
- 'I-BUILDING',
- 'B-CITY',
- 'I-CITY',
- 'B-COUNTRY',
- 'I-COUNTRY',
- 'B-PLACE',
- 'I-PLACE',
- 'B-TERRITORY',
- 'I-TERRITORY',
- 'I-UNIT',
- 'B-UNIT',
- 'B-VALUE',
- 'I-VALUE',
- 'B-YEAR',
- 'I-YEAR',
- 'B-STANDARD ABBREVIATION',
- 'I-STANDARD ABBREVIATION',
- 'B-MONTH',
- 'I-MONTH',
- 'B-DAY',
- 'I-DAY',
- 'B-AGE',
- 'I-AGE',
- 'B-ETHNIC CATEGORY',
- 'I-ETHNIC CATEGORY',
- 'B-FAMILY NAME',
- 'I-FAMILY NAME',
- 'B-INITIAL NAME',
- 'I-INITIAL NAME',
- 'B-MARITAL STATUS',
- 'I-MARITAL STATUS',
- 'B-PROFESSION',
- 'I-PROFESSION',
- 'B-ROLE',
- 'I-ROLE',
- 'B-NATIONALITY',
- 'I-NATIONALITY',
- 'B-TITLE',
- 'I-TITLE',
- 'B-URL',
- 'I-URL',
- 'B-TYPE',
- 'I-TYPE',
- ]`
-
-
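To make the field layout above concrete, here is a minimal loading sketch (not part of the original dataset card). It assumes the three jsonl split files described under Data Instances are available locally and uses the generic `json` loader of the `datasets` library; the field names follow the Data Fields section above.

```python
from datasets import load_dataset

# Load the three jsonl splits with the generic json loader (local file names assumed).
ds = load_dataset(
    "json",
    data_files={"train": "train.jsonl", "validation": "validation.jsonl", "test": "test.jsonl"},
)

example = ds["train"][0]
print(example["language"], example["type"], example["file_name"], example["sentence_number"])

# Print each token next to its coarse-grained and fine-grained IOB tag.
for token, coarse, fine in zip(example["tokens"], example["coarse_grained"], example["fine_grained"]):
    print(f"{token}\t{coarse}\t{fine}")
```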
- ### Data Splits
-
- Splits created by Joel Niklaus.
-
-
- | language | # train files | # validation files | # test files | # train sentences | # validation sentences | # test sentences |
- |:-----------|----------------:|---------------------:|---------------:|--------------------:|-------------------------:|-------------------:|
- | bg | 9 | 1 | 2 | 1411 | 166 | 560 |
- | cs | 9 | 1 | 2 | 1464 | 176 | 563 |
- | da | 9 | 1 | 2 | 1455 | 164 | 550 |
- | de | 9 | 1 | 2 | 1457 | 166 | 558 |
- | el | 9 | 1 | 2 | 1529 | 174 | 584 |
- | en | 9 | 1 | 2 | 893 | 98 | 408 |
- | es | 7 | 1 | 1 | 806 | 248 | 155 |
- | et | 9 | 1 | 2 | 1391 | 163 | 516 |
- | fi | 9 | 1 | 2 | 1398 | 187 | 531 |
- | fr | 9 | 1 | 2 | 1297 | 97 | 490 |
- | ga | 9 | 1 | 2 | 1383 | 165 | 515 |
- | hu | 9 | 1 | 2 | 1390 | 171 | 525 |
- | it | 9 | 1 | 2 | 1411 | 162 | 550 |
- | lt | 9 | 1 | 2 | 1413 | 173 | 548 |
- | lv | 9 | 1 | 2 | 1383 | 167 | 553 |
- | mt | 9 | 1 | 2 | 937 | 93 | 442 |
- | nl | 9 | 1 | 2 | 1391 | 164 | 530 |
- | pt | 9 | 1 | 2 | 1086 | 105 | 390 |
- | ro | 9 | 1 | 2 | 1480 | 175 | 557 |
- | sk | 9 | 1 | 2 | 1395 | 165 | 526 |
- | sv | 9 | 1 | 2 | 1453 | 175 | 539 |
-
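The 9/1/2 file counts above (7/1/1 for Spanish) come from an 80%/10%/10% split over each language's annotated EUR-Lex files. A small sketch of that arithmetic (not part of the original card), mirroring the `np.split` call in `convert_to_hf_dataset.py` below; the file names here are placeholders:

```python
import numpy as np

# 12 annotated EUR-Lex files per language (9 for Spanish); names are hypothetical.
file_names = np.array([f"doc_{i}" for i in range(12)])
train, validation, test = np.split(file_names, [int(.8 * len(file_names)), int(.9 * len(file_names))])
print(len(train), len(validation), len(test))  # with 12 files: 9 1 2; with the 9 Spanish files it would be 7 1 1
```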
- ## Dataset Creation
-
- ### Curation Rationale
-
- *„[…] to our knowledge, there exist no open resources annotated for NERC [Named Entity Recognition and Classification] in Spanish in the legal domain. With the
- present contribution, we intend to fill this gap. With the release of the created resources for fine-tuning and
- evaluation of sensitive entities detection in the legal domain, we expect to encourage the development of domain-adapted
- anonymisation tools for Spanish in this field“* (de Gibert Bonet et al., 2022)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- The dataset consists of documents taken from the EUR-Lex corpus, which is publicly available. No further
- information on the data collection process is given in de Gibert Bonet et al. (2022).
-
- #### Who are the source language producers?
-
- The source language producers are presumably lawyers.
-
- ### Annotations
-
- #### Annotation process
-
- *"The annotation scheme consists of a complex two level hierarchy adapted to the legal domain, it follows the scheme
- described in (Gianola et al., 2020) […] Level 1 entities refer to general categories (PERSON, DATE, TIME, ADDRESS...)
- and level 2 entities refer to more fine-grained subcategories (given name, personal name, day, year, month...). Eur-Lex,
- CPP and DE have been annotated following this annotation scheme […] The manual annotation was performed using
- INCePTION (Klie et al., 2018) by a sole annotator following the guidelines provided by the MAPA consortium."* (de Gibert
- Bonet et al., 2022)
-
- #### Who are the annotators?
-
- Only one annotator conducted the annotation. No further information is provided in de Gibert Bonet et al. (2022).
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- Note that the dataset at hand represents only a small portion of a bigger corpus, as described in de Gibert Bonet et al.
- (2022). At the time of writing, only the annotated documents from the EUR-Lex corpus were available.
-
- Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton
- Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset
- consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the
- dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition,
- differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to
- have a look at the conversion script `convert_to_hf_dataset.py` in order to retrace the steps for converting the
- original dataset into the present jsonl format. For further information on the original dataset structure, we refer to
- the bibliographical references and the original GitHub repositories and/or web pages provided in this dataset card.
-
- ## Additional Information
-
- ### Dataset Curators
-
- The names of the original dataset curators and creators can be found in the references given below, in the section *Citation
- Information*. Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch);
- [GitHub](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch);
- [GitHub](https://github.com/kapllan)).
-
- ### Licensing Information
-
- [Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
-
- ### Citation Information
-
- ```
- @article{DeGibertBonet2022,
- author = {{de Gibert Bonet}, Ona and {Garc{\'{i}}a Pablos}, Aitor and Cuadros, Montse and Melero, Maite},
- journal = {Proceedings of the Language Resources and Evaluation Conference},
- number = {June},
- pages = {3751--3760},
- title = {{Spanish Datasets for Sensitive Entity Detection in the Legal Domain}},
- url = {https://aclanthology.org/2022.lrec-1.400},
- year = {2022}
- }
- ```
-
- ### Contributions
-
- Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this
- dataset.
convert_to_hf_dataset.py DELETED
@@ -1,220 +0,0 @@
- import os
- from glob import glob
- from pathlib import Path
-
- import numpy as np
- import pandas as pd
-
- from web_anno_tsv import open_web_anno_tsv
- from web_anno_tsv.web_anno_tsv import ReadException, Annotation
-
- pd.set_option('display.max_colwidth', None)
- pd.set_option('display.max_columns', None)
-
- annotation_labels = {'ADDRESS': ['building', 'city', 'country', 'place', 'postcode', 'street', 'territory'],
-                      'AMOUNT': ['unit', 'value'],
-                      'DATE': ['year', 'standard abbreviation', 'month', 'day of the week', 'day', 'calender event'],
-                      'PERSON': ['age', 'email', 'ethnic category', 'family name', 'financial', 'given name – female',
-                                 'given name – male',
-                                 'health insurance number', 'id document number', 'initial name', 'marital status',
-                                 'medical record number',
-                                 'nationality', 'profession', 'role', 'social security number', 'title', 'url'],
-                      'ORGANISATION': [],
-                      'TIME': [],
-                      'VEHICLE': ['build year', 'colour', 'license plate number', 'model', 'type']}
-
- # make all labels upper case
- annotation_labels = {key.upper(): [label.upper() for label in labels] for key, labels in annotation_labels.items()}
- print(annotation_labels)
- print("coarse_grained:", list(annotation_labels.keys()))
- print("fine_grained:",
-       [finegrained for finegrained in [finegrained_list for finegrained_list in annotation_labels.values()]])
-
- base_path = Path("extracted")
-
- # TODO future work can add these datasets too to make it larger
- special_paths = {
-     "EL": ["EL/ANNOTATED_DATA/LEGAL/AREIOSPAGOS1/annotated/full_dataset"],
-     "EN": ["EN/ANNOTATED_DATA/ADMINISTRATIVE-LEGAL/annotated/full_dataset"],
-     "FR": ["FR/ANNOTATED_DATA/LEGAL/COUR_CASSATION1/annotated/full_dataset/Civil",
-            "FR/ANNOTATED_DATA/LEGAL/COUR_CASSATION1/annotated/full_dataset/Commercial",
-            "FR/ANNOTATED_DATA/LEGAL/COUR_CASSATION1/annotated/full_dataset/Criminal",
-            "FR/ANNOTATED_DATA/LEGAL/COUR_CASSATION2/annotated/full_dataset",
-            "FR/ANNOTATED_DATA/MEDICAL/CAS1/annotated/full_dataset"],
-     "IT": ["IT/ANNOTATED_DATA/Corte_Suprema_di_Cassazione/annotated"],
-     "MT": ["MT/ANNOTATED_DATA/ADMINISTRATIVE/annotated/full_dataset",
-            "MT/ANNOTATED_DATA/GENERAL_NEWS/News_1/annotated/full_dataset",
-            "MT/ANNOTATED_DATA/LEGAL/Jurisprudence_1/annotated/full_dataset"],
- }
-
-
- def get_path(language):
-     return base_path / language / "ANNOTATED_DATA/EUR_LEX/annotated/full_dataset"
-
-
- def get_coarse_grained_for_fine_grained(label):
-     for coarse_grained, fine_grained_set in annotation_labels.items():
-         if label in fine_grained_set:
-             return coarse_grained
-     return None  # raise ValueError(f"Did not find fine_grained label {label}")
-
-
- def is_fine_grained(label):
-     for coarse_grained, fine_grained_set in annotation_labels.items():
-         if label.upper() in fine_grained_set:
-             return True
-     return False
-
-
- def is_coarse_grained(label):
-     return label.upper() in annotation_labels.keys()
-
-
- class HashableAnnotation(Annotation):
-     def __init__(self, annotation):
-         super()
-         self.label = annotation.label
-         self.start = annotation.start
-         self.stop = annotation.stop
-         self.text = annotation.text
-
-     def __eq__(self, other):
-         return self.label == other.label and self.start == other.start and self.stop == other.stop and self.text == other.text
-
-     def __hash__(self):
-         return hash(('label', self.label, 'start', self.start, 'stop', self.stop, 'text', self.text))
-
-
- def get_token_annotations(token, annotations):
-     annotations = list(dict.fromkeys([HashableAnnotation(ann) for ann in annotations]))  # remove duplicate annotations
-     coarse_grained = "O"
-     fine_grained = "o"
-     for annotation in annotations:
-         label = annotation.label
-         # if token.start == annotation.start and token.stop == annotation.stop:  # fine_grained annotation
-         if token.start >= annotation.start and token.stop <= annotation.stop:  # coarse_grained annotation
-             # we don't support multilabel annotations for each token for simplicity.
-             # So when a token already has an annotation for either coarse or fine grained, we don't assign new ones.
-             if coarse_grained == "O" and is_coarse_grained(label):
-                 coarse_grained = label
-             elif fine_grained == "o" and is_fine_grained(label):
-                 # some DATE are mislabeled as day but it is hard to correct this. So we ignore it
-                 fine_grained = label
-
-     return coarse_grained.upper(), fine_grained.upper()
-
-
- def generate_IOB_labelset(series, casing_function):
-     last_ent = ""
-     new_series = []
-     for ent in series:
-         if ent in ["o", "O"]:
-             ent_to_add = ent
-         else:
-             if ent != last_ent:  # we are the first one
-                 ent_to_add = "B-" + ent
-             else:
-                 ent_to_add = "I-" + ent
-         new_series.append(casing_function(ent_to_add))
-         last_ent = ent
-     return new_series
-
-
- def get_annotated_sentence(result_sentence, sentence):
-     result_sentence["tokens"] = []
-     result_sentence["coarse_grained"] = []
-     result_sentence["fine_grained"] = []
-     for k, token in enumerate(sentence.tokens):
-         coarse_grained, fine_grained = get_token_annotations(token, sentence.annotations)
-         token = token.text.replace(u'\xa0', u' ').strip()  # replace non-breaking spaces
-         if token:  # remove empty tokens (tokens that only consisted of whitespace)
-             result_sentence["tokens"].append(token)
-             result_sentence["coarse_grained"].append(coarse_grained)
-             result_sentence["fine_grained"].append(fine_grained)
-     result_sentence["coarse_grained"] = generate_IOB_labelset(result_sentence["coarse_grained"], str.upper)
-     result_sentence["fine_grained"] = generate_IOB_labelset(result_sentence["fine_grained"], str.upper)
-     return result_sentence
-
-
- languages = sorted([Path(file).stem for file in glob(str(base_path / "*"))])
-
-
- def parse_files(language):
-     data_path = get_path(language.upper())
-     result_sentences = []
-     not_parsable_files = 0
-     file_names = sorted(list(glob(str(data_path / "*.tsv"))))
-     for file in file_names:
-         try:
-             with open_web_anno_tsv(file) as f:
-                 for i, sentence in enumerate(f):
-                     result_sentence = {"language": language, "type": "EUR-LEX",
-                                        "file_name": Path(file).stem, "sentence_number": i}
-                     result_sentence = get_annotated_sentence(result_sentence, sentence)
-                     result_sentences.append(result_sentence)
-             print(f"Successfully parsed file {file}")
-         except ReadException as e:
-             print(f"Could not parse file {file}")
-             not_parsable_files += 1
-     print("Not parsable files: ", not_parsable_files)
-     return pd.DataFrame(result_sentences), not_parsable_files
-
-
- stats = []
- train_dfs, validation_dfs, test_dfs = [], [], []
- for language in languages:
-     language = language.lower()
-     print(f"Parsing language {language}")
-     df, not_parsable_files = parse_files(language)
-     file_names = df.file_name.unique()
-
-     # df.coarse_grained.apply(lambda x: print(set(x)))
-
-     # split by file_name
-     num_fn = len(file_names)
-     train_fn, validation_fn, test_fn = np.split(np.array(file_names), [int(.8 * num_fn), int(.9 * num_fn)])
-
-     lang_train = df[df.file_name.isin(train_fn)]
-     lang_validation = df[df.file_name.isin(validation_fn)]
-     lang_test = df[df.file_name.isin(test_fn)]
-
-     train_dfs.append(lang_train)
-     validation_dfs.append(lang_validation)
-     test_dfs.append(lang_test)
-
-     lang_stats = {"language": language}
-
-     lang_stats["# train files"] = len(train_fn)
-     lang_stats["# validation files"] = len(validation_fn)
-     lang_stats["# test files"] = len(test_fn)
-
-     lang_stats["# train sentences"] = len(lang_train.index)
-     lang_stats["# validation sentences"] = len(lang_validation.index)
-     lang_stats["# test sentences"] = len(lang_test.index)
-
-     stats.append(lang_stats)
-
- stat_df = pd.DataFrame(stats)
- print(stat_df.to_markdown(index=False))
-
- train = pd.concat(train_dfs)
- validation = pd.concat(validation_dfs)
- test = pd.concat(test_dfs)
-
- df = pd.concat([train, validation, test])
- print(f"The final coarse grained tagset (in IOB notation) is the following: "
-       f"`{list(df.coarse_grained.explode().unique())}`")
- print(f"The final fine grained tagset (in IOB notation) is the following: "
-       f"`{list(df.fine_grained.explode().unique())}`")
-
-
- # save splits
- def save_splits_to_jsonl(config_name):
-     # save to jsonl files for huggingface
-     if config_name: os.makedirs(config_name, exist_ok=True)
-     train.to_json(os.path.join(config_name, "train.jsonl"), lines=True, orient="records", force_ascii=False)
-     validation.to_json(os.path.join(config_name, "validation.jsonl"), lines=True, orient="records", force_ascii=False)
-     test.to_json(os.path.join(config_name, "test.jsonl"), lines=True, orient="records", force_ascii=False)
-
-
- save_splits_to_jsonl("")
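For reference, a small illustration (not part of the original script) of the IOB conversion performed by `generate_IOB_labelset`: runs of identical labels become a `B-`/`I-` sequence, while `O` tokens stay outside any entity. It assumes the function defined above is in scope (e.g. the snippet is appended to the same module).

```python
# Per-token labels as produced by get_token_annotations for one sentence.
labels = ["O", "PERSON", "PERSON", "O", "DATE", "DATE"]
print(generate_IOB_labelset(labels, str.upper))
# -> ['O', 'B-PERSON', 'I-PERSON', 'O', 'B-DATE', 'I-DATE']
```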
validation.jsonl → joelito--mapa/json-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a9b1a294688da296414c8042d4a18deb7a8f93006085f197074fe9a3a9b65abc
- size 2874796
+ oid sha256:ee64c067dd8cbcb368c18ff815c7e84ee55ed7738a02b5226c948342118111df
+ size 1247012
test.jsonl → joelito--mapa/json-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9498f67f080b1745b5b41feb32f66beb30e2bd2ca024306a910a7c59c3134d14
- size 7717849
+ oid sha256:233b1a35c7d1a9974d1d091ba824b16456a29e82ea80a9f8343b24f61f3da706
+ size 3313905
train.jsonl → joelito--mapa/json-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f2e8da9572e8bdfd699c998803d40b9680b8c6b92ea3356e8786d675cc5a19e4
- size 22116076
+ oid sha256:f228b663a6355b2c2b5c01027e4a066b3b7e80572e8ed5e0c718156353efa125
+ size 475237
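The renames above replace the LFS-tracked jsonl splits with parquet files under `joelito--mapa/`. A minimal reading sketch (assuming the parquet files have been downloaded locally with that directory layout; pandas needs `pyarrow` or `fastparquet` installed to read parquet):

```python
import pandas as pd

splits = {
    "train": "joelito--mapa/json-train.parquet",
    "validation": "joelito--mapa/json-validation.parquet",
    "test": "joelito--mapa/json-test.parquet",
}

# Read each split and report its size and columns.
for name, path in splits.items():
    df = pd.read_parquet(path)
    print(name, len(df), list(df.columns))
```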