stefan-it committed
Commit d42209f
1 parent: f678ae6

dataset: add initial working version of GermEval 2014

Files changed (1):
  1. germeval2014.py +226 -0

germeval2014.py ADDED
@@ -0,0 +1,226 @@
# coding=utf-8
# Copyright 2023 HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""GermEval 2014 NER Shared Task"""


import datasets

_CITATION = """\
@inproceedings{benikova14:_germev_named_entit_recog_shared_task,
  added-at = {2017-04-03T19:29:52.000+0000},
  address = {Hildesheim, Germany},
  author = {Benikova, Darina and Biemann, Chris and Kisselew, Max and Pad\'o, Sebastian},
  biburl = {https://puma.ub.uni-stuttgart.de/bibtex/2132d938a7afe8639e78156fb9d756b20/sp},
  booktitle = {Proceedings of the KONVENS GermEval workshop},
  interhash = {6cad5d4fdd6a07dbefad4221ba7d8d44},
  intrahash = {132d938a7afe8639e78156fb9d756b20},
  keywords = {myown workshop},
  pages = {104--112},
  timestamp = {2017-04-03T17:30:49.000+0000},
  title = {{GermEval 2014 Named Entity Recognition Shared Task: Companion Paper}},
  year = 2014
}
"""

_LICENSE = """\
By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
"""

_DESCRIPTION = """\
# Introduction

The GermEval 2014 NER Shared Task is an event that makes CC-licensed German data with NER annotation available, with
the goal of significantly advancing the state of the art in German NER and pushing the field of NER towards nested
representations of named entities.

The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation with the following
properties: the data was sampled from German Wikipedia and news corpora as a collection of citations. The dataset
covers over 31,000 sentences corresponding to over 590,000 tokens. The NER annotation uses the NoSta-D guidelines,
which extend the Tübingen Treebank guidelines, using four main NER categories with sub-structure and annotating
embeddings among NEs such as [ORG FC Kickers [LOC Darmstadt]].

# Dataset

## Labels

### Fine-grained labels indicating NER subtypes

German morphology is comparatively productive (at least when compared to English). There is a considerable amount of
word formation through both overt (non-zero) derivation and compounding, in particular for nouns. This gives rise to
morphologically complex words that are not identical to, but stand in a direct relation to, Named Entities. The Shared
Task corpus treats these as NE instances but marks them as special subtypes by introducing two fine-grained labels:
-deriv marks derivations from NEs such as englisch (“English”), and -part marks compounds that include an NE as a
subsequence, such as deutschlandweit (“Germany-wide”).

### Embedded markables

Almost all extant corpora with Named Entity annotation assume that NE annotation is “flat”, that is, each word in the
text can form part of at most one NE chunk. Clearly, this is an oversimplification. Consider the noun phrase Technische
Universität Darmstadt (“Technical University (of) Darmstadt”). It denotes an organization (label ORG), but also holds
another NE, Darmstadt, which is a location (label LOC). To account for such cases, the Shared Task corpus is annotated
with two levels of Named Entities. It captures at least one level of smaller NEs being embedded in larger NEs.

## Statistics

The data used for the GermEval 2014 NER Shared Task builds on the dataset annotated by Benikova et al. (2014). In this
dataset, sentences taken from German Wikipedia articles and online news were used as a collection of citations, then
annotated according to extended NoSta-D guidelines and eventually distributed under the CC-BY license. As already
described above, those guidelines use four main categories with sub-structure and nesting.

The distributed dataset contains overall more than 31,000 sentences with over 590,000 tokens. Those were divided in
the following way: the training set consists of 24,000 sentences, the development set of 2,200 sentences and the test
set of 5,100 sentences. The test set labels were not available to the participants until after the deadline. The
distribution of the categories over the whole dataset is shown in Table 1. Care was taken to ensure an even dispersion
of the categories across the subsets. The entire dataset contains over 41,000 NEs; about 7.8% of them are embedded in
other NEs (nested NEs), about 11.8% are derivations (deriv) and about 5.6% are parts of NEs concatenated with other
words (part).

## Format

The tab-separated format used in this dataset is similar to the CoNLL format. As illustrated in Table 2, the format
additionally contains token numbers per sentence in the first column and a comment line indicating source and date
before each sentence. The second column contains the tokens. The third column encodes the outer NE spans, the fourth
column the inner ones. The BIO scheme was used to encode the NE spans. In our challenge, further nested columns were
not considered.

## Summary

In summary, we distinguish between 12 classes of NEs: four main classes PERson, LOCation, ORGanisation, and OTHer, and
their subclasses, annotated at two levels (“inner” and “outer” chunks). The challenge of this setup is that while it
technically still allows a simple classification approach, it introduces a recursive structure that calls for the
application of more general machine learning or other automatic classification methods that go beyond plain sequence
tagging.
"""

_VERSION = "1.0.0"
_HOMEPAGE_URL = "https://sites.google.com/site/germeval2014ner/"


class GermEval2014Config(datasets.BuilderConfig):
    """BuilderConfig for GermEval 2014."""

    def __init__(self, **kwargs):
        """BuilderConfig for GermEval 2014.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(GermEval2014Config, self).__init__(**kwargs)

class GermEval2014(datasets.GeneratorBasedBuilder):
    """GermEval 2014 NER Shared Task."""

    BUILDER_CONFIGS = [
        GermEval2014Config(
            name="germeval2014", version=datasets.Version(_VERSION), description="GermEval 2014 NER Shared Task"
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "tokens": datasets.Sequence(datasets.Value("string")),
                    "ner_tags": datasets.Sequence(
                        datasets.features.ClassLabel(
                            names=[
                                "O",
                                "B-LOC",
                                "I-LOC",
                                "B-LOCderiv",
                                "I-LOCderiv",
                                "B-LOCpart",
                                "I-LOCpart",
                                "B-ORG",
                                "I-ORG",
                                "B-ORGderiv",
                                "I-ORGderiv",
                                "B-ORGpart",
                                "I-ORGpart",
                                "B-OTH",
                                "I-OTH",
                                "B-OTHderiv",
                                "I-OTHderiv",
                                "B-OTHpart",
                                "I-OTHpart",
                                "B-PER",
                                "I-PER",
                                "B-PERderiv",
                                "I-PERderiv",
                                "B-PERpart",
                                "I-PERpart",
                            ]
                        )
                    ),
                    "ner_t5_output": datasets.Value("string"),
                    "ner_own_output": datasets.Value("string"),
                }
            ),
            supervised_keys=None,
            license=_LICENSE,
            homepage=_HOMEPAGE_URL,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        # Import lazily so that flair is only required when the dataset is actually built.
        from flair.datasets import NER_GERMAN_GERMEVAL

        corpus = NER_GERMAN_GERMEVAL()

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"sentences": corpus.train}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"sentences": corpus.dev}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"sentences": corpus.test}),
        ]
    def _generate_examples(self, sentences):
        for counter, sentence in enumerate(sentences):
            original_spans = sentence.get_spans("ner")

            # Build the linearized span representations once per sentence, so that
            # multi-token spans are not emitted more than once.
            t5_spans = []
            own_spans = []
            for span in original_spans:
                span_text = " ".join(token.text for token in span.tokens)
                t5_spans.append(f"{span.tag} : {span_text}")
                own_spans.append(f"{span.tag} = {span_text}")

            # Encode the spans with the BIO scheme.
            original_tokens = []
            original_tags = []
            for token in sentence.tokens:
                original_tag = "O"
                for span in original_spans:
                    if token in span.tokens:
                        original_tag = "B-" + span.tag if token == span.tokens[0] else "I-" + span.tag
                original_tokens.append(token.text)
                original_tags.append(original_tag)

            yield counter, {
                "tokens": original_tokens,
                "ner_tags": original_tags,
                "ner_t5_output": " || ".join(t5_spans),
                "ner_own_output": " || ".join(own_spans),
            }
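The three output fields produced per sentence can be illustrated without flair. The sketch below uses a hypothetical helper (`encode_example` is not part of the script) and plain `(start, end, tag)` span tuples to show the encoding the generator emits: BIO `ner_tags`, plus the two linearized forms `"TAG : text"` joined by `" || "` for `ner_t5_output` and `"TAG = text"` for `ner_own_output`.

```python
def encode_example(tokens, spans):
    """Illustrative only. tokens: list of str; spans: list of (start, end_exclusive, tag)."""
    tags = ["O"] * len(tokens)
    t5_parts, own_parts = [], []
    for start, end, tag in spans:
        # BIO scheme: B- on the first token of a span, I- on the rest.
        tags[start] = f"B-{tag}"
        for i in range(start + 1, end):
            tags[i] = f"I-{tag}"
        span_text = " ".join(tokens[start:end])
        t5_parts.append(f"{tag} : {span_text}")
        own_parts.append(f"{tag} = {span_text}")
    return {
        "tokens": tokens,
        "ner_tags": tags,
        "ner_t5_output": " || ".join(t5_parts),
        "ner_own_output": " || ".join(own_parts),
    }

example = encode_example(
    ["FC", "Kickers", "Darmstadt", "gewinnt", "."],
    [(0, 3, "ORG")],
)
# example["ner_tags"]      → ["B-ORG", "I-ORG", "I-ORG", "O", "O"]
# example["ner_t5_output"] → "ORG : FC Kickers Darmstadt"
```

Note that this flat sketch, like the script itself, encodes only one annotation layer; the nested (inner) spans described in the dataset documentation are not represented in `ner_tags`.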