ArneBinder committed
Commit 1015ee3
Parent(s): 2ca8c28

Upload 3 files

Files changed (3):
1. README.md (+261 -0)
2. aae2.py (+182 -0)
3. requirements.txt (+2 -0)
README.md ADDED
@@ -0,0 +1,261 @@
# PIE Dataset Card for "aae2"

This is a [PyTorch-IE](https://github.com/ChristophAlt/pytorch-ie) wrapper for the Argument Annotated Essays v2 (AAE2) dataset ([paper](https://aclanthology.org/J17-3005.pdf) and [homepage](https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/2422)). Since the AAE2 dataset is published in the [BRAT standoff format](https://brat.nlplab.org/standoff.html), this dataset builder is based on the [PyTorch-IE brat dataset loading script](https://huggingface.co/datasets/pie/brat).

Therefore, the `aae2` dataset as described here follows the data structure from the [PIE brat dataset card](https://huggingface.co/datasets/pie/brat).

### Dataset Summary

The Argument Annotated Essays Corpus (AAEC) ([Stab and Gurevych, 2017](https://aclanthology.org/J17-3005.pdf)) contains student essays. A stance for a controversial theme is expressed by a major claim component as well as claim components, and premise components justify or refute the claims. Attack and support labels are defined as relations. Each span covers a statement *which can stand in isolation as a complete sentence*, according to the AAEC annotation guidelines. All components are annotated with the minimum boundaries of a clause or sentence, excluding so-called "shell" language such as *On the other hand* and *Hence* (Morio et al., 2022, p. 642).

No premise links to another premise or claim in a different paragraph, i.e., each argumentation tree structure is complete within a single paragraph. It is therefore possible to train a model either on full documents or at the paragraph level, which is usually less memory-intensive (Eger et al., 2017, p. 16).

### Supported Tasks and Leaderboards

- **Tasks**: Argumentation Mining, Component Identification, Component Classification, Structure Identification
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

The language in the dataset is English (persuasive essays).

### Dataset Variants

The `aae2` dataset comes in a single version (`default`) with `BratDocumentWithMergedSpans` as document type. Note that this is in contrast to the base brat dataset, where the document type for the `default` variant is `BratDocument`. The reason is that the AAE2 dataset has been published with single-fragment spans only. Since no fragments need to be merged, the document type `BratDocumentWithMergedSpans` is easier to handle for most of the task modules.

### Data Schema

See [PIE-Brat Data Schema](https://huggingface.co/datasets/pie/brat#data-schema).

### Usage

```python
from pie_datasets import load_dataset, builders

# load default version
datasets = load_dataset("pie/aae2")
doc = datasets["train"][0]
assert isinstance(doc, builders.brat.BratDocumentWithMergedSpans)
```
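
The loaded documents expose the BRAT annotation layers directly. For example, continuing the snippet above (a sketch; see the [PIE brat data schema](https://huggingface.co/datasets/pie/brat#data-schema) for the available fields):

```python
# inspect the first document's annotations
print(doc.text[:100])
for span in list(doc.spans)[:3]:
    print(span.label, repr(doc.text[span.start : span.end]))
for relation in list(doc.relations)[:3]:
    print(relation.label, relation.head.label, relation.tail.label)
```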

### Data Splits

| Statistics                                                   |                      Train |                     Test |
| ------------------------------------------------------------ | -------------------------: | -----------------------: |
| No. of documents                                              |                         322 |                       80 |
| Components <br/>- `MajorClaim`<br/>- `Claim`<br/>- `Premise` | <br/>598<br/>1202<br/>3023 | <br/>153<br/>304<br/>809 |
| Relations\*<br/>- `supports`<br/>- `attacks`                 |          <br/>3820<br/>405 |         <br/>1021<br/>92 |

\* includes all relations between premises and claims as well as all claim attributions (each claim's `for`/`against` stance counted as one relation).

See further statistics in Stab & Gurevych (2017), p. 650, Table A.1.

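The component statistics above can be recomputed from the loaded dataset (the relation rows additionally count the claim attributions, which live in the `span_attributes` layer); a minimal sketch, assuming the default `BratDocumentWithMergedSpans` variant:

```python
from collections import Counter

from pie_datasets import load_dataset

dataset = load_dataset("pie/aae2")
# count component labels per split
for split in ["train", "test"]:
    counts = Counter(span.label for doc in dataset[split] for span in doc.spans)
    print(split, counts)
```
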
### Label Descriptions

#### Components

| Components   | Count | Percentage |
| ------------ | ----: | ---------: |
| `MajorClaim` |   751 |     12.3 % |
| `Claim`      |  1506 |     24.7 % |
| `Premise`    |  3832 |     62.9 % |

- `MajorClaim` is the root node of the argumentation structure and represents the author's standpoint on the topic. Essay bodies either support or attack the author's standpoint expressed in the major claim. The major claim can be mentioned multiple times in a single document.
- `Claim` constitutes the central component of each argument. Each claim has at least one premise and takes the stance attribute value "for" or "against" with regard to the major claim.
- `Premise` gives the reasons of the argument; it is linked to either a claim or another premise.

**Note** that relations between `MajorClaim` and `Claim` were not annotated; however, each claim carries an `Attribute` annotation with value `for` or `against`, which indicates its relation to the `MajorClaim`. In addition, when two unrelated `Claim`s appear in the same paragraph, no relation between them is annotated either.

#### Relations

| Relations           | Count | Percentage |
| ------------------- | ----: | ---------: |
| support: `supports` |  3613 |     94.3 % |
| attack: `attacks`   |   219 |      5.7 % |

- "Each premise `p` has one **outgoing relation** (i.e., there is a relation that has `p` as source component) and none or several **incoming relations** (i.e., there can be a relation with `p` as target component)."
- "A `Claim` can exhibit several **incoming relations** but no **outgoing relation**." (Stab & Gurevych, 2017, p. 68)
- "The relations from the claims of the arguments to the major claim are dotted since we will not explicitly annotate them. The relation of each argument to the major claim is indicated by a stance attribute of each claim. This attribute can either be for or against as illustrated in figure 1.4." (Stab & Gurevych, *Guidelines for Annotating Argumentation Structures in Persuasive Essays*, 2015, p. 5)

See the further description in Stab & Gurevych (2017), p. 627, and the [annotation guideline](https://github.com/ArneBinder/pie-datasets/blob/db94035602610cefca2b1678aa2fe4455c96155d/data/datasets/ArgumentAnnotatedEssays-2.0/guideline.pdf).

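These structural constraints can be checked programmatically on the raw documents; a small sketch (assuming the default variant):

```python
from collections import Counter

from pie_datasets import load_dataset

dataset = load_dataset("pie/aae2")
for doc in dataset["train"]:
    premises = [span for span in doc.spans if span.label == "Premise"]
    outgoing = Counter(rel.head for rel in doc.relations)
    # every premise has exactly one outgoing relation ...
    assert all(outgoing[premise] == 1 for premise in premises)
    # ... while claims and major claims have none (claim stances are attributes instead)
    assert all(rel.head.label == "Premise" for rel in doc.relations)
```
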
### Document Converters

The dataset provides document converters for the following target document types:

- `pytorch_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations` with layers:
  - `labeled_spans`: `LabeledSpan` annotations, converted from `BratDocumentWithMergedSpans`'s `spans`
    - labels: `MajorClaim`, `Claim`, `Premise`
  - `binary_relations`: `BinaryRelation` annotations, converted from `BratDocumentWithMergedSpans`'s `relations`
    - there are two conversion methods that convert `Claim` attributes into relations to the `MajorClaim` (also see the label-count changes caused by this relation conversion [below](#label-statistics-after-document-conversion)):
      - `connect_first` (default setting):
        - builds a `supports` or `attacks` relation from each `Claim` to the first `MajorClaim`, depending on the `Claim`'s attribute (`for` or `against`), and
        - builds a `semantically_same` relation from each following `MajorClaim` to the first `MajorClaim`
      - `connect_all`:
        - builds a `supports` or `attacks` relation from each `Claim` to every `MajorClaim`
        - builds no relations between the `MajorClaim`s
    - labels: `supports`, `attacks`, and, in the case of `connect_first`, `semantically_same`
- `pytorch_ie.documents.TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions` with layers:
  - `labeled_spans`, as above
  - `binary_relations`, as above
  - `labeled_partitions`: `LabeledSpan` annotations, created by splitting the `BratDocumentWithMergedSpans`'s `text` at new lines (`\n`)
    - every partition is labeled as `paragraph`

See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py) for the document type definitions.

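To obtain the converted documents, the dataset can be cast to one of the target types; a sketch (using `to_document_type` from `pie_datasets`):

```python
from pie_datasets import load_dataset
from pytorch_ie.documents import TextDocumentWithLabeledSpansAndBinaryRelations

dataset = load_dataset("pie/aae2")
# apply the registered document converter (connect_first by default)
converted = dataset.to_document_type(TextDocumentWithLabeledSpansAndBinaryRelations)
doc = converted["train"][0]
print(doc.labeled_spans)
print(doc.binary_relations)
```
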
#### Label Statistics after Document Conversion

When converting from `BratDocumentWithMergedSpans` to `TextDocumentWithLabeledSpansAndBinaryRelations` or `TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions`,
we apply a relation-conversion method (see above) that changes the label counts for the relations as follows:

1. `connect_first` (default):

   | Relations                  | Count | Percentage |
   | -------------------------- | ----: | ---------: |
   | support: `supports`        |  4841 |     85.1 % |
   | attack: `attacks`          |   497 |      8.7 % |
   | other: `semantically_same` |   349 |      6.2 % |

2. `connect_all`:

   | Relations           | Count | Percentage |
   | ------------------- | ----: | ---------: |
   | support: `supports` |  5958 |     89.3 % |
   | attack: `attacks`   |   715 |     10.7 % |

These counts are consistent with the conversion rules: every premise keeps its single outgoing relation (3832 in total), `connect_first` adds one relation per `Claim` (1506) plus one `semantically_same` relation for each `MajorClaim` beyond the first in a document (751 - 402 = 349), and `connect_all` adds one relation for every (`Claim`, `MajorClaim`) pair within a document.

## Dataset Creation

### Curation Rationale

"The identification of argumentation structures involves several subtasks like separating argumentative from non-argumentative text units (Moens et al. 2007; Florou et al. 2013), classifying argument components into claims and premises (Mochales-Palau and Moens 2011; Rooney, Wang, and Browne 2012; Stab and Gurevych 2014b), and identifying argumentative relations (Mochales-Palau and Moens 2009; Peldszus 2014; Stab and Gurevych 2014b). However, an approach that covers all subtasks is still missing. Furthermore, most approaches operate locally and do not optimize the global argumentation structure.

"In addition to the lack of end-to-end approaches for parsing argumentation structures, there are relatively few corpora annotated with argumentation structures at the discourse-level." (p. 621)

"Our primary motivation for this work is to create argument analysis methods for argumentative writing support systems and to achieve a better understanding of argumentation structures." (p. 622)

### Source Data

Persuasive essays were collected from [essayforum.com](https://essayforum.com/) (see the essay prompts, along with the essay `id`s, [here](https://github.com/ArneBinder/pie-datasets/blob/db94035602610cefca2b1678aa2fe4455c96155d/data/datasets/ArgumentAnnotatedEssays-2.0/prompts.csv)).

#### Initial Data Collection and Normalization

"We randomly selected 402 English essays with a description of the writing prompt from essayforum.com. This online forum is an active community that provides correction and feedback about different texts such as research papers, essays, or poetry. For example, students post their essays in order to receive feedback about their writing skills while preparing for standardized language tests. The corpus includes 7,116 sentences with 147,271 tokens." (p. 630)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

The annotations were done using the BRAT Rapid Annotation Tool ([Stenetorp et al., 2012](https://aclanthology.org/E12-2021/)).

All three annotators independently annotated a random subset of 80 essays. The remaining 322 essays were annotated by the expert annotator.

The authors evaluated the inter-annotator agreement using observed agreement and Fleiss' κ (Fleiss 1971) for each label in each of the sub-tasks, namely component identification, component classification, and relation identification. The results are reported in Tables 2-4 of their [paper](https://aclanthology.org/J17-3005.pdf).

#### Who are the annotators?

Three non-native speakers, one of whom is the expert annotator.

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

"\[Computational Argumentation\] have broad application potential in various areas such as legal decision support (Mochales-Palau and Moens 2009), information retrieval (Carstens and Toni 2015), policy making (Sardianos et al. 2015), and debating technologies (Levy et al. 2014; Rinott et al. 2015)." (p. 619)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

The relations between claims and major claims are not explicitly annotated.

"The proportion of non-argumentative text amounts to 47,474 tokens (32.2%) and 1,631 sentences (22.9%). The number of sentences with several argument components is 583, of which 302 include several components with different types (e.g., a claim followed by premise)... \[T\]he identification of argument components requires the separation of argumentative from non-argumentative text units and the recognition of component boundaries at the token level... The proportion of paragraphs with unlinked argument components (e.g., unsupported claims without incoming relations) is 421 (23%). Thus, methods that link all argument components in a paragraph are only of limited use for identifying the argumentation structures in our corpus.

"Most of the arguments are convergent—that is, the depth of the argument is 1. The number of arguments with serial structure is 236 (20.9%)." (p. 634)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

**License**: [License description by TU Darmstadt](https://tudatalib.ulb.tu-darmstadt.de/bitstream/handle/tudatalib/2422/arg_annotated_essays_v2_license.pdf?sequence=2&isAllowed=y)

**Funding**: This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant no. I/82806 and by the German Federal Ministry of Education and Research (BMBF) as part of the Software Campus project AWS under grant no. 01IS12054.

### Citation Information

```
@article{stab2017parsing,
  title={Parsing argumentation structures in persuasive essays},
  author={Stab, Christian and Gurevych, Iryna},
  journal={Computational Linguistics},
  volume={43},
  number={3},
  pages={619--659},
  year={2017},
  publisher={MIT Press}
}
```

```
@misc{stab2017argument,
  title={Argument Annotated Essays (version 2)},
  author={Stab, Christian and Gurevych, Iryna},
  url={https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/2422},
  keywords={Argument Mining, 409-06 Informationssysteme, Prozess- und Wissensmanagement, 004},
  publisher={Technical University of Darmstadt},
  copyright={License description},
  year={2017}
}
```

### Contributions

Thanks to [@ArneBinder](https://github.com/ArneBinder) and [@idalr](https://github.com/idalr) for adding this dataset.
aae2.py ADDED
@@ -0,0 +1,182 @@
import os
from typing import Dict, List

import pandas as pd
from pie_modules.document.processing import RegexPartitioner
from pytorch_ie.annotations import BinaryRelation
from pytorch_ie.documents import (
    TextDocumentWithLabeledSpansAndBinaryRelations,
    TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions,
)

from pie_datasets.builders import BratBuilder
from pie_datasets.builders.brat import BratConfig, BratDocumentWithMergedSpans
from pie_datasets.core.dataset import DocumentConvertersType
from pie_datasets.document.processing import Caster, Converter, Pipeline


def get_split_paths(url_split_ids: str, subdirectory: str) -> Dict[str, List[str]]:
    # map each split name (e.g. "TRAIN") to the list of document paths it contains
    df_splits = pd.read_csv(url_split_ids, sep=";")
    splits2ids = df_splits.groupby(df_splits["SET"]).agg(list).to_dict()["ID"]
    return {
        split.lower(): [os.path.join(subdirectory, split_id) for split_id in split_ids]
        for split, split_ids in splits2ids.items()
    }


URL = "https://github.com/ArneBinder/pie-datasets/raw/83fb46f904b13f335b6da3cce2fc7004d802ce4e/data/datasets/ArgumentAnnotatedEssays-2.0/brat-project-final.zip"
URL_SPLIT_IDS = "https://raw.githubusercontent.com/ArneBinder/pie-datasets/83fb46f904b13f335b6da3cce2fc7004d802ce4e/data/datasets/ArgumentAnnotatedEssays-2.0/train-test-split.csv"
SPLIT_PATHS = get_split_paths(URL_SPLIT_IDS, subdirectory="brat-project-final")

DEFAULT_ATTRIBUTIONS_TO_RELATIONS_DICT = {"For": "supports", "Against": "attacks"}


def convert_aae2_claim_attributions_to_relations(
    document: BratDocumentWithMergedSpans,
    method: str,
    attributions_to_relations_mapping: Dict[str, str] = DEFAULT_ATTRIBUTIONS_TO_RELATIONS_DICT,
    major_claim_label: str = "MajorClaim",
    claim_label: str = "Claim",
    semantically_same_label: str = "semantically_same",
) -> TextDocumentWithLabeledSpansAndBinaryRelations:
    """This function collects the attributions of Claims from a BratDocumentWithMergedSpans and
    builds new relations between MajorClaims and Claims based on these attributions in the
    following way:
    1) "connect_first":
        Each Claim points to the first MajorClaim,
        and the other MajorClaim(s) are labeled as semantically same as the first MajorClaim.
        The number of new relations created is: NoOfMajorClaims - 1 + NoOfClaims.
    2) "connect_all":
        Each Claim points to every MajorClaim, creating many-to-many relations.
        The number of new relations created is: NoOfMajorClaims x NoOfClaims.

    The attributions are transformed into the relation labels as listed in the
    DEFAULT_ATTRIBUTIONS_TO_RELATIONS_DICT dictionary.
    """
    document = document.copy()
    new_document = TextDocumentWithLabeledSpansAndBinaryRelations(
        text=document.text, id=document.id, metadata=document.metadata
    )
    # detach the annotations from the source document and re-attach them to the new one
    spans = document.spans.clear()
    new_document.labeled_spans.extend(spans)
    relations = document.relations.clear()
    new_document.binary_relations.extend(relations)

    claim_attributes = [
        attribute
        for attribute in document.span_attributes
        if attribute.annotation.label == claim_label
    ]

    # get all MajorClaims,
    # sorted by start position to ensure the first MajorClaim is really the first one that occurs in the text
    major_claims = sorted(
        [mc for mc in new_document.labeled_spans if mc.label == major_claim_label],
        key=lambda span: span.start,
    )

    if method == "connect_first":
        if len(major_claims) > 0:
            first_major_claim = major_claims.pop(0)

            # add a relation between each Claim and the first MajorClaim
            for claim_attribute in claim_attributes:
                new_relation = BinaryRelation(
                    head=claim_attribute.annotation,
                    tail=first_major_claim,
                    label=attributions_to_relations_mapping[claim_attribute.value],
                )
                new_document.binary_relations.append(new_relation)

            # add relations between the remaining MajorClaims and the first one
            for majorclaim in major_claims:
                new_relation = BinaryRelation(
                    head=majorclaim,
                    tail=first_major_claim,
                    label=semantically_same_label,
                )
                new_document.binary_relations.append(new_relation)

    elif method == "connect_all":
        for major_claim in major_claims:
            for claim_attribute in claim_attributes:
                new_relation = BinaryRelation(
                    head=claim_attribute.annotation,
                    tail=major_claim,
                    label=attributions_to_relations_mapping[claim_attribute.value],
                )
                new_document.binary_relations.append(new_relation)

    else:
        raise ValueError(f"unknown method: {method}")

    return new_document


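# Example usage (a sketch): applying the converter manually to a document `doc` of type
# BratDocumentWithMergedSpans, as loaded in the README's usage section:
#
#   converted = convert_aae2_claim_attributions_to_relations(doc, method="connect_first")
#   assert isinstance(converted, TextDocumentWithLabeledSpansAndBinaryRelations)

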
def get_common_pipeline_steps(conversion_method: str) -> dict:
    return dict(
        convert=Converter(
            function=convert_aae2_claim_attributions_to_relations,
            method=conversion_method,
        ),
    )


class ArgumentAnnotatedEssaysV2Config(BratConfig):
    def __init__(self, conversion_method: str, **kwargs):
        """BuilderConfig for ArgumentAnnotatedEssaysV2.

        Args:
            conversion_method: either "connect_first" or "connect_all",
                see convert_aae2_claim_attributions_to_relations
            **kwargs: keyword arguments forwarded to super.
        """
        super().__init__(**kwargs)
        self.conversion_method = conversion_method


class ArgumentAnnotatedEssaysV2(BratBuilder):
    BASE_DATASET_PATH = "DFKI-SLT/brat"
    BASE_DATASET_REVISION = "bb8c37d84ddf2da1e691d226c55fef48fd8149b5"

    # we need to add None to the list of dataset variants to support the default dataset variant
    BASE_BUILDER_KWARGS_DICT = {
        dataset_variant: {"url": URL, "split_paths": SPLIT_PATHS}
        for dataset_variant in [BratBuilder.DEFAULT_CONFIG_NAME, None]
    }

    BUILDER_CONFIGS = [
        ArgumentAnnotatedEssaysV2Config(
            name=BratBuilder.DEFAULT_CONFIG_NAME,
            merge_fragmented_spans=True,
            conversion_method="connect_first",
        ),
    ]

    DOCUMENT_TYPES = {
        BratBuilder.DEFAULT_CONFIG_NAME: BratDocumentWithMergedSpans,
    }

    @property
    def document_converters(self) -> DocumentConvertersType:
        if self.config.name in ("default", None):
            return {
                TextDocumentWithLabeledSpansAndBinaryRelations: Pipeline(
                    **get_common_pipeline_steps(conversion_method=self.config.conversion_method)
                ),
                TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions: Pipeline(
                    **get_common_pipeline_steps(conversion_method=self.config.conversion_method),
                    cast=Caster(
                        document_type=TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions
                    ),
                    add_partitions=RegexPartitioner(
                        partition_layer_name="labeled_partitions",
                        default_partition_label="paragraph",
                        pattern="\n",
                        strip_whitespace=True,
                        verbose=False,
                    ),
                ),
            }
        else:
            raise ValueError(f"Unknown dataset variant: {self.config.name}")
requirements.txt ADDED
@@ -0,0 +1,2 @@
pie-datasets>=0.8.0,<0.9.0
pie-modules>=0.8.3,<0.9.0