ArneBinder committed
https://github.com/ArneBinder/pie-datasets/pull/100

Files changed:
- README.md: +166 -16
- img/rtd-label_sciarg.png: +3 -0
- img/slt_sciarg.png: +3 -0
- img/tl_sciarg.png: +3 -0
- requirements.txt: +2 -2

README.md
CHANGED
Removed from the old README (the former usage example):

```python
from pie_datasets import load_dataset, builders

doc = datasets["train"][0]
assert isinstance(doc, builders.brat.BratDocument)
```

Also removed: a former section heading beginning "### Document".
Therefore, the `sciarg` dataset as described here follows the data structure from the [PIE brat dataset card](https://huggingface.co/datasets/pie/brat).

### Usage

```python
from pie_datasets import load_dataset
from pie_datasets.builders.brat import BratDocumentWithMergedSpans, BratDocument
from pytorch_ie.documents import (
    TextDocumentWithLabeledMultiSpansBinaryRelationsAndLabeledPartitions,
    TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions,
)

# load default version
dataset = load_dataset("pie/sciarg")
assert isinstance(dataset["train"][0], BratDocumentWithMergedSpans)

# if required, normalize the document type (see section Document Converters below)
dataset_converted = dataset.to_document_type(TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions)
assert isinstance(dataset_converted["train"][0], TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions)

# load version with resolved parts_of_same relations
dataset = load_dataset("pie/sciarg", name="resolve_parts_of_same")
assert isinstance(dataset["train"][0], BratDocument)

# if required, normalize the document type (see section Document Converters below)
dataset_converted = dataset.to_document_type(TextDocumentWithLabeledMultiSpansBinaryRelationsAndLabeledPartitions)
assert isinstance(dataset_converted["train"][0], TextDocumentWithLabeledMultiSpansBinaryRelationsAndLabeledPartitions)

# get the first relation in the first document
doc = dataset_converted["train"][0]
print(doc.binary_relations[0])
# BinaryRelation(head=LabeledMultiSpan(slices=((15071, 15076),), label='data', score=1.0), tail=LabeledMultiSpan(slices=((14983, 15062),), label='background_claim', score=1.0), label='supports', score=1.0)
print(doc.binary_relations[0].resolve())
# ('supports', (('data', ('[ 3 ]',)), ('background_claim', ('PSD and improved example-based schemes have been discussed in many publications',))))
```
### Dataset Summary

The SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., [2015](https://aclanthology.org/W15-1605.pdf), [2016](https://aclanthology.org/L16-1492.pdf)) with an annotation layer containing

See [PIE-Brat Data Schema](https://huggingface.co/datasets/pie/brat#data-schema).
### Document Converters

The dataset provides document converters for the following target document types:

- `pytorch_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations`
  - `LabeledSpans`, converted from `BratDocument`'s `spans`
    - labels: `background_claim`, `own_claim`, `data`
    - if a span contains leading and/or trailing whitespace, the whitespace is trimmed off.
  - `BinaryRelations`, converted from `BratDocument`'s `relations`
    - labels: `supports`, `contradicts`, `semantically_same`, `parts_of_same`
    - relations labeled `semantically_same` or `parts_of_same` are merged if they connect the same pair of arguments after sorting the arguments.
- `pytorch_ie.documents.TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions`
  - `LabeledSpans`, as above
  - `BinaryRelations`, as above
  - `LabeledPartitions`, created by partitioning the `BratDocument`'s `text` into paragraphs using a regular expression.
    - labels: `title`, `abstract`, `H1`

See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py) for the document type definitions.
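The two normalization behaviors described above (whitespace trimming of spans and merging of symmetric relations) can be sketched as follows. This is a simplified illustration working on plain tuples, not the actual pie-datasets converter code:

```python
# Simplified sketches of the converter behaviors described above
# (illustration only, not the actual pie-datasets converter code).

SYMMETRIC_LABELS = {"semantically_same", "parts_of_same"}


def trim_span(text: str, start: int, end: int) -> tuple[int, int]:
    """Move span offsets inward past leading/trailing whitespace."""
    while start < end and text[start].isspace():
        start += 1
    while end > start and text[end - 1].isspace():
        end -= 1
    return start, end


def merge_relations(relations):
    """Drop duplicate symmetric relations that connect the same
    arguments after sorting the argument pair."""
    seen, merged = set(), []
    for head, tail, label in relations:
        if label in SYMMETRIC_LABELS:
            key = (label, *sorted([head, tail]))
        else:
            key = (label, head, tail)
        if key not in seen:
            seen.add(key)
            merged.append((head, tail, label))
    return merged
```

For example, `trim_span(" data ", 0, 6)` returns `(1, 5)`, and two `semantically_same` relations that only differ in argument order collapse into one.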
### Data Splits
(*Annotation Guidelines*, pp. 4-6)

There are currently discrepancies in the label counts between

- the previous report in [Lauscher et al., 2018](https://aclanthology.org/W18-5206/), p. 43, and
- the current report above (labels counted over `BratDocument`s),

possibly because [Lauscher et al., 2018](https://aclanthology.org/W18-5206/) presents the numbers of the real argumentative components, whereas here discontinuous components are still split (marked with the `parts_of_same` helper relation) and are thus counted per fragment.
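The counting difference can be illustrated with a toy computation (hypothetical numbers, only to show the two counting schemes):

```python
# Toy illustration of the two counting schemes (hypothetical numbers):
# a discontinuous component is annotated as several fragments that are
# linked by the parts_of_same helper relation.
fragments_per_component = [1, 1, 2, 3]  # four real components, two discontinuous

# counting per fragment (as in the label counts above):
count_fragments = sum(fragments_per_component)
assert count_fragments == 7

# counting per real component (as in Lauscher et al., 2018):
count_components = len(fragments_per_component)
assert count_components == 4
```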
#### Examples

![sample1](img/leaannof3.png)

Below: Subset of relations in `A01`

![sample2](img/sciarg-sam.png)
### Collected Statistics after Document Conversion

We use the script `evaluate_documents.py` from [PyTorch-IE-Hydra-Template](https://github.com/ArneBinder/pytorch-ie-hydra-template-1) to generate these statistics.
After checking out that code, the statistics and plots can be generated with:

```commandline
python src/evaluate_documents.py dataset=sciarg_base metric=METRIC
```

From `default` version:
- `labeled_partitions`, `LabeledSpan` annotations, created by splitting the `BratDocument`'s `text` at new paragraphs in the `xml` format.
  - labels: `title`, `abstract`, `H1`

This also requires the following dataset config in `configs/dataset/sciarg_base.yaml` within the repository directory:

```yaml
_target_: src.utils.execute_pipeline
input:
  _target_: pie_datasets.DatasetDict.load_dataset
  path: pie/sciarg
  revision: 982d5682ba414ee13cf92cb93ec18fc8e78e2b81
```

For token-based metrics, `bert-base-uncased` from `transformers.AutoTokenizer` (see [AutoTokenizer](https://huggingface.co/docs/transformers/v4.37.1/en/model_doc/auto#transformers.AutoTokenizer) and [bert-base-uncased](https://huggingface.co/bert-base-uncased)) is used to tokenize the `text` in `TextDocumentWithLabeledSpansAndBinaryRelations` (see [document type](https://github.com/ArneBinder/pie-modules/blob/main/src/pie_modules/documents.py)).
#### Relation argument (outer) token distance per label

The distance is measured from the first token of the first argumentative unit to the last token of the last unit, a.k.a. the outer distance.

We collect the following statistics: number of documents in the split (*no. doc*), number of relations (*len*), mean token distance (*mean*), standard deviation of the distance (*std*), minimum outer distance (*min*), and maximum outer distance (*max*).
We also present histograms in the collapsible section below, showing the distribution of these relation distances (x-axis) and their counts (y-axis).
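As a sketch of this measure (an assumed helper for illustration, not the actual metric code), the outer distance between two argument spans given as end-exclusive token index ranges could be computed like this:

```python
# Sketch of the outer token distance (assumed helper, not the actual
# metric code): spans are (start, end) token index pairs, end-exclusive.
def outer_token_distance(head: tuple[int, int], tail: tuple[int, int]) -> int:
    start = min(head[0], tail[0])
    end = max(head[1], tail[1])
    return end - start

# e.g. a relation whose arguments span tokens 10-12 and 20-35:
assert outer_token_distance((10, 12), (20, 35)) == 25
```

The order of the arguments does not matter, which matches the symmetric reading of the "outer" distance.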
<details>
<summary>Command</summary>

```
python src/evaluate_documents.py dataset=sciarg_base metric=relation_argument_token_distances
```

</details>

|                   |   len |  max |    mean | min |     std |
| :---------------- | ----: | ---: | ------: | --: | ------: |
| ALL               | 15640 | 2864 |  30.524 |   3 |  45.351 |
| contradicts       |  1392 |  238 |  32.565 |   6 |  19.771 |
| parts_of_same     |  2594 |  374 |   28.18 |   3 |  26.845 |
| semantically_same |    84 | 2864 | 206.333 |  11 | 492.268 |
| supports          | 11570 |  407 |  29.527 |   4 |  24.189 |

<details>
<summary>Histogram (split: train, 40 documents)</summary>

![rtd-label_sciarg.png](img/rtd-label_sciarg.png)

</details>
#### Span lengths (tokens)

The span length is the number of tokens from the first to the last token of a particular unit.

We collect the following statistics: number of documents in the split (*no. doc*), number of spans (*len*), mean number of tokens in a span (*mean*), standard deviation of the number of tokens (*std*), minimum number of tokens in a span (*min*), and maximum number of tokens in a span (*max*).
We also present histograms in the collapsible section below, showing the distribution of span lengths (x-axis) and their counts (y-axis).

<details>
<summary>Command</summary>

```
python src/evaluate_documents.py dataset=sciarg_base metric=span_lengths_tokens
```

</details>

| statistics |  train |
| :--------- | -----: |
| no. doc    |     40 |
| len        |  13586 |
| mean       | 11.677 |
| std        |  8.731 |
| min        |      1 |
| max        |    138 |

<details>
<summary>Histogram (split: train, 40 documents)</summary>

![slt_sciarg.png](img/slt_sciarg.png)

</details>
#### Token length (tokens)

The token length is measured from the first token of the document to the last one.

We collect the following statistics: number of documents in the split (*no. doc*), mean document token length (*mean*), standard deviation of the length (*std*), minimum number of tokens in a document (*min*), and maximum number of tokens in a document (*max*).
We also present histograms in the collapsible section below, showing the distribution of document token lengths (x-axis) and their counts (y-axis).

<details>
<summary>Command</summary>

```
python src/evaluate_documents.py dataset=sciarg_base metric=count_text_tokens
```

</details>

| statistics |   train |
| :--------- | ------: |
| no. doc    |      40 |
| mean       | 10521.1 |
| std        |  2472.2 |
| min        |    6452 |
| max        |   16421 |

<details>
<summary>Histogram (split: train, 40 documents)</summary>

![tl_sciarg.png](img/tl_sciarg.png)

</details>

## Dataset Creation
img/rtd-label_sciarg.png
ADDED (Git LFS)

img/slt_sciarg.png
ADDED (Git LFS)

img/tl_sciarg.png
ADDED (Git LFS)
requirements.txt
CHANGED

@@ -1,3 +1,3 @@
-pie-datasets>=0.6.0,<0.
-pie-modules>=0.10.8,<0.
+pie-datasets>=0.6.0,<0.11.0
+pie-modules>=0.10.8,<0.12.0
 networkx>=3.0.0,<4.0.0