Tasks: Text Classification
Modalities: Text
Formats: csv
Sub-tasks: multi-class-classification
Languages: English
Size: 10K - 100K
License:
Commit cb59848 (parent: 193d9b3) by fkdosilovic: Update README.md

README.md CHANGED
# Dataset Card for DocEE Dataset

## Dataset Description

- **Homepage:**
- **Repository:** [DocEE Dataset repository](https://github.com/tongmeihan1995/docee)
- **Paper:** [DocEE: A Large-Scale and Fine-grained Benchmark for Document-level Event Extraction](https://aclanthology.org/2022.naacl-main.291/)

### Dataset Summary

DocEE is an English-language dataset containing more than 27k news and Wikipedia articles. It is primarily annotated and collected for large-scale document-level event extraction.
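Because the data is distributed as a Hugging Face dataset repository, it can presumably be loaded with the standard `datasets` API. A minimal sketch; the repository id below is a placeholder, not confirmed by this card:

```python
from datasets import load_dataset

# Placeholder id: substitute the actual <owner>/<name> of this dataset repository.
docee = load_dataset("fkdosilovic/docee")

# Prints the available splits and their sizes (train and test, per the card).
print(docee)
```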
### Data Fields

- `title`: TODO
- `text`: TODO
- `event_type`: TODO
- `date`: TODO
- `metadata`: TODO

**Note: this repo contains only the event detection portion of the dataset.**
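Assuming the loading sketch above, each example should expose exactly the fields listed in this section (several descriptions are still marked TODO). A quick way to inspect one record:

```python
from datasets import load_dataset

docee = load_dataset("fkdosilovic/docee")  # placeholder id, as above

example = docee["train"][0]
for field in ("title", "text", "event_type", "date", "metadata"):
    value = example[field]
    # Trim long strings (e.g. the full article text) so the preview stays readable.
    preview = value[:80] if isinstance(value, str) else value
    print(f"{field}: {preview}")
```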
### Data Splits

The dataset has 2 splits: _train_ and _test_. The train split contains 21,949 documents, while the test split contains 5,536 documents. In total, the dataset contains 27,485 documents classified into 59 event types.
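These figures can be checked directly against the loaded splits; a short sketch (same placeholder id as above):

```python
from datasets import load_dataset

docee = load_dataset("fkdosilovic/docee")  # placeholder id

print(docee["train"].num_rows)  # reported as 21949 in this card
print(docee["test"].num_rows)   # reported as 5536 in this card

# Number of distinct event types seen in the train split;
# the card reports 59 event types over the whole dataset.
print(len(set(docee["train"]["event_type"])))
```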
#### Differences from the original split(s)

Originally, the dataset was split into three splits: train, validation and test. For the purposes of this repository, the original splits were joined back together and divided into train and test splits, while making sure that the splits were stratified across document sources (news and wiki) and event types.
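The exact re-splitting script is not included in this card, but the described procedure (concatenate the original splits, then split again with stratification on source and event type) can be sketched with standard tools. The snippet below is illustrative only: the `source` and `event_type` column names, the pandas/scikit-learn tooling, and the seed are assumptions; the test fraction is derived from the split sizes reported above.

```python
import pandas as pd
from sklearn.model_selection import train_test_split


def stratified_resplit(docs: pd.DataFrame, test_size: float = 5536 / 27485, seed: int = 0):
    """Re-split documents into train/test, stratified on (source, event_type).

    `docs` is assumed to hold the original train/validation/test documents
    joined back together, with a 'source' column (news or wiki) and an
    'event_type' label column.
    """
    # Combine both attributes into a single stratification key so document
    # sources and event types keep roughly the same proportions in each split.
    strata = docs["source"].astype(str) + "|" + docs["event_type"].astype(str)
    return train_test_split(docs, test_size=test_size, stratify=strata, random_state=seed)
```

For example, `train_df, test_df = stratified_resplit(all_docs)` would produce a split of roughly the same shape, though not necessarily the exact document assignment used in this repository.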