Dataset: SciCo
Tasks: Token Classification
Modalities: Text
Sub-tasks: coreference-resolution
Languages: English
Size: < 1K
License: apache-2.0
parquet-converter committed
Commit 61d4e99
1 Parent(s): a41f973
Update parquet files
Files changed:
- .gitattributes +0 -27
- README.md +0 -174
- data.tar +0 -0
- default/scico-test.parquet +3 -0
- default/scico-train.parquet +3 -0
- default/scico-validation.parquet +3 -0
- scico.py +0 -86
.gitattributes
DELETED
@@ -1,27 +0,0 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
DELETED
@@ -1,174 +0,0 @@
---
annotations_creators:
- domain experts
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
task_categories:
- structure-prediction
task_ids:
- cross-document-coreference-resolution
- coreference-resolution
paperswithcode_id: scico
---

# Dataset Card for SciCo

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [SciCo homepage](https://scico.apps.allenai.org/)
- **Repository:** [SciCo repository](https://github.com/ariecattan/scico)
- **Paper:** [SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts](https://openreview.net/forum?id=OFLbgUP04nC)
- **Point of Contact:** [Arie Cattan](mailto:arie.cattan@gmail.com)

### Dataset Summary

SciCo consists of clusters of mentions in context and a hierarchy over them. The corpus is drawn from computer science papers, and the concept mentions are methods and tasks from across CS. Scientific concepts pose significant challenges: they often take diverse forms (e.g., class-conditional image synthesis and categorical image generation) or are ambiguous (e.g., network architecture in AI vs. systems research). To build SciCo, we develop a new candidate generation approach built on three resources: a low-coverage KB ([https://paperswithcode.com/](https://paperswithcode.com/)), a noisy hypernym extractor, and curated candidates.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

The text in the dataset is in English.

## Dataset Structure

### Data Instances

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Data Fields

* `flatten_tokens`: a single list of all tokens in the topic
* `flatten_mentions`: array of mentions; each mention is represented by [start, end, cluster_id]
* `tokens`: array of paragraphs
* `doc_ids`: doc_id of each paragraph in `tokens`
* `metadata`: metadata of each doc_id
* `sentences`: sentence boundaries for each paragraph in `tokens` [start, end]
* `mentions`: array of mentions; each mention is represented by [paragraph_id, start, end, cluster_id]
* `relations`: array of binary relations between cluster_ids [parent, child]
* `id`: id of the topic
* `hard_10` and `hard_20` (only in the test set): flags for the 10% and 20% hardest topics, based on Levenshtein similarity
* `source`: source of this topic: PapersWithCode (pwc), hypernym, or curated

### Data Splits

|           | Train | Validation |  Test |
|-----------|------:|-----------:|------:|
| Topics    |   221 |        100 |   200 |
| Documents |  9013 |       4120 |  8237 |
| Mentions  | 10925 |       4874 | 10424 |
| Clusters  |  4080 |       1867 |  3711 |
| Relations |  2514 |       1747 |  2379 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

## Additional Information

### Dataset Curators

This dataset was initially created by Arie Cattan, Sophie Johnson, Daniel Weld, Ido Dagan, Iz Beltagy, Doug Downey, and Tom Hope, while Arie was an intern at the Allen Institute for Artificial Intelligence.

### Licensing Information

This dataset is distributed under the [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0).

### Citation Information

```
@inproceedings{
cattan2021scico,
title={SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts},
author={Arie Cattan and Sophie Johnson and Daniel S. Weld and Ido Dagan and Iz Beltagy and Doug Downey and Tom Hope},
booktitle={3rd Conference on Automated Knowledge Base Construction},
year={2021},
url={https://openreview.net/forum?id=OFLbgUP04nC}
}
```

### Contributions

Thanks to [@ariecattan](https://github.com/ariecattan) for adding this dataset.
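For orientation, here is a minimal sketch of loading the dataset and walking the fields described in the card above. The repo id `ariecattan/SciCo` and the inclusive mention end index are assumptions, not confirmed by this commit.

```python
# Minimal sketch (not the authors' code): load SciCo from the Hub and
# inspect one topic. Assumes the repo id is "ariecattan/SciCo" and that
# mention spans [start, end, cluster_id] use an inclusive end index.
from datasets import load_dataset

ds = load_dataset("ariecattan/SciCo", split="validation")
topic = ds[0]

# Each entry of `flatten_mentions` indexes into `flatten_tokens`.
for start, end, cluster_id in topic["flatten_mentions"][:5]:
    mention = " ".join(topic["flatten_tokens"][start:end + 1])
    print(f"cluster {cluster_id}: {mention}")
```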
data.tar
DELETED
Binary file (9.99 MB)
default/scico-test.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7991804b59878f18697348b382a6f7e568c373ea843fdaab823d3b2f0c019414
size 4701609
default/scico-train.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6857c87aea132f105cd684ae0c0a517b64ac7fe1863b79f91ded64e32508ba1d
size 4964901
default/scico-validation.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9cce793b5076c20a4115a9b80cb4e39ab6d7e1dc2f83fc6dea71d04d7718456b
size 2340474
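The three added files are Git LFS pointers (version, oid, size), not the parquet bytes themselves; the actual data is fetched through LFS. A minimal sketch of reading one converted split directly, assuming a local clone with the LFS objects pulled:

```python
# Minimal sketch: read a converted split straight from its parquet file.
# Assumes the repo is cloned locally and `git lfs pull` has replaced the
# pointer files above with the real parquet data.
import pandas as pd

df = pd.read_parquet("default/scico-validation.parquet")
print(len(df), "topics")
print(df.columns.tolist())
```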
scico.py
DELETED
@@ -1,86 +0,0 @@
"""SciCo"""

import os
from datasets.arrow_dataset import DatasetTransformationNotAllowedError
from datasets.utils import metadata
import jsonlines
import datasets


_CITATION = """\
@inproceedings{
cattan2021scico,
title={SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts},
author={Arie Cattan and Sophie Johnson and Daniel S. Weld and Ido Dagan and Iz Beltagy and Doug Downey and Tom Hope},
booktitle={3rd Conference on Automated Knowledge Base Construction},
year={2021},
url={https://openreview.net/forum?id=OFLbgUP04nC}
}
"""

_DESCRIPTION = """\
SciCo is a dataset for hierarchical cross-document coreference resolution
over scientific papers in the CS domain.
"""

_DATA_URL = "./data.tar"


class Scico(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            homepage="https://scico.apps.allenai.org/",
            features=datasets.Features(
                {
                    "flatten_tokens": datasets.features.Sequence(datasets.features.Value("string")),
                    "flatten_mentions": datasets.features.Sequence(datasets.features.Sequence(datasets.features.Value("int32"), length=3)),
                    "tokens": datasets.features.Sequence(datasets.features.Sequence(datasets.features.Value("string"))),
                    "doc_ids": datasets.features.Sequence(datasets.features.Value("int32")),
                    "metadata": datasets.features.Sequence(
                        {
                            "title": datasets.features.Value("string"),
                            "paper_sha": datasets.features.Value("string"),
                            "fields_of_study": datasets.features.Value("string"),
                            "Year": datasets.features.Value("string"),
                            "BookTitle": datasets.features.Value("string"),
                            "url": datasets.features.Value("string"),
                        }
                    ),
                    "sentences": datasets.features.Sequence(datasets.features.Sequence(datasets.features.Sequence(datasets.features.Value("int32")))),
                    "mentions": datasets.features.Sequence(datasets.features.Sequence(datasets.features.Value("int32"), length=4)),
                    "relations": datasets.features.Sequence(datasets.features.Sequence(datasets.features.Value("int32"), length=2)),
                    "id": datasets.Value("int32"),
                    "source": datasets.Value("string"),
                    "hard_10": datasets.features.Value("bool"),
                    "hard_20": datasets.features.Value("bool"),
                    "curated": datasets.features.Value("bool"),
                }
            ),
            supervised_keys=None,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract(_DATA_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TEST, gen_kwargs={"filepath": os.path.join(data_dir, "test.jsonl")}
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION, gen_kwargs={"filepath": os.path.join(data_dir, "dev.jsonl")}
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": os.path.join(data_dir, "train.jsonl")}
            ),
        ]

    def _generate_examples(self, filepath):
        """This function returns the examples in the raw (text) form."""
        with jsonlines.open(filepath, "r") as f:
            for i, topic in enumerate(f):
                # Default the test-set-only flags so all splits share one schema.
                topic["hard_10"] = topic.get("hard_10", False)
                topic["hard_20"] = topic.get("hard_20", False)
                topic["curated"] = topic.get("curated", False)
                yield i, topic
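The deleted script declares `relations` as [parent, child] pairs of cluster ids. A small sketch of turning that feature into a parent-to-children lookup; `build_hierarchy` and the toy input are hypothetical, not part of the repo:

```python
# Minimal sketch (hypothetical helper): build a parent -> children map from
# a topic's `relations` feature ([parent, child] cluster-id pairs, as
# declared in the deleted script above).
from collections import defaultdict

def build_hierarchy(relations):
    children = defaultdict(list)
    for parent, child in relations:
        children[parent].append(child)
    return dict(children)

# Toy input: cluster 0 subsumes clusters 1 and 2; cluster 1 subsumes 3.
print(build_hierarchy([[0, 1], [0, 2], [1, 3]]))  # {0: [1, 2], 1: [3]}
```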