Tasks: Token Classification
Sub-tasks: coreference-resolution
Languages: English
Size: 1K<n<10K
add dataset_info in dataset metadata
README.md CHANGED
```diff
@@ -19,6 +19,42 @@ task_categories:
 task_ids:
 - coreference-resolution
 paperswithcode_id: gap
+dataset_info:
+  features:
+  - name: ID
+    dtype: string
+  - name: Text
+    dtype: string
+  - name: Pronoun
+    dtype: string
+  - name: Pronoun-offset
+    dtype: int32
+  - name: A
+    dtype: string
+  - name: A-offset
+    dtype: int32
+  - name: A-coref
+    dtype: bool
+  - name: B
+    dtype: string
+  - name: B-offset
+    dtype: int32
+  - name: B-coref
+    dtype: bool
+  - name: URL
+    dtype: string
+  splits:
+  - name: test
+    num_bytes: 1090462
+    num_examples: 2000
+  - name: train
+    num_bytes: 1095623
+    num_examples: 2000
+  - name: validation
+    num_bytes: 248329
+    num_examples: 454
+  download_size: 2401971
+  dataset_size: 2434414
 ---
 
 # Dataset Card for "gap"
@@ -198,4 +234,4 @@ The data fields are the same among all splits.
 
 ### Contributions
 
-Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@otakumesi](https://github.com/otakumesi), [@lewtun](https://github.com/lewtun) for adding this dataset.
+Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@otakumesi](https://github.com/otakumesi), [@lewtun](https://github.com/lewtun) for adding this dataset.
```
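As a quick sanity check on the added `dataset_info` block, the numbers are internally consistent: `dataset_size` equals the sum of the per-split `num_bytes`, while `download_size` is the size of the raw source files. A minimal sketch, using the values from the diff above; the record below is a hypothetical illustration of the schema's `*-offset` fields (character offsets into `Text`), not an actual GAP example:

```python
# Split sizes copied from the dataset_info block in the diff above.
splits = {
    "test": {"num_bytes": 1090462, "num_examples": 2000},
    "train": {"num_bytes": 1095623, "num_examples": 2000},
    "validation": {"num_bytes": 248329, "num_examples": 454},
}

# dataset_size is the total size of the prepared splits; it should equal
# the sum of the per-split num_bytes (download_size, 2401971, is separate:
# it measures the raw downloaded source files).
assert sum(s["num_bytes"] for s in splits.values()) == 2434414

# Hypothetical record (not from the dataset) showing how the *-offset
# features index into Text: slicing Text at the offset recovers the mention.
record = {
    "Text": "Alice told Bob that she would arrive late.",
    "Pronoun": "she", "Pronoun-offset": 20,
    "A": "Alice", "A-offset": 0, "A-coref": True,
    "B": "Bob", "B-offset": 11, "B-coref": False,
}
for name in ("Pronoun", "A", "B"):
    start = record[f"{name}-offset"]
    assert record["Text"][start:start + len(record[name])] == record[name]
```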