Datasets: Few-NERD

Tasks: Token Classification
Modalities: Text
Sub-tasks: named-entity-recognition
Languages: English
Size: 100K - 1M
Tags: structure-prediction
License: cc-by-sa-4.0

Commit b918e2b: parquet-converter committed "Update parquet files"
1 Parent(s): dbf9dd3
Files changed:
- .gitattributes (+9 -0)
- README.md (+0 -209)
- few-nerd.py (+0 -317)
- inter/few-nerd-test.parquet (+3 -0)
- inter/few-nerd-train.parquet (+3 -0)
- inter/few-nerd-validation.parquet (+3 -0)
- intra/few-nerd-test.parquet (+3 -0)
- intra/few-nerd-train.parquet (+3 -0)
- intra/few-nerd-validation.parquet (+3 -0)
- supervised/few-nerd-test.parquet (+3 -0)
- supervised/few-nerd-train.parquet (+3 -0)
- supervised/few-nerd-validation.parquet (+3 -0)
.gitattributes CHANGED
@@ -14,3 +14,12 @@
 *.pb filter=lfs diff=lfs merge=lfs -text
 *.pt filter=lfs diff=lfs merge=lfs -text
 *.pth filter=lfs diff=lfs merge=lfs -text
+inter/few-nerd-train.parquet filter=lfs diff=lfs merge=lfs -text
+inter/few-nerd-validation.parquet filter=lfs diff=lfs merge=lfs -text
+inter/few-nerd-test.parquet filter=lfs diff=lfs merge=lfs -text
+intra/few-nerd-train.parquet filter=lfs diff=lfs merge=lfs -text
+intra/few-nerd-validation.parquet filter=lfs diff=lfs merge=lfs -text
+intra/few-nerd-test.parquet filter=lfs diff=lfs merge=lfs -text
+supervised/few-nerd-train.parquet filter=lfs diff=lfs merge=lfs -text
+supervised/few-nerd-validation.parquet filter=lfs diff=lfs merge=lfs -text
+supervised/few-nerd-test.parquet filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,209 +0,0 @@

---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- other
task_ids:
- named-entity-recognition
paperswithcode_id: few-nerd
pretty_name: Few-NERD
tags:
- structure-prediction
---

# Dataset Card for "Few-NERD"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://ningding97.github.io/fewnerd/](https://ningding97.github.io/fewnerd/)
- **Repository:** [https://github.com/thunlp/Few-NERD](https://github.com/thunlp/Few-NERD)
- **Paper:** [https://aclanthology.org/2021.acl-long.248/](https://aclanthology.org/2021.acl-long.248/)
- **Point of Contact:** See [https://ningding97.github.io/fewnerd/](https://ningding97.github.io/fewnerd/)

### Dataset Summary

This script is for loading the Few-NERD dataset from https://ningding97.github.io/fewnerd/.

Few-NERD is a large-scale, fine-grained manually annotated named entity recognition dataset, which contains 8 coarse-grained types, 66 fine-grained types, 188,200 sentences, 491,711 entities, and 4,601,223 tokens. Three benchmark tasks are built: one is supervised (Few-NERD (SUP)) and the other two are few-shot (Few-NERD (INTRA) and Few-NERD (INTER)).

NER tags use the `IO` tagging scheme. The original data uses a 2-column CoNLL-style format, with empty lines to separate sentences. DOCSTART information is not provided since the sentences are randomly ordered.

For more details see https://ningding97.github.io/fewnerd/ and https://aclanthology.org/2021.acl-long.248/.
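The 2-column CoNLL-style layout described above can be sketched with a short parser. This is a minimal illustration, not part of the dataset: the sample sentences below are invented, and while the pairs are shown tab-separated here, the actual loader splits each line on any whitespace.

```python
# A minimal sketch of the 2-column CoNLL-style IO format: one
# "token<TAB>fine-tag" pair per line, empty lines separating sentences.
# The sample text is invented for illustration only.
raw = (
    "Paris\tlocation-GPE\n"
    "is\tO\n"
    "big\tO\n"
    "\n"
    "Hicks\tperson-actor\n"
    "acted\tO\n"
)

def read_sentences(text):
    """Group (token, tag) pairs into sentences at empty lines."""
    sentences, tokens, tags = [], [], []
    for line in text.splitlines():
        parts = line.split()
        if parts:
            tokens.append(parts[0])
            tags.append(parts[1])
        elif tokens:  # an empty line closes the current sentence
            sentences.append((tokens, tags))
            tokens, tags = [], []
    if tokens:  # last sentence if the file lacks a trailing empty line
        sentences.append((tokens, tags))
    return sentences

sentences = read_sentences(raw)
```

Note that under the `IO` scheme every token of an entity carries the plain type tag; there are no `B-`/`I-` prefixes to mark entity boundaries.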

### Supported Tasks and Leaderboards

- **Tasks:** Named Entity Recognition, Few-shot NER
- **Leaderboards:**
  - https://ningding97.github.io/fewnerd/
  - named-entity-recognition: https://paperswithcode.com/sota/named-entity-recognition-on-few-nerd-sup
  - other-few-shot-ner: https://paperswithcode.com/sota/few-shot-ner-on-few-nerd-intra
  - other-few-shot-ner: https://paperswithcode.com/sota/few-shot-ner-on-few-nerd-inter

### Languages

English

## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:**
  - `super`: 14.6 MB
  - `intra`: 11.4 MB
  - `inter`: 11.5 MB
- **Size of the generated dataset:**
  - `super`: 116.9 MB
  - `intra`: 106.2 MB
  - `inter`: 106.2 MB
- **Total amount of disk used:** 366.8 MB

An example of 'train' looks as follows.

```json
{
  "id": "1",
  "tokens": ["It", "starred", "Hicks", "'s", "wife", ",", "Ellaline", "Terriss", "and", "Edmund", "Payne", "."],
  "ner_tags": [0, 0, 7, 0, 0, 0, 7, 7, 0, 7, 7, 0],
  "fine_ner_tags": [0, 0, 51, 0, 0, 0, 50, 50, 0, 50, 50, 0]
}
```

### Data Fields

The data fields are the same among all splits.

- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `art` (1), `building` (2), `event` (3), `location` (4), `organization` (5), `other` (6), `person` (7), `product` (8).
- `fine_ner_tags`: a `list` of fine-grained classification labels, with possible values including `O` (0), `art-broadcastprogram` (1), `art-film` (2), ...
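Because the tag fields store positional class ids, decoding is a simple list lookup. A small sketch, using the coarse label order listed above and the `ner_tags` from the example instance:

```python
# Coarse labels in id order, as listed under "Data Fields".
COARSE_LABELS = ["O", "art", "building", "event", "location",
                 "organization", "other", "person", "product"]

# `ner_tags` from the example instance above.
ner_tags = [0, 0, 7, 0, 0, 0, 7, 7, 0, 7, 7, 0]
decoded = [COARSE_LABELS[i] for i in ner_tags]
# Every tagged token in that sentence is a `person` entity.
```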

### Data Splits

| Task  | Train  | Dev   | Test  |
| ----- | ------ | ----- | ----- |
| SUP   | 131767 | 18824 | 37648 |
| INTRA | 99519  | 19358 | 44059 |
| INTER | 130112 | 18817 | 14007 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/)

### Citation Information

```
@inproceedings{ding-etal-2021-nerd,
    title = "Few-{NERD}: A Few-shot Named Entity Recognition Dataset",
    author = "Ding, Ning  and
      Xu, Guangwei  and
      Chen, Yulin  and
      Wang, Xiaobin  and
      Han, Xu  and
      Xie, Pengjun  and
      Zheng, Haitao  and
      Liu, Zhiyuan",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.248",
    doi = "10.18653/v1/2021.acl-long.248",
    pages = "3198--3213",
}
```

### Contributions
few-nerd.py DELETED
@@ -1,317 +0,0 @@

import os
import json
import datasets
from tqdm import tqdm


_CITATION = """
@inproceedings{ding2021few,
  title={Few-NERD: A Few-Shot Named Entity Recognition Dataset},
  author={Ding, Ning and Xu, Guangwei and Chen, Yulin and Wang, Xiaobin and Han, Xu and Xie,
  Pengjun and Zheng, Hai-Tao and Liu, Zhiyuan},
  booktitle={ACL-IJCNLP},
  year={2021}
}
"""

_DESCRIPTION = """
Few-NERD is a large-scale, fine-grained manually annotated named entity recognition dataset,
which contains 8 coarse-grained types, 66 fine-grained types, 188,200 sentences, 491,711 entities
and 4,601,223 tokens. Three benchmark tasks are built, one is supervised: Few-NERD (SUP) and the
other two are few-shot: Few-NERD (INTRA) and Few-NERD (INTER).
"""

_LICENSE = "CC BY-SA 4.0"

# The original data files (zips of .txt) can be downloaded from Tsinghua cloud.
_URLs = {
    "supervised": "https://cloud.tsinghua.edu.cn/f/09265750ae6340429827/?dl=1",
    "intra": "https://cloud.tsinghua.edu.cn/f/a0d3efdebddd4412b07c/?dl=1",
    "inter": "https://cloud.tsinghua.edu.cn/f/165693d5e68b43558f9b/?dl=1",
}

# The label ids, coarse (NER_TAGS_DICT) and fine (FINE_NER_TAGS_DICT).
NER_TAGS_DICT = {
    "O": 0,
    "art": 1,
    "building": 2,
    "event": 3,
    "location": 4,
    "organization": 5,
    "other": 6,
    "person": 7,
    "product": 8,
}

FINE_NER_TAGS_DICT = {
    "O": 0,
    "art-broadcastprogram": 1,
    "art-film": 2,
    "art-music": 3,
    "art-other": 4,
    "art-painting": 5,
    "art-writtenart": 6,
    "building-airport": 7,
    "building-hospital": 8,
    "building-hotel": 9,
    "building-library": 10,
    "building-other": 11,
    "building-restaurant": 12,
    "building-sportsfacility": 13,
    "building-theater": 14,
    "event-attack/battle/war/militaryconflict": 15,
    "event-disaster": 16,
    "event-election": 17,
    "event-other": 18,
    "event-protest": 19,
    "event-sportsevent": 20,
    "location-GPE": 21,
    "location-bodiesofwater": 22,
    "location-island": 23,
    "location-mountain": 24,
    "location-other": 25,
    "location-park": 26,
    "location-road/railway/highway/transit": 27,
    "organization-company": 28,
    "organization-education": 29,
    "organization-government/governmentagency": 30,
    "organization-media/newspaper": 31,
    "organization-other": 32,
    "organization-politicalparty": 33,
    "organization-religion": 34,
    "organization-showorganization": 35,
    "organization-sportsleague": 36,
    "organization-sportsteam": 37,
    "other-astronomything": 38,
    "other-award": 39,
    "other-biologything": 40,
    "other-chemicalthing": 41,
    "other-currency": 42,
    "other-disease": 43,
    "other-educationaldegree": 44,
    "other-god": 45,
    "other-language": 46,
    "other-law": 47,
    "other-livingthing": 48,
    "other-medical": 49,
    "person-actor": 50,
    "person-artist/author": 51,
    "person-athlete": 52,
    "person-director": 53,
    "person-other": 54,
    "person-politician": 55,
    "person-scholar": 56,
    "person-soldier": 57,
    "product-airplane": 58,
    "product-car": 59,
    "product-food": 60,
    "product-game": 61,
    "product-other": 62,
    "product-ship": 63,
    "product-software": 64,
    "product-train": 65,
    "product-weapon": 66,
}


class FewNERDConfig(datasets.BuilderConfig):
    """BuilderConfig for FewNERD."""

    def __init__(self, **kwargs):
        """BuilderConfig for FewNERD.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(FewNERDConfig, self).__init__(**kwargs)


class FewNERD(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        FewNERDConfig(name="supervised", description="Fully supervised setting."),
        FewNERDConfig(
            name="inter",
            description="Few-shot setting. Each file contains all 8 coarse "
            "types but different fine-grained types.",
        ),
        FewNERDConfig(
            name="intra", description="Few-shot setting. Randomly split by coarse type."
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "tokens": datasets.features.Sequence(datasets.Value("string")),
                    # ClassLabel name order matches the id mappings above.
                    "ner_tags": datasets.features.Sequence(
                        datasets.features.ClassLabel(names=list(NER_TAGS_DICT))
                    ),
                    "fine_ner_tags": datasets.Sequence(
                        datasets.features.ClassLabel(names=list(FINE_NER_TAGS_DICT))
                    ),
                }
            ),
            supervised_keys=None,
            homepage="https://ningding97.github.io/fewnerd/",
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        url_to_download = dl_manager.download_and_extract(_URLs[self.config.name])
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": os.path.join(
                        url_to_download, self.config.name, "train.txt"
                    )
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "filepath": os.path.join(
                        url_to_download, self.config.name, "dev.txt"
                    )
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "filepath": os.path.join(
                        url_to_download, self.config.name, "test.txt"
                    )
                },
            ),
        ]

    def _generate_examples(self, filepath=None):
        # Only the plain-text CoNLL-style files are supported.
        assert filepath[-4:] == ".txt"

        num_lines = sum(1 for _ in open(filepath, encoding="utf-8"))
        id = 0

        with open(filepath, "r", encoding="utf-8") as f:
            tokens, ner_tags, fine_ner_tags = [], [], []
            for line in tqdm(f, total=num_lines):
                line = line.strip().split()

                if line:
                    assert len(line) == 2
                    token, fine_ner_tag = line
                    # The coarse tag is the prefix of the fine tag.
                    ner_tag = fine_ner_tag.split("-")[0]

                    tokens.append(token)
                    ner_tags.append(NER_TAGS_DICT[ner_tag])
                    fine_ner_tags.append(FINE_NER_TAGS_DICT[fine_ner_tag])

                elif tokens:
                    # An empty line closes the current sentence; emit a record.
                    record = {
                        "tokens": tokens,
                        "id": str(id),
                        "ner_tags": ner_tags,
                        "fine_ner_tags": fine_ner_tags,
                    }
                    tokens, ner_tags, fine_ner_tags = [], [], []
                    id += 1
                    yield record["id"], record

            # Emit the last sentence if the file lacks a trailing empty line.
            if tokens:
                record = {
                    "tokens": tokens,
                    "id": str(id),
                    "ner_tags": ner_tags,
                    "fine_ner_tags": fine_ner_tags,
                }
                yield record["id"], record
inter/few-nerd-test.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:01662fd19c0f018a80908c7a4acb2c7b3ef00bb363d84c65e811c48513dbce44
+size 1556929

inter/few-nerd-train.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:464594261b14c1d0f469cff33dfb7b3558aa3534b33a083a2d5135d92b7640a9
+size 16272950

inter/few-nerd-validation.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e1aafa61c7bb10301de791622cfdb42026d15b638622ba074429f4229d86c3ca
+size 2141578

intra/few-nerd-test.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1274ded7fe1a2aa59938ce88e72e31cc127afc91d2213ec2bdf787ecef0017f4
+size 4757523

intra/few-nerd-train.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:37dd75e6bb4c0e0a30634aaf1a25a7c2ff906a0dafc95dd6a3dabdab27d8eab2
+size 12716628

intra/few-nerd-validation.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba717cd911e5e4af44d220b6fdb02e66019d11dcc1c94bc176c5ca90e9b781ae
+size 2198836

supervised/few-nerd-test.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9226b3c215a5ae5e1fd877857b46eb735d181b847c1a13ef3ac67297553e1b04
+size 4836842

supervised/few-nerd-train.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96e9c522dccadb8333f04dbed841ff513a637c8fa4a8553d48914a01f71fcd1c
+size 16920044

supervised/few-nerd-validation.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3e076e7da822a818a3fad0c8b2c29138015fa03c5cb2196a6ac1a50b59cd17e2
+size 2430872
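The added blobs are git-LFS pointer files, not the parquet bytes themselves; the actual data is fetched by content hash at checkout. A small sketch parsing the first pointer above (the three `key value` fields follow the git-LFS pointer layout):

```python
# Content copied from the inter/few-nerd-test.parquet pointer above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:01662fd19c0f018a80908c7a4acb2c7b3ef00bb363d84c65e811c48513dbce44
size 1556929
"""

def parse_lfs_pointer(text):
    """Return the fields of a git-LFS pointer file as a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],  # pointer spec URL
        "algo": algo,                  # hash algorithm, e.g. sha256
        "digest": digest,              # content hash of the real blob
        "size": int(fields["size"]),   # size of the real blob in bytes
    }

info = parse_lfs_pointer(pointer)
```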