Dataset tags:
- Tasks: Token Classification
- Sub-tasks: named-entity-recognition
- Languages: English
- Size: 1K<n<10K
- License:
Commit: Update files from the datasets library (from 1.11.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.11.0
README.md (CHANGED):
```diff
@@ -1,7 +1,24 @@
 ---
+annotations_creators:
+- crowdsourced
+language_creators:
+- found
 languages:
 - en
+licenses:
+- cc-by-4-0
+multilinguality:
+- monolingual
 paperswithcode_id: wnut-2017-emerging-and-rare-entity
+pretty_name: WNUT 17
+size_categories:
+- 1K<n<10K
+source_datasets:
+- original
+task_categories:
+- structure-prediction
+task_ids:
+- named-entity-recognition
 ---
 
 # Dataset Card for "wnut_17"
@@ -62,12 +79,8 @@ The goal of this task is to provide a definition of emerging and of rare entitie
 
 ## Dataset Structure
 
-We show detailed information for up to 5 configurations of the dataset.
-
 ### Data Instances
 
-#### wnut_17
-
 - **Size of downloaded dataset files:** 0.76 MB
 - **Size of the generated dataset:** 1.66 MB
 - **Total amount of disk used:** 2.43 MB
@@ -83,18 +96,29 @@ An example of 'train' looks as follows.
 
 ### Data Fields
 
-The data fields are the same among all splits
-
-
-- `
-
-
+The data fields are the same among all splits:
+- `id` (`string`): ID of the example.
+- `tokens` (`list` of `string`): Tokens of the example text.
+- `ner_tags` (`list` of class labels): NER tags of the tokens (using IOB2 format), with possible values:
+  - 0: `O`
+  - 1: `B-corporation`
+  - 2: `I-corporation`
+  - 3: `B-creative-work`
+  - 4: `I-creative-work`
+  - 5: `B-group`
+  - 6: `I-group`
+  - 7: `B-location`
+  - 8: `I-location`
+  - 9: `B-person`
+  - 10: `I-person`
+  - 11: `B-product`
+  - 12: `I-product`
 
 ### Data Splits
 
-
-
-
+|train|validation|test|
+|----:|---------:|---:|
+| 3394|      1009|1287|
 
 ## Dataset Creation
 
```
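The `ner_tags` field stores integer class ids over the IOB2 label set listed in the card (`B-` opens an entity, `I-` continues it, `O` is outside). A minimal sketch of decoding those ids back into entity spans; the `decode_iob2` helper and the example sentence are illustrative, not part of the dataset or the `datasets` library:

```python
# Label list exactly as declared for `ner_tags` in the dataset card (IOB2 scheme).
LABELS = [
    "O",
    "B-corporation", "I-corporation",
    "B-creative-work", "I-creative-work",
    "B-group", "I-group",
    "B-location", "I-location",
    "B-person", "I-person",
    "B-product", "I-product",
]

def decode_iob2(tokens, tag_ids):
    """Group parallel token/tag-id lists into (entity_text, entity_type) spans."""
    spans, current_tokens, current_type = [], [], None
    for token, tag_id in zip(tokens, tag_ids):
        label = LABELS[tag_id]
        if label.startswith("B-"):
            # A B- tag always starts a new span, closing any open one.
            if current_tokens:
                spans.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [token], label[2:]
        elif label.startswith("I-") and current_type == label[2:]:
            # I- continues the open span of the same type.
            current_tokens.append(token)
        else:
            # "O", or an I- tag with no matching open span (simplification:
            # such orphan I- tokens are dropped rather than repaired).
            if current_tokens:
                spans.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [], None
    if current_tokens:
        spans.append((" ".join(current_tokens), current_type))
    return spans

# Example: 7 = B-location, 8 = I-location, 0 = O
print(decode_iob2(["Empire", "State", "Building", "rocks"], [7, 8, 8, 0]))
# → [('Empire State Building', 'location')]
```

With the Hugging Face `datasets` library installed, the same decoding applies directly to examples from `load_dataset("wnut_17")`, whose `ner_tags` feature exposes this label list via its `ClassLabel` names.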