Commit 6421348 · Parent(s): eb4686b
Update README.md

README.md CHANGED
@@ -39,7 +39,7 @@ license: cc-by-4.0
|
|
39 |
|
40 |
# Dataset information
|
41 |
Dataset concatenating all NER datasets, available in French and open-source, for 3 entities (LOC, PER, ORG).
|
42 |
-
There are a total of **
|
43 |
Our methodology is described in a blog post available in [English](https://blog.vaniila.ai/en/NER_en/) or [French](https://blog.vaniila.ai/NER/).
|
44 |
|
45 |
|
@@ -48,85 +48,62 @@ Our methodology is described in a blog post available in [English](https://blog.
|
|
48 |
from datasets import load_dataset
|
49 |
dataset = load_dataset("CATIE-AQ/frenchNER")
|
50 |
```
|
51 |
```
|
52 |
DatasetDict({
|
53 |
train: Dataset({
|
54 |
features: ['tokens', 'ner_tags', 'dataset'],
|
55 |
-
num_rows:
|
56 |
})
|
57 |
validation: Dataset({
|
58 |
features: ['tokens', 'ner_tags', 'dataset'],
|
59 |
-
num_rows:
|
60 |
})
|
61 |
test: Dataset({
|
62 |
features: ['tokens', 'ner_tags', 'dataset'],
|
63 |
-
num_rows:
|
64 |
})
|
65 |
})
|
66 |
```
|
67 |
|
68 |
|
69 |
-
|
70 |
-
## Dataset details
|
71 |
-
DISCUSS DATA DEDUPLICATION AND LEAKS (PER DATASET, THEN GLOBALLY)
|
72 |
-
|
73 |
-
### Details of rows
|
74 |
-
| Dataset Original | Valeurs annoncées | Dataset Clean | Valeurs après Clean | Note |
|
75 |
-
| ----------- | ----------- | ----------- | ----------- | ----------- |
|
76 |
-
| [Mapa](https://huggingface.co/datasets/joelniklaus/mapa)| X train / X validation / X test | TODO | 1,259 train / 97 validation / 487 test | X |
|
77 |
-
| [Multiconer](https://huggingface.co/datasets/aashsach/multiconer2)| X train / X validation / X test | TODO | 15,538 train / 827 validation / 855 test | X |
|
78 |
-
| [Multinerd](https://huggingface.co/datasets/Babelscape/multinerd)| X train / X validation / X test | TODO | 137,917 train / 17,306 validation / 17,637 test | X |
|
79 |
-
| [Pii-masking-200k](https://huggingface.co/datasets/ai4privacy/pii-masking-200k)| 61,958 train / 0 validation / 0 test | TODO | 61,958 train / 0 validation / 0 test | No leak or duplicated data |
|
80 |
-
| [Redfm](https://huggingface.co/datasets/Babelscape/REDFM)| X train / X validation / X test | TODO | 1,865 train / 416 validation / 415 test | X |
|
81 |
-
| [Wikiann](https://huggingface.co/datasets/wikiann)| X train / X validation / X test | TODO | 17,362 train / 8,824 validation / 9,357 test | X |
|
82 |
-
| [Wikiner](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr)| 120,682 train / 0 validation / 13,410 test | TODO | 120,063 train / 0 validation / 13,393 test | In practice, a 5% validation set is created from train, i.e. 113,296 train / 5,994 validation / 13,393 test |
|
83 |
-
|
84 |
-
Total:
|
85 |
-
288,309 train / 34,078 validation / 42,144 test
|
86 |
-
287,237 train / 33,464 validation / 42,144 test
|
87 |
-
Leaks and duplicates (introduced by the concatenation: a row from dataset A's train split may be absent from A's test split yet present in dataset B's test set, which creates a leak in the combined A+B dataset):
|
88 |
-
leaks in train split: 1071
|
89 |
-
leaks in val split: 613
|
90 |
-
duplicate sentences in train dataset: 1839
|
91 |
-
duplicate sentences in val dataset: 55
|
92 |
-
duplicate sentences in test dataset: 8
|
93 |
-
|
94 |
-
Mapa
|
95 |
-
1,297 train / 97 val / 490 test
|
96 |
-
AFTER CLEANING: TO BE DONE
|
97 |
-
|
98 |
-
Multiconer
|
99 |
-
16,548 train / 857 validation
|
100 |
-
16,364 train / 855 validation
|
101 |
-
leaks in train split (w.r.t val split): 13
|
102 |
-
duplicate sentences in train dataset: 186
|
103 |
-
duplicate sentences in validation dataset: 2
|
104 |
-
|
105 |
-
Multinerd
|
106 |
-
140,880 train / 17,610 val / 17,695 test
|
107 |
-
138,221 train / 17,409 val / 17,637 test
|
108 |
-
leaks in train split: 69
|
109 |
-
leaks in val split: 20
|
110 |
-
duplicate sentences in train dataset: 2600
|
111 |
-
duplicate sentences in val dataset: 201
|
112 |
-
duplicate sentences in test dataset: 58
|
113 |
-
|
114 |
-
Wikiann:
|
115 |
-
20,000 train / 10,000 val / 10,000 test
|
116 |
-
17,370 train / 9,300 val / 9,375 test
|
117 |
-
leaks in train split: 742
|
118 |
-
leaks in val split: 473
|
119 |
-
duplicate sentences in train dataset: 1889
|
120 |
-
duplicate sentences in val dataset: 700
|
121 |
-
duplicate sentences in test dataset: 644
|
122 |
-
|
123 |
-
|
124 |
-
Wikiner:
|
125 |
-
leaks in train split: 23
|
126 |
-
duplicate sentences in train dataset: 599
|
127 |
-
duplicate sentences in test dataset: 17
|
128 |
-
|
129 |
-
### Details of entities (after cleaning)
|
130 |
|
131 |
<table>
|
132 |
<thead>
|
@@ -140,28 +117,6 @@ duplicate sentences in test dataset: 17
|
|
140 |
</tr>
|
141 |
</thead>
|
142 |
<tbody>
|
143 |
-
<tr>
|
144 |
-
<td rowspan="3"><br>Mapa</td>
|
145 |
-
<td><br>train</td>
|
146 |
-
<td><br>61,959</td>
|
147 |
-
<td><br>745</td>
|
148 |
-
<td><br>208</td>
|
149 |
-
<td><br>314</td>
|
150 |
-
</tr>
|
151 |
-
<tr>
|
152 |
-
<td><br>validation</td>
|
153 |
-
<td><br>7,826</td>
|
154 |
-
<td><br>51</td>
|
155 |
-
<td><br>24</td>
|
156 |
-
<td><br>78</td>
|
157 |
-
</tr>
|
158 |
-
<tr>
|
159 |
-
<td><br>test</td>
|
160 |
-
<td><br>21,981</td>
|
161 |
-
<td><br>121</td>
|
162 |
-
<td><br>32</td>
|
163 |
-
<td><br>298</td>
|
164 |
-
</tr>
|
165 |
<tr>
|
166 |
<td rowspan="3"><br>Multiconer</td>
|
167 |
<td><br>train</td>
|
@@ -213,28 +168,6 @@ duplicate sentences in test dataset: 17
|
|
213 |
<td><br>29,838</td>
|
214 |
<td><br>42,154</td>
|
215 |
<td><br>12,310</td>
|
216 |
-
</tr>
|
217 |
-
<tr>
|
218 |
-
<td rowspan="3"><br>Redfm</td>
|
219 |
-
<td><br>train</td>
|
220 |
-
<td><br>130,152</td>
|
221 |
-
<td><br>2,833</td>
|
222 |
-
<td><br>7,889</td>
|
223 |
-
<td><br>4,096</td>
|
224 |
-
</tr>
|
225 |
-
<tr>
|
226 |
-
<td><br>validation</td>
|
227 |
-
<td><br>23,133</td>
|
228 |
-
<td><br>859</td>
|
229 |
-
<td><br>757</td>
|
230 |
-
<td><br>729</td>
|
231 |
-
</tr>
|
232 |
-
<tr>
|
233 |
-
<td><br>test</td>
|
234 |
-
<td><br>22,951</td>
|
235 |
-
<td><br>675</td>
|
236 |
-
<td><br>930</td>
|
237 |
-
<td><br>708</td>
|
238 |
</tr>
|
239 |
<tr>
|
240 |
<td rowspan="3"><br>Wikiann</td>
|
@@ -283,24 +216,24 @@ duplicate sentences in test dataset: 17
|
|
283 |
<tr>
|
284 |
<td rowspan="3"><br>Total</td>
|
285 |
<td><br>train</td>
|
286 |
-
<td><br><b>8,
|
287 |
-
<td><br><b>
|
288 |
-
<td><br><b>
|
289 |
-
<td><br><b>
|
290 |
</tr>
|
291 |
<tr>
|
292 |
<td><br>validation</td>
|
293 |
-
<td><br><b>
|
294 |
-
<td><br><b>
|
295 |
-
<td><br><b>
|
296 |
-
<td><br><b>
|
297 |
</tr>
|
298 |
<tr>
|
299 |
<td><br>test</td>
|
300 |
-
<td><br><b>
|
301 |
-
<td><br><b>
|
302 |
-
<td><br><b>
|
303 |
-
<td><br><b>
|
304 |
</tr>
|
305 |
</tbody>
|
306 |
</table>
|
@@ -326,13 +259,75 @@ dataset_train.head()
|
|
326 |
|
327 |
|
328 |
## Split
|
329 |
-
- `train` corresponds to the concatenation of
|
330 |
-
- `validation` corresponds to the concatenation of
|
331 |
-
- `test` corresponds to the concatenation of
|
332 |
|
333 |
|
334 |
|
335 |
-
|
336 |
```
|
337 |
TO BE GENERATED
|
338 |
```
|
39 |
|
40 |
# Dataset information
|
41 |
Dataset concatenating all NER datasets, available in French and open-source, for 3 entities (LOC, PER, ORG).
|
42 |
+
There are a total of **420,264** rows, of which 346,071 are for training, 32,951 for validation and 41,242 for testing.
|
43 |
Our methodology is described in a blog post available in [English](https://blog.vaniila.ai/en/NER_en/) or [French](https://blog.vaniila.ai/NER/).
|
44 |
|
45 |
|
48 |
from datasets import load_dataset
|
49 |
dataset = load_dataset("CATIE-AQ/frenchNER")
|
50 |
```
|
51 |
+
|
52 |
+
|
53 |
+
# Dataset
|
54 |
+
## Details of rows
|
55 |
+
| Dataset Original | Splits | Note |
|
56 |
+
| ----------- | ----------- | ----------- |
|
57 |
+
| [Multiconer](https://huggingface.co/datasets/aashsach/multiconer2)| 16,548 train / 857 validation / 0 test | In practice, we use the original validation set as the test set<br> and create a new validation set from 5% of the train set, i.e.<br> 15,721 train / 827 validation / 857 test |
|
58 |
+
| [Multinerd](https://huggingface.co/datasets/Babelscape/multinerd)| 140,880 train / 17,610 val / 17,695 test | |
|
59 |
+
| [Pii-masking-200k](https://huggingface.co/datasets/ai4privacy/pii-masking-200k)| 61,958 train / 0 validation / 0 test | The only dataset with no duplicated data or leaks |
|
60 |
+
| [Wikiann](https://huggingface.co/datasets/wikiann)| 20,000 train / 10,000 val / 10,000 test | |
|
61 |
+
| [Wikiner](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr)| 120,682 train / 0 validation / 13,410 test | In practice, a 5% validation set is created from the train set, i.e.<br> 113,296 train / 5,994 validation / 13,393 test |
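The 5% validation carve-outs noted in the table can be reproduced with a deterministic shuffle. Below is a minimal pure-Python sketch; the helper name and seed are assumptions for illustration (the actual pipeline may instead use `datasets.Dataset.train_test_split`):

```python
import random

def split_off_validation(rows, fraction=0.05, seed=42):
    """Shuffle deterministically, then carve off a validation subset."""
    indices = list(range(len(rows)))
    random.Random(seed).shuffle(indices)
    n_val = int(len(rows) * fraction)
    val = [rows[i] for i in indices[:n_val]]
    train = [rows[i] for i in indices[n_val:]]
    return train, val

# Toy example: 100 sentences, 5% -> 5 validation rows
rows = [{"tokens": [f"tok{i}"]} for i in range(100)]
train, val = split_off_validation(rows)
print(len(train), len(val))  # 95 5
```

Fixing the seed keeps the split reproducible across runs, which matters when the same carve-out must match published row counts.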
|
62 |
+
|
63 |
+
|
64 |
+
## Removing duplicate data and leaks
|
65 |
+
Summing the splits of the datasets listed above gives the following result:
|
66 |
+
|
67 |
+
```
|
68 |
+
DatasetDict({
|
69 |
+
train: Dataset({
|
70 |
+
features: ['text', 'summary', 'dataset'],
|
71 |
+
num_rows: 351855
|
72 |
+
})
|
73 |
+
validation: Dataset({
|
74 |
+
features: ['text', 'summary', 'dataset'],
|
75 |
+
num_rows: 34431
|
76 |
+
})
|
77 |
+
test: Dataset({
|
78 |
+
features: ['text', 'summary', 'dataset'],
|
79 |
+
num_rows: 41945
|
80 |
+
})
|
81 |
+
})
|
82 |
+
```
|
83 |
+
|
84 |
+
However, a row in dataset A's train split may be absent from A's own test split yet present in dataset B's test set, so concatenating A and B creates a train/test leak in the combined dataset.
|
85 |
+
The same logic applies to duplicated rows, so we need to make sure both leaks and duplicates are removed.
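The clean-up just described can be sketched as follows: drop train/validation rows whose token sequence already appears in the test split, then drop exact duplicates within each split. This is a minimal illustration keyed on token sequences; the `clean_splits` helper is hypothetical, not the actual cleaning script:

```python
def clean_splits(train, validation, test):
    """Remove train/validation rows that leak into the test split,
    then remove exact duplicates within each split (first kept)."""
    def key(row):
        return tuple(row["tokens"])

    test_keys = {key(r) for r in test}

    def dedup(rows, forbidden=frozenset()):
        seen, out = set(), []
        for r in rows:
            k = key(r)
            if k in forbidden or k in seen:
                continue  # leak into test, or duplicate within split
            seen.add(k)
            out.append(r)
        return out

    return dedup(train, test_keys), dedup(validation, test_keys), dedup(test)

# Toy example: "a b" is duplicated in train AND leaks into test -> dropped
train = [{"tokens": ["a", "b"]}, {"tokens": ["a", "b"]}, {"tokens": ["c"]}]
validation = [{"tokens": ["d"]}]
test = [{"tokens": ["a", "b"]}]
tr, va, te = clean_splits(train, validation, test)
print(len(tr), len(va), len(te))  # 1 1 1
```

Keying on the full token sequence means only exact sentence matches are treated as leaks or duplicates; near-duplicates would need a fuzzier key.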
|
86 |
+
After our clean-up, we finally have the following numbers:
|
87 |
+
|
88 |
```
|
89 |
DatasetDict({
|
90 |
train: Dataset({
|
91 |
features: ['tokens', 'ner_tags', 'dataset'],
|
92 |
+
num_rows: 346071
|
93 |
})
|
94 |
validation: Dataset({
|
95 |
features: ['tokens', 'ner_tags', 'dataset'],
|
96 |
+
num_rows: 32951
|
97 |
})
|
98 |
test: Dataset({
|
99 |
features: ['tokens', 'ner_tags', 'dataset'],
|
100 |
+
num_rows: 41242
|
101 |
})
|
102 |
})
|
103 |
```
|
104 |
|
105 |
|
106 |
+
### Details of entities (after cleaning)
|
|
107 |
|
108 |
<table>
|
109 |
<thead>
|
117 |
</tr>
|
118 |
</thead>
|
119 |
<tbody>
|
120 |
<tr>
|
121 |
<td rowspan="3"><br>Multiconer</td>
|
122 |
<td><br>train</td>
|
168 |
<td><br>29,838</td>
|
169 |
<td><br>42,154</td>
|
170 |
<td><br>12,310</td>
|
171 |
</tr>
|
172 |
<tr>
|
173 |
<td rowspan="3"><br>Wikiann</td>
|
216 |
<tr>
|
217 |
<td rowspan="3"><br>Total</td>
|
218 |
<td><br>train</td>
|
219 |
+
<td><br><b>8,398,765</b></td>
|
220 |
+
<td><br><b>327,393</b></td>
|
221 |
+
<td><br><b>303,722</b></td>
|
222 |
+
<td><br><b>151,490</b></td>
|
223 |
</tr>
|
224 |
<tr>
|
225 |
<td><br>validation</td>
|
226 |
+
<td><br><b>592,815</b></td>
|
227 |
+
<td><br><b>34,127</b></td>
|
228 |
+
<td><br><b>30,279</b></td>
|
229 |
+
<td><br><b>18,743</b></td>
|
230 |
</tr>
|
231 |
<tr>
|
232 |
<td><br>test</td>
|
233 |
+
<td><br><b>773,871</b></td>
|
234 |
+
<td><br><b>43,634</b></td>
|
235 |
+
<td><br><b>39,195</b></td>
|
236 |
+
<td><br><b>21,391</b></td>
|
237 |
</tr>
|
238 |
</tbody>
|
239 |
</table>
|
259 |
|
260 |
|
261 |
## Split
|
262 |
+
- `train` corresponds to the concatenation of `multiconer` + `multinerd` + `pii-masking-200k` + `wikiann` + `wikiner`
|
263 |
+
- `validation` corresponds to the concatenation of `multiconer` + `multinerd` + `wikiann` + `wikiner`
|
264 |
+
- `test` corresponds to the concatenation of `multiconer` + `multinerd` + `wikiann` + `wikiner`
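Because every row keeps a `dataset` column naming its source corpus, a single source can be isolated after loading. A minimal pure-Python sketch follows; the exact source-name strings stored in the column are an assumption, and with the Hugging Face `datasets` library the equivalent call would be `dataset["train"].filter(lambda r: r["dataset"] == "wikiner")`:

```python
def rows_from(rows, source):
    """Keep only the rows contributed by one source corpus
    (assumes the 'dataset' column stores the source name)."""
    return [r for r in rows if r["dataset"] == source]

# Toy rows mirroring the dataset's features
train = [
    {"tokens": ["Paris"], "ner_tags": [2], "dataset": "wikiner"},
    {"tokens": ["CATIE"], "ner_tags": [3], "dataset": "multinerd"},
]
wikiner_rows = rows_from(train, "wikiner")
print(len(wikiner_rows))  # 1
```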
|
265 |
+
|
266 |
+
|
267 |
+
|
268 |
+
# Citations
|
269 |
+
|
270 |
+
### multiconer
|
271 |
+
|
272 |
+
> @inproceedings{multiconer2-report,
|
273 |
+
title={{SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2)}},
|
274 |
+
author={Fetahu, Besnik and Kar, Sudipta and Chen, Zhiyu and Rokhlenko, Oleg and Malmasi, Shervin},
|
275 |
+
booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)},
|
276 |
+
year={2023},
|
277 |
+
publisher={Association for Computational Linguistics}}
|
278 |
+
|
279 |
+
> @article{multiconer2-data,
|
280 |
+
title={{MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition}},
|
281 |
+
author={Fetahu, Besnik and Chen, Zhiyu and Kar, Sudipta and Rokhlenko, Oleg and Malmasi, Shervin},
|
282 |
+
year={2023}}
|
283 |
+
|
284 |
+
|
285 |
+
### multinerd
|
286 |
+
|
287 |
+
> @inproceedings{tedeschi-navigli-2022-multinerd,
|
288 |
+
title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",
|
289 |
+
author = "Tedeschi, Simone and Navigli, Roberto",
|
290 |
+
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
|
291 |
+
month = jul,
|
292 |
+
year = "2022",
|
293 |
+
address = "Seattle, United States",
|
294 |
+
publisher = "Association for Computational Linguistics",
|
295 |
+
url = "https://aclanthology.org/2022.findings-naacl.60",
|
296 |
+
doi = "10.18653/v1/2022.findings-naacl.60",
|
297 |
+
pages = "801--812"}
|
298 |
+
|
299 |
+
|
300 |
+
### pii-masking-200k
|
301 |
+
|
302 |
+
### wikiann
|
303 |
+
|
304 |
+
> @inproceedings{rahimi-etal-2019-massively,
|
305 |
+
title = "Massively Multilingual Transfer for {NER}",
|
306 |
+
author = "Rahimi, Afshin and Li, Yuan and Cohn, Trevor",
|
307 |
+
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
|
308 |
+
month = jul,
|
309 |
+
year = "2019",
|
310 |
+
address = "Florence, Italy",
|
311 |
+
publisher = "Association for Computational Linguistics",
|
312 |
+
url = "https://www.aclweb.org/anthology/P19-1015",
|
313 |
+
pages = "151--164"}
|
314 |
+
|
315 |
+
### wikiner
|
316 |
|
317 |
+
> @article{NOTHMAN2013151,
|
318 |
+
title = {Learning multilingual named entity recognition from Wikipedia},
|
319 |
+
journal = {Artificial Intelligence},
|
320 |
+
volume = {194},
|
321 |
+
pages = {151-175},
|
322 |
+
year = {2013},
|
323 |
+
note = {Artificial Intelligence, Wikipedia and Semi-Structured Resources},
|
324 |
+
issn = {0004-3702},
|
325 |
+
doi = {https://doi.org/10.1016/j.artint.2012.03.006},
|
326 |
+
url = {https://www.sciencedirect.com/science/article/pii/S0004370212000276},
|
327 |
+
author = {Joel Nothman and Nicky Ringland and Will Radford and Tara Murphy and James R. Curran}}
|
328 |
|
329 |
|
330 |
+
### frenchNER
|
331 |
```
|
332 |
TO BE GENERATED
|
333 |
```
|