---
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
dataset_info:
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence: int64
  - name: dataset
    dtype: string
  splits:
  - name: test
    num_bytes: 16147720
    num_examples: 42144
  - name: train
    num_bytes: 161576681
    num_examples: 349195
  - name: validation
    num_bytes: 12398792
    num_examples: 33464
  download_size: 43074463
  dataset_size: 190123193
task_categories:
- token-classification
language:
- fr
size_categories:
- 100K<n<1M
license: cc-by-4.0
---

# Dataset information 
**A dataset concatenating open-source French NER datasets, covering 3 entity types (LOC, PER, ORG).**  
There are **420,264** rows in total: 346,071 for training, 32,951 for validation and 41,242 for testing.  
Our methodology is described in a blog post available in [English](https://blog.vaniila.ai/en/NER_en/) or [French](https://blog.vaniila.ai/NER/).


# Usage
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/frenchNER_3entities")
```
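
Each split can then be inspected directly; for example:

```
# A single example is a dict with the three columns described below
example = dataset["train"][0]
print(example.keys())  # dict_keys(['tokens', 'ner_tags', 'dataset'])
```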


# Dataset
## Details of rows
| Dataset Original    | Splits  | Note  |
| ----------- | ----------- | ----------- |
| [Multiconer](https://huggingface.co/datasets/aashsach/multiconer2)| 16,548 train / 857 validation / 0 test  | In practice, we use the original validation set as the test set<br> and create a new validation set from 5% of the train set, i.e.<br> 15,721 train / 827 validation / 857 test|
| [Multinerd](https://huggingface.co/datasets/Babelscape/multinerd)| 140,880 train / 17,610 validation / 17,695 test |   |
| [Pii-masking-200k](https://huggingface.co/datasets/ai4privacy/pii-masking-200k)| 61,958 train / 0 validation / 0 test  | The only dataset without duplicates or leaks |
| [Wikiann](https://huggingface.co/datasets/wikiann)| 20,000 train / 10,000 validation / 10,000 test  |   |
| [Wikiner](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr)| 120,682 train / 0 validation / 13,410 test | In practice, a validation set is created from 5% of the train set (see the sketch below the table), i.e.<br> 113,296 train / 5,994 validation / 13,393 test |
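
For illustration, such a 5% validation split can be recreated with the `train_test_split` method from `datasets`. A minimal sketch for Wikiner (the `seed` value is an assumption, not necessarily the one used here):

```
from datasets import load_dataset

# Load the original Wikiner corpus (train/test splits only)
wikiner = load_dataset("Jean-Baptiste/wikiner_fr")

# Carve a 5% validation set out of the original train split
split = wikiner["train"].train_test_split(test_size=0.05, seed=42)
train_set, validation_set = split["train"], split["test"]
```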


## Removing duplicate data and leaks
Summing the splits of the datasets listed above gives the following result:

```
DatasetDict({
    train: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 351855
    })
    validation: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 34431
    })
    test: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 41945
    })
})
```

However, an example from dataset A's training split, while absent from A's own test split, may still appear in dataset B's test split, creating a leak once A and B are concatenated.  
The same logic applies to duplicates, so both leaks and duplicates have to be removed.  
After clean-up, we end up with the following numbers:

```
DatasetDict({
    train: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 346071
    })
    validation: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 32951
    })
    test: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 41242
    })
})
```

Note: in practice, the test split still contains 8 rows that we were unable to deduplicate, i.e. 0.019% of the split.
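
For illustration, a minimal sketch of this cross-split deduplication, assuming `raw` is the concatenated (pre-cleaning) `DatasetDict` shown above; the exact procedure we used may differ in its details:

```
# Token sequences that appear in the test split
test_keys = {tuple(row["tokens"]) for row in raw["test"]}

# Drop train/validation rows whose token sequence leaks into the test split
for split in ("train", "validation"):
    raw[split] = raw[split].filter(
        lambda row: tuple(row["tokens"]) not in test_keys
    )
```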


### Details of entities (after cleaning)

<table>
<thead>
  <tr>
    <th><br>Datasets</th>
    <th><br>Splits</th>
    <th><br>O</th>
    <th><br>PER</th>
    <th><br>LOC</th>  
    <th><br>ORG</th>
  </tr>
</thead>
<tbody>
    <tr>
    <td rowspan="3"><br>Multiconer</td>
    <td><br>train</td>
    <td><br>200,093</td>
    <td><br>18,060</td>
    <td><br>7,165</td>
    <td><br>6,967</td>
  </tr>
  <tr>
    <td><br>validation</td>
    <td><br>10,900</td>
    <td><br>1,069</td>
    <td><br>389</td>
    <td><br>328</td>
  </tr>
  <tr>
    <td><br>test</td>
    <td><br>11,287</td>
    <td><br>979</td>
    <td><br>387</td>
    <td><br>381</td>
  </tr>
    <tr>
    <td rowspan="3"><br>Multinerd</td>
    <td><br>train</td>
    <td><br>3,041,998</td>
    <td><br>149,128</td>
    <td><br>105,531</td>
    <td><br>68,796</td>
  </tr>
  <tr>
    <td><br>validation</td>
    <td><br>410,934</td>
    <td><br>17,479</td>
    <td><br>13,988</td>
    <td><br>3,475</td>
  </tr>
  <tr>
    <td><br>test</td>
    <td><br>417,886</td>
    <td><br>18,567</td>
    <td><br>14,083</td>
    <td><br>3,636</td>
  </tr>
    <tr>
    <td rowspan="1"><br>Pii-masking-200k</td>
    <td><br>train</td>
    <td><br>2,405,215</td>
    <td><br>29,838</td>
    <td><br>42,154</td>
    <td><br>12,310</td>
  </tr>
    <tr>
    <td rowspan="3"><br>Wikiann</td>
    <td><br>train</td>
    <td><br>60,165</td>
    <td><br>20,288</td>
    <td><br>17,033</td>
    <td><br>24,429</td>
  </tr>
  <tr>
    <td><br>validation</td>
    <td><br>30,046</td>
    <td><br>10,098</td>
    <td><br>8,698</td>
    <td><br>12,819</td>
  </tr>
  <tr>
    <td><br>test</td>
    <td><br>31,488</td>
    <td><br>10,764</td>
    <td><br>9,512</td>
    <td><br>13,480</td>
  </tr>
  <tr>
    <td rowspan="3"><br>Wikiner</td>
    <td><br>train</td>
    <td><br>2,691,294</td>
    <td><br>110,079</td>
    <td><br>131,839</td>
    <td><br>38,988</td>
  </tr>
  <tr>
    <td><br>validation</td>
    <td><br>140,935</td>
    <td><br>5,481</td>
    <td><br>7,204</td>
    <td><br>2,121</td>
  </tr>
  <tr>
    <td><br>test</td>
    <td><br>313,210</td>
    <td><br>13,324</td>
    <td><br>15,213</td>
    <td><br>3,894</td>
  </tr>
  <tr>
    <td rowspan="3"><br>Total</td>
    <td><br>train</td>
    <td><br><b>8,398,765</b></td>
    <td><br><b>327,393</b></td>
    <td><br><b>303,722</b></td>
    <td><br><b>151,490</b></td>
  </tr>
  <tr>
    <td><br>validation</td>
    <td><br><b>592,815</b></td>
    <td><br><b>34,127</b></td>
    <td><br><b>30,279</b></td>
    <td><br><b>18,743</b></td>
  </tr>
  <tr>
    <td><br>test</td>
    <td><br><b>773,871</b></td>
    <td><br><b>43,634</b></td>
    <td><br><b>39,195</b></td>
    <td><br><b>21,391</b></td>
  </tr>
</tbody>
</table>


## Columns
```
dataset_train = dataset['train'].to_pandas()
dataset_train.head()

   tokens                                              ner_tags                                            dataset
0  [On, a, souvent, voulu, faire, de, La, Bruyère...   [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, ...  wikiner
1  [Les, améliorations, apportées, par, rapport, ...   [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, ...  wikiner
2  [Cette, assemblée, de, notables, ,, réunie, en...   [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, ...  wikiner
3  [Wittgenstein, projetait, en, effet, d', élabo...   [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...  wikiner
4  [Le, premier, écrivain, à, écrire, des, fictio...   [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, ...  wikiner

```

- the `tokens` column contains the tokens of each text
- the `ner_tags` column contains the NER tags (IOB format, with 0 = "O", 1 = "PER", 2 = "ORG" and 3 = "LOC")
- the `dataset` column identifies each row's original dataset, should you wish to filter on it (see the example below)
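
For example, a minimal sketch of filtering on the `dataset` column and mapping the integer tags back to string labels (the `labels` column name is just an illustration):

```
# Integer-to-string mapping from the list above
label_names = {0: "O", 1: "PER", 2: "ORG", 3: "LOC"}

# Keep only the rows that come from Wikiner
wikiner_rows = dataset["train"].filter(lambda row: row["dataset"] == "wikiner")

# Add a human-readable `labels` column next to `ner_tags`
wikiner_rows = wikiner_rows.map(
    lambda row: {"labels": [label_names[t] for t in row["ner_tags"]]}
)
```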


## Split
- `train` corresponds to the concatenation of `multiconer` + `multinerd` + `pii-masking-200k` + `wikiann` + `wikiner` 
- `validation` corresponds to the concatenation of `multiconer` + `multinerd` + `wikiann` + `wikiner` 
- `test` corresponds to the concatenation of `multiconer` + `multinerd` + `wikiann` + `wikiner` (a quick check is shown below)
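
As a quick check of the above, the source datasets present in each split can be listed from the `dataset` column:

```
# List the source datasets contributing to each split
for split in dataset:
    print(split, sorted(dataset[split].unique("dataset")))
```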



# Citations

### multiconer
```
@inproceedings{multiconer2-report,  
    title={{SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2)}},  
    author={Fetahu, Besnik and Kar, Sudipta and Chen, Zhiyu and Rokhlenko, Oleg and Malmasi, Shervin},  
    booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)},  
    year={2023},  
    publisher={Association for Computational Linguistics}}


@article{multiconer2-data,  
    title={{MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition}},  
    author={Fetahu, Besnik and Chen, Zhiyu and Kar, Sudipta and Rokhlenko, Oleg and Malmasi, Shervin},  
    year={2023}}
```

### multinerd
```
@inproceedings{tedeschi-navigli-2022-multinerd,  
    title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",  
    author = "Tedeschi, Simone and  Navigli, Roberto",  
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",  
    month = jul,  
    year = "2022",  
    address = "Seattle, United States",  
    publisher = "Association for Computational Linguistics",  
    url = "https://aclanthology.org/2022.findings-naacl.60",  
    doi = "10.18653/v1/2022.findings-naacl.60",  
    pages = "801--812"}
```

### pii-masking-200k
```
@misc {ai4privacy_2023,  
    author = { {ai4Privacy} },  
    title = { pii-masking-200k (Revision 1d4c0a1) },  
    year = 2023,  
    url = { https://huggingface.co/datasets/ai4privacy/pii-masking-200k },  
    doi = { 10.57967/hf/1532 },  
    publisher = { Hugging Face }}
```

### wikiann
```
@inproceedings{rahimi-etal-2019-massively,  
    title = "Massively Multilingual Transfer for {NER}",  
    author = "Rahimi, Afshin and Li, Yuan and Cohn, Trevor",  
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",  
    month = jul,  
    year = "2019",  
    address = "Florence, Italy",  
    publisher = "Association for Computational Linguistics",  
    url = "https://www.aclweb.org/anthology/P19-1015",  
    pages = "151--164"}
```

### wikiner
```
@article{NOTHMAN2013151,  
    title = {Learning multilingual named entity recognition from Wikipedia},  
    journal = {Artificial Intelligence},  
    volume = {194},  
    pages = {151--175},  
    year = {2013},  
    note = {Artificial Intelligence, Wikipedia and Semi-Structured Resources},  
    issn = {0004-3702},  
    doi = {https://doi.org/10.1016/j.artint.2012.03.006},  
    url = {https://www.sciencedirect.com/science/article/pii/S0004370212000276},  
    author = {Joel Nothman and Nicky Ringland and Will Radford and Tara Murphy and James R. Curran}}
```

### frenchNER_3entities
```
@misc {frenchNER2024,  
    author       = { {BOURDOIS, Loïck} },  
    organization  = { {Centre Aquitain des Technologies de l'Information et Electroniques} },  
    title        = { frenchNER_3entities },  
    year         = 2024,  
    url          = { https://huggingface.co/CATIE-AQ/frenchNER_3entities },  
    doi          = { 10.57967/hf/1751 },  
    publisher    = { Hugging Face }  
}
```

# License
[cc-by-4.0](https://creativecommons.org/licenses/by/4.0/deed.en)