# Dataset Card for frwiki_good_pages_el

## Dataset Summary
This dataset is intended for training Entity Linking (EL) systems. Links in Wikipedia articles are used to detect named entities.
## Languages
- French
## Dataset Structure
```json
{
    "title": "Title of the page",
    "qid": "QID of the corresponding Wikidata entity",
    "words": ["tokens"],
    "wikipedia": ["Wikipedia description of each entity"],
    "labels": ["NER labels"],
    "titles": ["Wikipedia title of each entity"],
    "qids": ["QID of each entity"]
}
```
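For illustration, a hypothetical record might look like the following. All values are invented for this example, and the assumption that positions outside an entity hold empty strings has not been verified against the actual data.

```json
{
    "title": "France",
    "qid": "Q142",
    "words": ["La", "Seconde", "Guerre", "mondiale", "débute", "en", "1939", "."],
    "wikipedia": ["", "conflit mondial de 1939 à 1945", "", "", "", "", "", ""],
    "labels": ["O", "B", "I", "I", "O", "O", "O", "O"],
    "titles": ["", "Seconde Guerre mondiale", "", "", "", "", "", ""],
    "qids": ["", "Q362", "", "", "", "", "", ""]
}
```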
The `words` field contains the article's text split on whitespace. The other fields are lists of the same length as `words` and contain data only at positions where the corresponding token in `words` is the start of an entity. For instance, if the i-th token in `words` starts an entity, then the i-th element of `wikipedia` contains a description of that entity extracted from Wikipedia. The same applies to the other fields. If an entity spans multiple words, only the position of its first word contains data.
The only exception is the `labels` field, which is used to delimit entities. It uses the IOB encoding: if the token is not part of an entity, the label is `"O"`; if it is the first word of a multi-word entity, the label is `"B"`; otherwise the label is `"I"`.
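As a minimal sketch of how these parallel lists could be consumed, the snippet below loads the dataset and groups tokens back into entity mentions. The repository id, the split name, the `trust_remote_code` flag, and the assumption that non-entity positions hold empty strings are all assumptions to adjust against the actual repository.

```python
from datasets import load_dataset

# Minimal sketch: the repository id and split are assumptions (adjust to the
# actual repo path); recent versions of `datasets` also need
# trust_remote_code=True because this repo relies on a loading script.
ds = load_dataset("frwiki_good_pages_el", split="train", trust_remote_code=True)


def extract_entities(example):
    """Group tokens into (surface form, QID, title) triples.

    Assumes non-entity positions hold empty strings in the aligned
    `qids`/`titles` lists, which has not been verified against the data.
    """
    words, labels = example["words"], example["labels"]
    qids, titles = example["qids"], example["titles"]
    entities = []
    i = 0
    while i < len(words):
        if qids[i]:  # aligned metadata is present only at the start of an entity
            j = i + 1
            # Extend the span over continuation tokens labelled "I" that do not
            # themselves start a new entity.
            while j < len(words) and labels[j] == "I" and not qids[j]:
                j += 1
            entities.append((" ".join(words[i:j]), qids[i], titles[i]))
            i = j
        else:
            i += 1
    return entities


print(extract_entities(ds[0]))
```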