---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
paperswithcode_id: dbpedia
pretty_name: DBpedia
dataset_info:
  config_name: dbpedia_14
  features:
  - name: label
    dtype:
      class_label:
        names:
          '0': Company
          '1': EducationalInstitution
          '2': Artist
          '3': Athlete
          '4': OfficeHolder
          '5': MeanOfTransportation
          '6': Building
          '7': NaturalPlace
          '8': Village
          '9': Animal
          '10': Plant
          '11': Album
          '12': Film
          '13': WrittenWork
  - name: title
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 178428970
    num_examples: 560000
  - name: test
    num_bytes: 22310285
    num_examples: 70000
  download_size: 119424374
  dataset_size: 200739255
configs:
- config_name: dbpedia_14
  data_files:
  - split: train
    path: dbpedia_14/train-*
  - split: test
    path: dbpedia_14/test-*
  default: true
---
# Dataset Card for DBpedia14
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** https://github.com/zhangxiangxiao/Crepe
- **Paper:** https://arxiv.org/abs/1509.01626
- **Point of Contact:** [Xiang Zhang](mailto:xiang.zhang@nyu.edu)
### Dataset Summary
The DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes
from DBpedia 2014. They are listed in classes.txt. From each of these 14 ontology classes, we
randomly choose 40,000 training samples and 5,000 testing samples. Therefore, the total size
of the training dataset is 560,000 and that of the testing dataset is 70,000.
There are 3 columns in the dataset (the same for the train and test splits), corresponding to class index
(1 to 14), title and content. The title and content are escaped using double quotes ("), and any
internal double quote is escaped by 2 double quotes (""). There are no newlines in title or content.
### Supported Tasks and Leaderboards
- `text-classification`, `topic-classification`: The dataset is mainly used for text classification: given the content
and the title, predict the correct topic. A minimal baseline sketch is given below.
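As a quick illustration of the task, the sketch below trains a simple bag-of-words baseline. It assumes the `datasets` and `scikit-learn` packages are installed and that the dataset is available on the Hub under the `dbpedia_14` identifier (matching the config above); it is not the character-level CNN from the original paper.

```python
# Hypothetical baseline sketch: TF-IDF features + logistic regression on dbpedia_14.
# Assumes `datasets` and `scikit-learn` are installed; not the original CNN benchmark.
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

ds = load_dataset("dbpedia_14")

# Use the article body as input text; the title could be concatenated as well.
vectorizer = TfidfVectorizer(max_features=50_000)
X_train = vectorizer.fit_transform(ds["train"]["content"])
X_test = vectorizer.transform(ds["test"]["content"])

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, ds["train"]["label"])
print("test accuracy:", accuracy_score(ds["test"]["label"], clf.predict(X_test)))
```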
### Languages
Although DBpedia is a multilingual knowledge base, the DBpedia14 extract mainly contains English data; other languages may appear
(e.g. a film whose title is originally not in English).
## Dataset Structure
### Data Instances
A typical data point comprises a title, a content and the corresponding label.
An example from the DBpedia test set looks as follows:
```
{
'title':'',
'content':" TY KU /taɪkuː/ is an American alcoholic beverage company that specializes in sake and other spirits. The privately-held company was founded in 2004 and is headquartered in New York City New York. While based in New York TY KU's beverages are made in Japan through a joint venture with two sake breweries. Since 2011 TY KU's growth has extended its products into all 50 states.",
'label':0
}
```
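Examples like the one above can be inspected directly with the `datasets` library. A minimal sketch, assuming the Hub identifier `dbpedia_14` and using index `0` only for illustration:

```python
from datasets import load_dataset

# Load the dbpedia_14 configuration and look at a single test example.
ds = load_dataset("dbpedia_14")
example = ds["test"][0]  # index 0 chosen only for illustration
print(example["label"], example["title"])
print(example["content"][:100])
```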
### Data Fields
- 'title': a string containing the title of the document, escaped using double quotes (") with any internal double quote escaped by 2 double quotes ("").
- 'content': a string containing the body of the document, escaped using double quotes (") with any internal double quote escaped by 2 double quotes ("").
- 'label': one of the 14 possible topics, encoded as a class index from 0 (Company) to 13 (WrittenWork); the mapping between indices and class names can be recovered as sketched below.
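A minimal sketch of that mapping, assuming the `datasets` library is installed and the Hub identifier `dbpedia_14`:

```python
from datasets import load_dataset

ds = load_dataset("dbpedia_14")
label_feature = ds["train"].features["label"]

print(label_feature.names)                   # all 14 class names in index order
print(label_feature.int2str(0))              # "Company"
print(label_feature.str2int("WrittenWork"))  # 13
```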
### Data Splits
The data is split into a training and a test set.
For each of the 14 classes there are 40,000 training samples and 5,000 testing samples.
Therefore, the total size of the training dataset is 560,000 and that of the testing dataset is 70,000; the counts can be checked as sketched below.
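A minimal check of the split sizes, assuming the `datasets` library is installed and the Hub identifier `dbpedia_14`:

```python
from datasets import load_dataset

ds = load_dataset("dbpedia_14")
# Expect 560,000 training and 70,000 test examples (40,000 / 5,000 per class).
print({split: dset.num_rows for split, dset in ds.items()})
```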
## Dataset Creation
### Curation Rationale
The DBpedia ontology classification dataset was constructed by Xiang Zhang (xiang.zhang@nyu.edu) and is licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
### Source Data
#### Initial Data Collection and Normalization
Source data is taken from DBpedia: https://wiki.dbpedia.org/develop/datasets
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The DBpedia ontology classification dataset was constructed by Xiang Zhang (xiang.zhang@nyu.edu) and is licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
### Licensing Information
The DBpedia ontology classification dataset is licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License.
### Citation Information
```
@inproceedings{NIPS2015_250cf8b5,
 author = {Zhang, Xiang and Zhao, Junbo and LeCun, Yann},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {C. Cortes and N. Lawrence and D. Lee and M. Sugiyama and R. Garnett},
 pages = {},
 publisher = {Curran Associates, Inc.},
 title = {Character-level Convolutional Networks for Text Classification},
 url = {https://proceedings.neurips.cc/paper_files/paper/2015/file/250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf},
 volume = {28},
 year = {2015}
}
```
Lehmann, Jens, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann et al. "DBpedia – a large-scale, multilingual knowledge base extracted from Wikipedia." Semantic Web 6, no. 2 (2015): 167-195.
### Contributions
Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset.