Modalities: Text · Formats: parquet · Languages: English · Libraries: Datasets, pandas
system (HF staff) committed
Commit 25b1233 (0 parents)

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,159 @@
+ ---
+ annotations_creators:
+ - machine-generated
+ language_creators:
+ - crowdsourced
+ languages:
+ - en
+ licenses:
+ - cc-by-sa-3-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - topic-classification
+ ---
+
+ # Dataset Card for DBpedia14
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [DBpedia14 homepage](https://wiki.dbpedia.org/develop/datasets)
+ - **Repository:** [DBpedia14 repository](https://github.com/dbpedia/extraction-framework)
+ - **Paper:** [DBpedia--a large-scale, multilingual knowledge base extracted from Wikipedia](https://content.iospress.com/articles/semantic-web/sw134)
+ - **Point of Contact:** [Xiang Zhang](mailto:xiang.zhang@nyu.edu)
+
+ ### Dataset Summary
+
+ The DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes
+ from DBpedia 2014. They are listed in classes.txt. From each of these 14 ontology classes, we
+ randomly choose 40,000 training samples and 5,000 testing samples. Therefore, the total size
+ of the training dataset is 560,000 and that of the testing dataset is 70,000.
+ There are 3 columns in the dataset (same for train and test splits), corresponding to class index
+ (1 to 14), title and content. The title and content are escaped using double quotes ("), and any
+ internal double quote is escaped by 2 double quotes (""). There are no new lines in title or content.
+
+ ### Supported Tasks and Leaderboards
+
+ - `text-classification`, `topic-classification`: The dataset is mainly used for text classification: given the content
+ and the title, predict the correct topic.
+
+ ### Languages
+
+ Although DBpedia is a multilingual knowledge base, the DBpedia14 extract contains mainly English data; other languages may appear
+ (e.g. a film whose original title is not in English).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A typical data point comprises a title, a content string, and the corresponding label.
+
+ An example from the DBpedia test set looks as follows:
+ ```
+ {
+ 'title':'',
+ 'content':" TY KU /taɪkuː/ is an American alcoholic beverage company that specializes in sake and other spirits. The privately-held company was founded in 2004 and is headquartered in New York City New York. While based in New York TY KU's beverages are made in Japan through a joint venture with two sake breweries. Since 2011 TY KU's growth has extended its products into all 50 states.",
+ 'label':0
+ }
+ ```
+
+ ### Data Fields
+
+ - 'title': a string containing the title of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes ("").
+ - 'content': a string containing the body of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes ("").
+ - 'label': one of the 14 possible topics.
+
+ ### Data Splits
+
+ The data is split into a training and test set.
+ For each of the 14 classes we have 40,000 training samples and 5,000 testing samples.
+ Therefore, the total size of the training dataset is 560,000 and that of the testing dataset is 70,000.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The DBPedia ontology classification dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu), licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The DBPedia ontology classification dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu), licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
+
+ ### Licensing Information
+
+ The DBPedia ontology classification dataset is licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License.
+
+ ### Citation Information
+
+ Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
+
+ Lehmann, Jens, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann et al. "DBpedia–a large-scale, multilingual knowledge base extracted from Wikipedia." Semantic web 6, no. 2 (2015): 167-195.
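
The card above describes the splits, fields, and label set; below is a minimal, editorial sketch (not part of the committed files) of how one might load and inspect the dataset with the `datasets` library, assuming `datasets` >= 1.2.0 is installed and the Google Drive source used by the loading script is reachable:

```python
# Illustrative sketch only: assumes `datasets` >= 1.2.0 is installed and the
# Google Drive download referenced by the loading script is reachable.
from datasets import load_dataset

dataset = load_dataset("dbpedia_14")

# Split sizes described in the card: 40,000 train / 5,000 test examples per class, 14 classes.
print(dataset["train"].num_rows)  # 560000
print(dataset["test"].num_rows)   # 70000

# Each example has 'title', 'content' and an integer 'label' (0-13).
example = dataset["test"][0]
label_names = dataset["test"].features["label"].names
print(example["title"], "->", label_names[example["label"]])
```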
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"dbpedia_14": {"description": "The DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes\nfrom DBpedia 2014. They are listed in classes.txt. From each of thse 14 ontology classes, we\nrandomly choose 40,000 training samples and 5,000 testing samples. Therefore, the total size\nof the training dataset is 560,000 and testing dataset 70,000.\nThere are 3 columns in the dataset (same for train and test splits), corresponding to class index\n(1 to 14), title and content. The title and content are escaped using double quotes (\"), and any\ninternal double quote is escaped by 2 double quotes (\"\"). There are no new lines in title or content.\n", "citation": "@article{lehmann2015dbpedia,\n title={DBpedia--a large-scale, multilingual knowledge base extracted from Wikipedia},\n author={Lehmann, Jens and Isele, Robert and Jakob, Max and Jentzsch, Anja and Kontokostas, \n Dimitris and Mendes, Pablo N and Hellmann, Sebastian and Morsey, Mohamed and Van Kleef, \n Patrick and Auer, S{\"o}ren and others},\n journal={Semantic web},\n volume={6},\n number={2},\n pages={167--195},\n year={2015},\n publisher={IOS Press}\n}\n", "homepage": "https://wiki.dbpedia.org/develop/datasets", "license": "Creative Commons Attribution-ShareAlike 3.0 and the GNU Free Documentation License", "features": {"label": {"num_classes": 14, "names": ["Company", "EducationalInstitution", "Artist", "Athlete", "OfficeHolder", "MeanOfTransportation", "Building", "NaturalPlace", "Village", "Animal", "Plant", "Album", "Film", "WrittenWork"], "names_file": null, "id": null, "_type": "ClassLabel"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "d_bpedia14", "config_name": "dbpedia_14", "version": {"version_str": "2.0.0", "description": null, "major": 2, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 178429418, "num_examples": 560000, "dataset_name": "d_bpedia14"}, "test": {"name": "test", "num_bytes": 22310341, "num_examples": 70000, "dataset_name": "d_bpedia14"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k": {"num_bytes": 68341698, "checksum": "cad5773f85d7501bb2783833768bc624641cdddf7056000a06f12bcd0239a310"}}, "download_size": 68341698, "post_processing_size": null, "dataset_size": 200739759, "size_in_bytes": 269081457}}
dbpedia_14.py ADDED
@@ -0,0 +1,146 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """The DBpedia dataset for text classification."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+ import os
+
+ import datasets
+
+
+ # TODO: Add BibTeX citation
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = """\
+ @article{lehmann2015dbpedia,
+ title={DBpedia--a large-scale, multilingual knowledge base extracted from Wikipedia},
+ author={Lehmann, Jens and Isele, Robert and Jakob, Max and Jentzsch, Anja and Kontokostas,
+ Dimitris and Mendes, Pablo N and Hellmann, Sebastian and Morsey, Mohamed and Van Kleef,
+ Patrick and Auer, S{\"o}ren and others},
+ journal={Semantic web},
+ volume={6},
+ number={2},
+ pages={167--195},
+ year={2015},
+ publisher={IOS Press}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes
+ from DBpedia 2014. They are listed in classes.txt. From each of these 14 ontology classes, we
+ randomly choose 40,000 training samples and 5,000 testing samples. Therefore, the total size
+ of the training dataset is 560,000 and testing dataset 70,000.
+ There are 3 columns in the dataset (same for train and test splits), corresponding to class index
+ (1 to 14), title and content. The title and content are escaped using double quotes ("), and any
+ internal double quote is escaped by 2 double quotes (""). There are no new lines in title or content.
+ """
+
+ _HOMEPAGE = "https://wiki.dbpedia.org/develop/datasets"
+
+ _LICENSE = "Creative Commons Attribution-ShareAlike 3.0 and the GNU Free Documentation License"
+
+ _URLs = {
+     "dbpedia_14": "https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k",
+ }
+
+
+ class DBpedia14Config(datasets.BuilderConfig):
+     """BuilderConfig for DBpedia."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for DBpedia.
+
+         Args:
+           **kwargs: keyword arguments forwarded to super.
+         """
+         super(DBpedia14Config, self).__init__(**kwargs)
+
+
+ class DBpedia14(datasets.GeneratorBasedBuilder):
+     """DBpedia 2014 Ontology Classification Dataset."""
+
+     VERSION = datasets.Version("2.0.0")
+
+     BUILDER_CONFIGS = [
+         DBpedia14Config(
+             name="dbpedia_14", version=VERSION, description="DBpedia 2014 Ontology Classification Dataset."
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "label": datasets.features.ClassLabel(
+                     names=[
+                         "Company",
+                         "EducationalInstitution",
+                         "Artist",
+                         "Athlete",
+                         "OfficeHolder",
+                         "MeanOfTransportation",
+                         "Building",
+                         "NaturalPlace",
+                         "Village",
+                         "Animal",
+                         "Plant",
+                         "Album",
+                         "Film",
+                         "WrittenWork",
+                     ]
+                 ),
+                 "title": datasets.Value("string"),
+                 "content": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         my_urls = _URLs[self.config.name]
+         data_dir = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "dbpedia_csv/train.csv"),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": os.path.join(data_dir, "dbpedia_csv/test.csv"), "split": "test"},
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """ Yields examples. """
+
+         with open(filepath, encoding="utf-8") as f:
+             data = csv.reader(f, delimiter=",", quoting=csv.QUOTE_NONNUMERIC)
+             for id_, row in enumerate(data):
+                 yield id_, {
+                     "title": row[1],
+                     "content": row[2],
+                     "label": int(row[0]) - 1,
+                 }
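
One detail of `_generate_examples` worth spelling out: with `csv.QUOTE_NONNUMERIC` the unquoted class index comes back as a float, and the CSV numbers classes 1 to 14 while the `ClassLabel` feature uses ids 0 to 13, hence `int(row[0]) - 1`. A small illustrative sketch (the sample row below is made up, not taken from the dataset):

```python
# Illustrative only: shows how a dbpedia_csv row is parsed by the generator above.
import csv
import io

sample = '1,"Some Company","A description with an escaped "" quote."\n'  # hypothetical row
row = next(csv.reader(io.StringIO(sample), delimiter=",", quoting=csv.QUOTE_NONNUMERIC))

print(row)              # [1.0, 'Some Company', 'A description with an escaped " quote.']
print(int(row[0]) - 1)  # 0 -> "Company", the first name in the ClassLabel list
```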
dummy/dbpedia_14/2.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:98d4ad02edaedc492459d94204bb149b82a55c2eb5bed0ad77b76ccbe387cd97
+ size 3319