Tasks: Text Classification
Modalities: Text
Formats: parquet
Sub-tasks: topic-classification
Languages: English
Size: 100K - 1M
ArXiv: 1509.01626
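
The tags above summarize the card: a topic-classification dataset stored as parquet, with between 100K and 1M rows. As a quick sanity check of those claims, here is a minimal sketch; the Hub ID `fancyzhx/dbpedia_14` is an assumption (this page never names the repository), inferred from the card's description of the DBpedia ontology classification dataset.

```python
from datasets import load_dataset

# Hub ID is an assumption inferred from the card text; the page itself
# never names the repository this PR belongs to.
ds = load_dataset("fancyzhx/dbpedia_14")

# "Size: 100K - 1M" refers to row count; Zhang et al. (2015) describe
# 560,000 training and 70,000 test examples.
print({split: ds[split].num_rows for split in ds})

# "Sub-tasks: topic-classification": each abstract carries one of
# 14 DBpedia ontology class labels.
labels = ds["train"].features["label"].names
print(len(labels), labels[:3])
```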
Fix dataset card #4, opened by albertvillanova
README.md CHANGED

@@ -91,9 +91,9 @@ configs:
 
 ## Dataset Description
 
-- **Homepage:** [
-- **Repository:**
-- **Paper:**
+- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+- **Repository:** https://github.com/zhangxiangxiao/Crepe
+- **Paper:** https://arxiv.org/abs/1509.01626
 - **Point of Contact:** [Xiang Zhang](mailto:xiang.zhang@nyu.edu)
 
 ### Dataset Summary
@@ -153,7 +153,7 @@ The DBPedia ontology classification dataset is constructed by Xiang Zhang (xiang
 
 #### Initial Data Collection and Normalization
 
-
+Source data is taken from DBpedia: https://wiki.dbpedia.org/develop/datasets
 
 #### Who are the source language producers?
 
@@ -199,9 +199,22 @@ The DBPedia ontology classification dataset is licensed under the terms of the C
 
 ### Citation Information
 
-
+```
+@inproceedings{NIPS2015_250cf8b5,
+ author = {Zhang, Xiang and Zhao, Junbo and LeCun, Yann},
+ booktitle = {Advances in Neural Information Processing Systems},
+ editor = {C. Cortes and N. Lawrence and D. Lee and M. Sugiyama and R. Garnett},
+ pages = {},
+ publisher = {Curran Associates, Inc.},
+ title = {Character-level Convolutional Networks for Text Classification},
+ url = {https://proceedings.neurips.cc/paper_files/paper/2015/file/250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf},
+ volume = {28},
+ year = {2015}
+}
+```
 
 Lehmann, Jens, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann et al. "DBpedia–a large-scale, multilingual knowledge base extracted from Wikipedia." Semantic web 6, no. 2 (2015): 167-195.
+
 ### Contributions
 
 Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset.
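
Since the card lists parquet under Formats, the shards can also be read without the `datasets` library. A sketch under the same assumed repo ID as above; the shard paths are listed from the repository rather than guessed, and reading an `hf://` URL with `pd.read_parquet` requires `huggingface_hub` to be installed (pandas resolves the URL through its fsspec backend).

```python
import pandas as pd
from huggingface_hub import HfApi

# Repo ID is the same assumption as in the earlier sketch.
REPO = "fancyzhx/dbpedia_14"

# Discover the parquet shards instead of hard-coding their paths.
shards = [
    f for f in HfApi().list_repo_files(REPO, repo_type="dataset")
    if f.endswith(".parquet")
]
print(shards)

# Read one shard directly; hf:// URLs are handled by huggingface_hub's
# fsspec integration.
df = pd.read_parquet(f"hf://datasets/{REPO}/{shards[0]}")
print(df.shape)
print(df.head())
```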