Update files from the datasets library (from 1.12.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.12.0
- README.md +40 -14
- dataset_infos.json +1 -1
- ttc4900.py +32 -24
README.md
CHANGED
````diff
@@ -18,11 +18,13 @@ task_categories:
 task_ids:
 - text-classification-other-news-category-classification
 paperswithcode_id: null
+pretty_name: TTC4900 - A Benchmark Data for Turkish Text Categorization
 ---
 
 # Dataset Card for TTC4900: A Benchmark Data for Turkish Text Categorization
 
 ## Table of Contents
+- [Table of Contents](#table-of-contents)
 - [Dataset Description](#dataset-description)
   - [Dataset Summary](#dataset-summary)
   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
@@ -48,15 +50,26 @@ paperswithcode_id: null
 
 ## Dataset Description
 
-- **Homepage:** [
-- **
-
+- **Homepage:** [TTC4900 Homepage](https://www.kaggle.com/savasy/ttc4900)
+- **Repository:** [TTC4900 Repository](https://github.com/savasy/TurkishTextClassification)
+- **Paper:** [A Comparison of Different Approaches to Document Representation in Turkish Language](https://dergipark.org.tr/en/pub/sdufenbed/issue/38975/456349)
+- **Point of Contact:** [Savaş Yıldırım](mailto:savasy@gmail.com)
 
 ### Dataset Summary
 
 The data set is taken from [kemik group](http://www.kemik.yildiz.edu.tr/)
+The data are pre-processed for the text categorization, collocations are found, character set is corrected, and so forth.
+We named TTC4900 by mimicking the name convention of TTC 3600 dataset shared by the study ["A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis, Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014"](https://link.springer.com/chapter/10.1007/978-3-642-54903-8_36)
 
-
+If you use the dataset in a paper, please refer https://www.kaggle.com/savasy/ttc4900 as footnote and cite one of the papers as follows:
+
+- A Comparison of Different Approaches to Document Representation in Turkish Language, SDU Journal of Natural and Applied Science, Vol 22, Issue 2, 2018
+- A comparative analysis of text classification for Turkish language, Pamukkale University Journal of Engineering Science Volume 25 Issue 5, 2018
+- A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis, Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014.
+
+### Supported Tasks and Leaderboards
+
+[More Information Needed]
 
 ### Languages
 
@@ -77,7 +90,6 @@ Here is an example from the dataset:
 }
 ```
 
-
 ### Data Fields
 
 - **category** : Indicates to which category the news text belongs.
@@ -96,21 +108,16 @@ It is not divided into Train set and Test set.
 
 ### Source Data
 
-[More Information Needed]
-
 #### Initial Data Collection and Normalization
 
 The data are pre-processed for the text categorization, collocations are found, character set is corrected, and so forth.
 
-
 #### Who are the source language producers?
 
 Turkish online news sites.
 
 ### Annotations
 
-The dataset does not contain any additional annotations.
-
 #### Annotation process
 
 [More Information Needed]
@@ -125,7 +132,11 @@ The dataset does not contain any additional annotations.
 
 ## Considerations for Using the Data
 
-###
+### Social Impact of Dataset
+
+[More Information Needed]
+
+### Discussion of Biases
 
 [More Information Needed]
 
@@ -137,7 +148,7 @@ The dataset does not contain any additional annotations.
 
 ### Dataset Curators
 
-[
+The dataset was created by [Savaş Yıldırım](https://github.com/savasy)
 
 ### Licensing Information
 
@@ -145,8 +156,23 @@ The dataset does not contain any additional annotations.
 
 ### Citation Information
 
-
+```
+@article{doi:10.5505/pajes.2018.15931,
+author = {Yıldırım, Savaş and Yıldız, Tuğba},
+title = {A comparative analysis of text classification for Turkish language},
+journal = {Pamukkale Univ Muh Bilim Derg},
+volume = {24},
+number = {5},
+pages = {879-886},
+year = {2018},
+doi = {10.5505/pajes.2018.15931},
+note ={doi: 10.5505/pajes.2018.15931},
+
+URL = {https://dx.doi.org/10.5505/pajes.2018.15931},
+eprint = {https://dx.doi.org/10.5505/pajes.2018.15931}
+}
+```
 
 ### Contributions
 
-Thanks to [@yavuzKomecoglu](https://github.com/yavuzKomecoglu) for adding this dataset.
+Thanks to [@yavuzKomecoglu](https://github.com/yavuzKomecoglu) for adding this dataset.
````
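The Table of Contents entries added above link to headings through GitHub-style anchors (e.g. `#supported-tasks-and-leaderboards`). As a rough illustration only, a simplified version of that slugging rule (the real renderer handles more punctuation cases and duplicate headings) can be sketched as:

```python
def heading_to_anchor(heading: str) -> str:
    """Simplified GitHub-style anchor slug: lowercase, keep alphanumerics,
    spaces, and hyphens, then turn spaces into hyphens."""
    kept = "".join(c for c in heading.lower() if c.isalnum() or c in " -")
    return kept.replace(" ", "-")

print(heading_to_anchor("Supported Tasks and Leaderboards"))
# supported-tasks-and-leaderboards
```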
dataset_infos.json
CHANGED

````diff
@@ -1 +1 @@
-{"
+{"ttc4900": {"description": "The data set is taken from kemik group\nhttp://www.kemik.yildiz.edu.tr/\nThe data are pre-processed for the text categorization, collocations are found, character set is corrected, and so forth.\nWe named TTC4900 by mimicking the name convention of TTC 3600 dataset shared by the study http://journals.sagepub.com/doi/abs/10.1177/0165551515620551\n\nIf you use the dataset in a paper, please refer https://www.kaggle.com/savasy/ttc4900 as footnote and cite one of the papers as follows:\n\n- A Comparison of Different Approaches to Document Representation in Turkish Language, SDU Journal of Natural and Applied Science, Vol 22, Issue 2, 2018\n- A comparative analysis of text classification for Turkish language, Pamukkale University Journal of Engineering Science Volume 25 Issue 5, 2018\n- A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis, Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014.\n", "citation": "@article{doi:10.5505/pajes.2018.15931,\nauthor = {Y\u0131ld\u0131r\u0131m, Sava\u015f and Y\u0131ld\u0131z, Tu\u011fba},\ntitle = {A comparative analysis of text classification for Turkish language},\njournal = {Pamukkale Univ Muh Bilim Derg},\nvolume = {24},\nnumber = {5},\npages = {879-886},\nyear = {2018},\ndoi = {10.5505/pajes.2018.15931},\nnote ={doi: 10.5505/pajes.2018.15931},\n\nURL = {https://dx.doi.org/10.5505/pajes.2018.15931},\neprint = {https://dx.doi.org/10.5505/pajes.2018.15931}\n}\n", "homepage": "https://www.kaggle.com/savasy/ttc4900", "license": "CC0: Public Domain", "features": {"category": {"num_classes": 7, "names": ["siyaset", "dunya", "ekonomi", "kultur", "saglik", "spor", "teknoloji"], "names_file": null, "id": null, "_type": "ClassLabel"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": [{"task": "text-classification", "text_column": "text", "label_column": "category", "labels": ["dunya", "ekonomi", "kultur", "saglik", "siyaset", "spor", "teknoloji"]}], "builder_name": "ttc4900", "config_name": "ttc4900", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10640831, "num_examples": 4900, "dataset_name": "ttc4900"}}, "download_checksums": {"https://raw.githubusercontent.com/savasy/TurkishTextClassification/master/7allV03.csv": {"num_bytes": 10627541, "checksum": "e17b79e89a3679ed77b3d5fd6d855fca43e9986a714cd4927c646c2be692c23e"}}, "download_size": 10627541, "post_processing_size": null, "dataset_size": 10640831, "size_in_bytes": 21268372}}
````
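In the regenerated metadata above, `category` is a `ClassLabel` with `num_classes: 7`, and the integer label of each class is simply its index in the `names` list. A plain-Python sketch of the name/id mapping that `datasets.ClassLabel` exposes as `str2int`/`int2str`:

```python
# Class names exactly as listed in dataset_infos.json ("features" -> "category" -> "names").
CATEGORY_NAMES = ["siyaset", "dunya", "ekonomi", "kultur", "saglik", "spor", "teknoloji"]

def str2int(name: str) -> int:
    """Category name -> integer label (its index in the names list)."""
    return CATEGORY_NAMES.index(name)

def int2str(label: int) -> str:
    """Integer label -> category name."""
    return CATEGORY_NAMES[label]

print(str2int("spor"), int2str(0))
# 5 siyaset
```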
ttc4900.py
CHANGED

````diff
@@ -17,9 +17,9 @@
 
 
 import csv
-import os
 
 import datasets
+from datasets.tasks import TextClassification
 
 
 logger = datasets.logging.get_logger(__name__)
@@ -30,11 +30,34 @@ The data set is taken from kemik group
 http://www.kemik.yildiz.edu.tr/
 The data are pre-processed for the text categorization, collocations are found, character set is corrected, and so forth.
 We named TTC4900 by mimicking the name convention of TTC 3600 dataset shared by the study http://journals.sagepub.com/doi/abs/10.1177/0165551515620551
+
+If you use the dataset in a paper, please refer https://www.kaggle.com/savasy/ttc4900 as footnote and cite one of the papers as follows:
+
+- A Comparison of Different Approaches to Document Representation in Turkish Language, SDU Journal of Natural and Applied Science, Vol 22, Issue 2, 2018
+- A comparative analysis of text classification for Turkish language, Pamukkale University Journal of Engineering Science Volume 25 Issue 5, 2018
+- A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis, Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014.
 """
 
-_CITATION = ""
+_CITATION = """\
+@article{doi:10.5505/pajes.2018.15931,
+author = {Yıldırım, Savaş and Yıldız, Tuğba},
+title = {A comparative analysis of text classification for Turkish language},
+journal = {Pamukkale Univ Muh Bilim Derg},
+volume = {24},
+number = {5},
+pages = {879-886},
+year = {2018},
+doi = {10.5505/pajes.2018.15931},
+note ={doi: 10.5505/pajes.2018.15931},
+
+URL = {https://dx.doi.org/10.5505/pajes.2018.15931},
+eprint = {https://dx.doi.org/10.5505/pajes.2018.15931}
+}
+"""
+
 _LICENSE = "CC0: Public Domain"
 _HOMEPAGE = "https://www.kaggle.com/savasy/ttc4900"
+_DOWNLOAD_URL = "https://raw.githubusercontent.com/savasy/TurkishTextClassification/master"
 _FILENAME = "7allV03.csv"
 
 
@@ -60,18 +83,6 @@ class TTC4900(datasets.GeneratorBasedBuilder):
             ),
         ]
 
-    @property
-    def manual_download_instructions(self):
-        return """\
-        You need to go to https://www.kaggle.com/savasy/ttc4900,
-        and manually download the ttc4900. Once it is completed,
-        a file named archive.zip will be appeared in your Downloads folder
-        or whichever folder your browser chooses to save files to. You then have
-        to unzip the file and move 7allV03.csv under <path/to/folder>.
-        The <path/to/folder> can e.g. be "~/manual_data".
-        ttc4900 can then be loaded using the following command `datasets.load_dataset("ttc4900", data_dir="<path/to/folder>")`.
-        """
-
     def _info(self):
         return datasets.DatasetInfo(
             description=_DESCRIPTION,
@@ -90,21 +101,18 @@ class TTC4900(datasets.GeneratorBasedBuilder):
             license=_LICENSE,
             # Citation for the dataset
             citation=_CITATION,
+            task_templates=[TextClassification(text_column="text", label_column="category")],
         )
 
     def _split_generators(self, dl_manager):
         """Returns SplitGenerators."""
-
-
-
-
-
-            )
-        )
+
+        urls_to_download = {
+            "train": _DOWNLOAD_URL + "/" + _FILENAME,
+        }
+        downloaded_files = dl_manager.download(urls_to_download)
         return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN, gen_kwargs={"filepath": os.path.join(path_to_manual_file, _FILENAME)}
-            )
+            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
         ]
 
     def _generate_examples(self, filepath):
````
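`_generate_examples` itself is unchanged and not shown in these hunks. Purely as an illustration of the kind of parsing such a loader performs over `7allV03.csv`, here is a self-contained sketch on toy in-memory data; the `category,text` header row and column layout are assumptions, not taken from the diff:

```python
import csv
import io

# Toy stand-in for the downloaded CSV; the real file's column layout is assumed here.
SAMPLE_CSV = (
    "category,text\n"
    "spor,Takim dun aksamki maci kazandi.\n"
    "ekonomi,Piyasalar haftaya yukselisle basladi.\n"
)

def generate_examples(fileobj):
    """Yield (id, example) pairs the way a GeneratorBasedBuilder's
    _generate_examples typically does for a labeled-text CSV."""
    reader = csv.DictReader(fileobj)
    for idx, row in enumerate(reader):
        yield idx, {"category": row["category"], "text": row["text"]}

examples = list(generate_examples(io.StringIO(SAMPLE_CSV)))
print(examples[0])
# (0, {'category': 'spor', 'text': 'Takim dun aksamki maci kazandi.'})
```

After this change the file arrives via `dl_manager.download` instead of a Kaggle manual download, so the generator is fed the cached download path directly.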