Commit 87c8da2
Update files from the datasets library (from 1.2.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
- .gitattributes +27 -0
- README.md +145 -0
- dataset_infos.json +1 -0
- dummy/ttc4900/1.0.0/dummy_data.zip +3 -0
- ttc4900.py +121 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,145 @@
---
annotations_creators:
- found
language_creators:
- found
languages:
- tr
licenses:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text_classification
task_ids:
- text_classification-other-news-category-classification
---

# Dataset Card for TTC4900: A Benchmark Data for Turkish Text Categorization

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Discussion of Social Impact and Biases](#discussion-of-social-impact-and-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [https://www.kaggle.com/savasy/ttc4900](https://www.kaggle.com/savasy/ttc4900)
- **Point of Contact:** [Savaş Yıldırım](mailto:savasy@gmail.com)

### Dataset Summary

The dataset is taken from the [Kemik group](http://www.kemik.yildiz.edu.tr/).

The data were pre-processed (noun-phrase chunking, etc.) for the text categorization problem by the study ["A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis", Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014](https://link.springer.com/chapter/10.1007/978-3-642-54903-8_36).

### Languages

The dataset is in Turkish.

## Dataset Structure

### Data Instances

A text classification dataset with 7 news categories.

Here is an example from the dataset:

```
{
  "category": 0,  # politics/siyaset
  "text": "paris teki infaz imralı ile başlayan sürece bir darbe mi elif_çakır ın sunduğu söz_bitmeden in bugünkü konuğu gazeteci melih altınok oldu programdan satıbaşları imralı ile görüşmeler hangi aşamada bundan sonra ne olacak hangi kesimler sürece engel oluyor psikolojik mayınlar neler türk solu bu dönemde evrensel sorumluluğunu yerine getirebiliyor mu elif_çakır sordu melih altınok söz_bitmeden de yanıtladı elif_çakır pkk nın silahsızlandırılmasına yönelik olarak öcalan ile görüşme sonrası 3 kadının infazı enteresan çünkü kurucu isimlerden birisi sen nasıl okudun bu infazı melih altınok herkesin ciddi anlamda şüpheleri var şu an yürüttüğümüz herşey bir delile dayanmadığı için komple teorisinden ibaret kalacak ama şöyle bir durum var imralı görüşmelerin ilk defa bir siyasi iktidar tarafından açıkça söylendiği bir dönem ardından geliyor bu sürecin gerçekleşmemesini isteyen kesimler yaptırmıştır dedi"
}
```

### Data Fields

- **category**: the category of the news text, as an integer class label. The seven categories are "politics" (siyaset), "world" (dunya), "economy" (ekonomi), "culture" (kultur), "health" (saglik), "sports" (spor), and "technology" (teknoloji).
- **text**: the text of the news article.

### Data Splits

The dataset is not divided into train and test sets; all 4,900 examples are provided in a single train split.
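Because only a train split ships with the dataset, a held-out set has to be created by the user. A minimal sketch, assuming the CSV has already been placed under `~/manual_data` as described in the loading script's manual-download instructions:

```python
from datasets import load_dataset

# Load the single train split from the manually downloaded CSV.
dataset = load_dataset("ttc4900", data_dir="~/manual_data", split="train")

# Carve out a held-out test set (the 80/20 ratio and seed are arbitrary choices).
splits = dataset.train_test_split(test_size=0.2, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
print(len(train_ds), len(test_ds))
```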

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

The data were pre-processed for text categorization: collocations were found, the character set was corrected, and so forth.

#### Who are the source language producers?

Turkish online news sites.

### Annotations

The dataset does not contain any additional annotations.

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Discussion of Social Impact and Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

CC0: Public Domain (per the dataset metadata in `dataset_infos.json` and the Kaggle page).

### Citation Information

[More Information Needed]
dataset_infos.json
ADDED
@@ -0,0 +1 @@
{"default": {"description": "The data set is taken from kemik group\nhttp://www.kemik.yildiz.edu.tr/\nThe data are pre-processed for the text categorization, collocations are found, character set is corrected, and so forth.\nWe named TTC4900 by mimicking the name convention of TTC 3600 dataset shared by the study http://journals.sagepub.com/doi/abs/10.1177/0165551515620551\n", "citation": "", "homepage": "https://www.kaggle.com/savasy/ttc4900", "license": "CC0: Public Domain", "features": {"category": {"num_classes": 7, "names": ["siyaset", "dunya", "ekonomi", "kultur", "saglik", "spor", "teknoloji"], "names_file": null, "id": null, "_type": "ClassLabel"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "tt_c4900", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10640831, "num_examples": 4900, "dataset_name": "tt_c4900"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 10640831, "size_in_bytes": 10640831}, "ttc4900": {"description": "The data set is taken from kemik group\nhttp://www.kemik.yildiz.edu.tr/\nThe data are pre-processed for the text categorization, collocations are found, character set is corrected, and so forth.\nWe named TTC4900 by mimicking the name convention of TTC 3600 dataset shared by the study http://journals.sagepub.com/doi/abs/10.1177/0165551515620551\n", "citation": "", "homepage": "https://www.kaggle.com/savasy/ttc4900", "license": "CC0: Public Domain", "features": {"category": {"num_classes": 7, "names": ["siyaset", "dunya", "ekonomi", "kultur", "saglik", "spor", "teknoloji"], "names_file": null, "id": null, "_type": "ClassLabel"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "tt_c4900", "config_name": "ttc4900", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10640831, "num_examples": 4900, "dataset_name": "tt_c4900"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 10640831, "size_in_bytes": 10640831}}
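As the metadata above shows, `category` is a `ClassLabel` with seven Turkish class names, so the integer labels seen in examples can be converted to and from names. A short sketch (the `~/manual_data` path is an assumed download location):

```python
from datasets import load_dataset

dataset = load_dataset("ttc4900", data_dir="~/manual_data", split="train")

# ClassLabel provides the id<->name mapping recorded in dataset_infos.json.
category = dataset.features["category"]
print(category.int2str(0))       # "siyaset" (politics)
print(category.str2int("spor"))  # 5
```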
dummy/ttc4900/1.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:129948a1e8f21bcfc7ce6e012c36300abc5f9c8410fb9bddf89c2e5f85b2e9f6
size 13855
ttc4900.py
ADDED
@@ -0,0 +1,121 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""TTC4900: A Benchmark Data for Turkish Text Categorization"""

from __future__ import absolute_import, division, print_function

import csv
import logging
import os

import datasets


_DESCRIPTION = """\
The data set is taken from kemik group
http://www.kemik.yildiz.edu.tr/
The data are pre-processed for the text categorization, collocations are found, character set is corrected, and so forth.
We named TTC4900 by mimicking the name convention of TTC 3600 dataset shared by the study http://journals.sagepub.com/doi/abs/10.1177/0165551515620551
"""

_CITATION = ""
_LICENSE = "CC0: Public Domain"
_HOMEPAGE = "https://www.kaggle.com/savasy/ttc4900"
_FILENAME = "7allV03.csv"  # expected name of the manually downloaded CSV


class TTC4900Config(datasets.BuilderConfig):
    """BuilderConfig for TTC4900"""

    def __init__(self, **kwargs):
        """BuilderConfig for TTC4900.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(TTC4900Config, self).__init__(**kwargs)


class TTC4900(datasets.GeneratorBasedBuilder):
    """TTC4900: A Benchmark Data for Turkish Text Categorization"""

    BUILDER_CONFIGS = [
        TTC4900Config(
            name="ttc4900",
            version=datasets.Version("1.0.0"),
            description="A Benchmark Data for Turkish Text Categorization",
        ),
    ]

    @property
    def manual_download_instructions(self):
        return """\
You need to go to https://www.kaggle.com/savasy/ttc4900,
and manually download the dataset. Once the download is completed,
a file named archive.zip will appear in your Downloads folder
or whichever folder your browser chooses to save files to. You then have
to unzip the file and move 7allV03.csv under <path/to/folder>.
The <path/to/folder> can e.g. be "~/manual_data".
ttc4900 can then be loaded using the following command `datasets.load_dataset("ttc4900", data_dir="<path/to/folder>")`.
"""

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "category": datasets.features.ClassLabel(
                        names=["siyaset", "dunya", "ekonomi", "kultur", "saglik", "spor", "teknoloji"]
                    ),
                    "text": datasets.Value("string"),
                }
            ),
            supervised_keys=None,
            # Homepage of the dataset for documentation
            homepage=_HOMEPAGE,
            # License for the dataset if available
            license=_LICENSE,
            # Citation for the dataset
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        path_to_manual_file = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
        if not os.path.exists(path_to_manual_file):
            raise FileNotFoundError(
                "{} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('ttc4900', data_dir=...)` that includes a file named {}. Manual download instructions: {}".format(
                    path_to_manual_file, _FILENAME, self.manual_download_instructions
                )
            )
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": os.path.join(path_to_manual_file, _FILENAME)}
            )
        ]

    def _generate_examples(self, filepath):
        """Generate TTC4900 examples."""
        logging.info("⏳ Generating examples from = %s", filepath)
        with open(filepath, encoding="utf-8") as f:
            rdr = csv.reader(f, delimiter=",")
            next(rdr)  # skip the CSV header row
            for rownum, row in enumerate(rdr, start=1):
                yield rownum, {
                    "category": row[0],
                    "text": row[1],
                }
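For a quick sanity check of the loader's column convention (category in the first CSV column, text in the second), the file can also be read directly outside of `datasets`. A minimal sketch, assuming `7allV03.csv` sits under `~/manual_data`:

```python
import csv
import os

path = os.path.expanduser("~/manual_data/7allV03.csv")
with open(path, encoding="utf-8") as f:
    rdr = csv.reader(f, delimiter=",")
    next(rdr)  # skip the header row, exactly as _generate_examples does
    for rownum, row in enumerate(rdr, start=1):
        print(rownum, row[0], row[1][:60])  # category, first 60 chars of text
        if rownum == 3:
            break  # peek at the first few rows only
```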