Datasets:
Tasks: Text Classification
Modalities: Text
Formats: json
Sub-tasks: multi-class-classification
Languages: English
Size: < 1K
License:
parquet-converter committed · Commit cdc0819 · 1 Parent(s): 98697a8
Update parquet files
Browse files:
- .gitattributes +0 -52
- hftrain_en.json → PlanTL-GOB-ES--WikiCAT_en/json-train.parquet +2 -2
- README.md +0 -157
- hfeval_en.json +0 -0
- wikicat_en.py +0 -88
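This commit replaces the JSON files and loading script with auto-converted Parquet files. A minimal sketch of loading the converted split, assuming the repository id is PlanTL-GOB-ES/WikiCAT_en (inferred from the Parquet directory name; adjust if the dataset lives elsewhere):

```python
# Minimal sketch: load the auto-converted Parquet split with the datasets library.
# The repo id "PlanTL-GOB-ES/WikiCAT_en" is an assumption inferred from the
# directory name "PlanTL-GOB-ES--WikiCAT_en".
from datasets import load_dataset

ds = load_dataset("PlanTL-GOB-ES/WikiCAT_en", split="train")
print(ds)     # features and row count
print(ds[0])  # first example
```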
.gitattributes DELETED
@@ -1,52 +0,0 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.lz4 filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.npy filter=lfs diff=lfs merge=lfs -text
-*.npz filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pickle filter=lfs diff=lfs merge=lfs -text
-*.pkl filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.wasm filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zst filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
-# Audio files - uncompressed
-*.pcm filter=lfs diff=lfs merge=lfs -text
-*.sam filter=lfs diff=lfs merge=lfs -text
-*.raw filter=lfs diff=lfs merge=lfs -text
-# Audio files - compressed
-*.aac filter=lfs diff=lfs merge=lfs -text
-*.flac filter=lfs diff=lfs merge=lfs -text
-*.mp3 filter=lfs diff=lfs merge=lfs -text
-*.ogg filter=lfs diff=lfs merge=lfs -text
-*.wav filter=lfs diff=lfs merge=lfs -text
-# Image files - uncompressed
-*.bmp filter=lfs diff=lfs merge=lfs -text
-*.gif filter=lfs diff=lfs merge=lfs -text
-*.png filter=lfs diff=lfs merge=lfs -text
-*.tiff filter=lfs diff=lfs merge=lfs -text
-# Image files - compressed
-*.jpg filter=lfs diff=lfs merge=lfs -text
-*.jpeg filter=lfs diff=lfs merge=lfs -text
-*.webp filter=lfs diff=lfs merge=lfs -text
-hftrain_en.json filter=lfs diff=lfs merge=lfs -text
hftrain_en.json → PlanTL-GOB-ES--WikiCAT_en/json-train.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a75d142c3ab287151c67b4e649116f7262477d7aa1b7728148ee52379b6dd859
+size 15108661
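The Parquet file is stored with Git LFS; the new pointer records its SHA-256 and size (about 15 MB). A minimal sketch of inspecting it locally, assuming the file has already been downloaded (e.g. with huggingface_hub) to the path shown above:

```python
# Minimal sketch: inspect the converted Parquet file with pandas.
# The local path mirrors the repository layout and is an assumption.
import pandas as pd

df = pd.read_parquet("PlanTL-GOB-ES--WikiCAT_en/json-train.parquet")
print(df.shape)             # per the old README, the train split has 20237 pairs
print(df.columns.tolist())  # expected columns: "text" and "label"
print(df.head())
```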
README.md DELETED
@@ -1,157 +0,0 @@
----
-YAML tags:
-annotations_creators:
-- automatically-generated
-language_creators:
-- found
-language:
-- en
-license:
-- cc-by-sa-3.0
-multilinguality:
-- monolingual
-pretty_name: wikicat_en
-size_categories:
-- unknown
-source_datasets: []
-task_categories:
-- text-classification
-task_ids:
-- multi-class-classification
----
-
-# WikiCAT_en (Text Classification) English dataset
-
-## Dataset Description
-
-- **Paper:**
-
-- **Point of Contact:**
-
-carlos.rodriguez1@bsc.es
-
-
-**Repository**
-
-https://github.com/TeMU-BSC/WikiCAT
-
-### Dataset Summary
-
-WikiCAT_en is an English corpus for thematic text classification. It was created automatically from Wikipedia and Wikidata sources and contains 28,921 Wikipedia article summaries classified under 19 different categories.
-
-This dataset was developed by BSC TeMU as part of the PlanTL project, and is intended to evaluate the capability of language technologies to generate useful synthetic corpora.
-
-### Supported Tasks and Leaderboards
-
-Text classification, Language Model
-
-### Languages
-
-EN - English
-
-## Dataset Structure
-
-### Data Instances
-
-Two JSON files, one for each split.
-
-### Data Fields
-
-We used a simple schema: the article text and its associated label, without further metadata.
-
-#### Example:
-
-<pre>
-{"version": "1.1.0",
- "data":
-  [
-   {'sentence': 'The IEEE Donald G. Fink Prize Paper Award was established in 1979 by the board of directors of the Institute of Electrical and Electronics Engineers (IEEE) in honor of Donald G. Fink. He was a past president of the Institute of Radio Engineers (IRE), and the first general manager and executive director of the IEEE. Recipients of this award received a certificate and an honorarium. The award was presented annually since 1981 and discontinued in 2016.', 'label': 'Engineering'},
-   .
-   .
-   .
-  ]
-}
-</pre>
-
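Each entry stores the text under a 'sentence' key and the category under 'label'. A minimal parsing sketch, assuming a local copy of hftrain_en.json with the structure shown above:

```python
# Minimal sketch: read a raw JSON split into (text, label) pairs.
# Assumes a local hftrain_en.json in the structure shown above.
import json

with open("hftrain_en.json", encoding="utf-8") as f:
    split = json.load(f)

pairs = [(ex["sentence"], ex["label"]) for ex in split["data"]]
print(len(pairs), pairs[0][1])  # number of examples and the first label
```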
-#### Labels
-
-'Health', 'Law', 'Entertainment', 'Religion', 'Business', 'Science', 'Engineering', 'Nature', 'Philosophy', 'Economy', 'Sports', 'Technology', 'Government', 'Mathematics', 'Military', 'Humanities', 'Music', 'Politics', 'History'
-
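These 19 categories correspond to the ClassLabel feature defined in wikicat_en.py (shown further below); a minimal sketch of the label/id mapping with the datasets library:

```python
# Minimal sketch: the 19 thematic categories as a datasets.ClassLabel feature,
# mirroring the feature definition in the loading script below.
from datasets import ClassLabel

labels = ClassLabel(names=[
    'Health', 'Law', 'Entertainment', 'Religion', 'Business', 'Science',
    'Engineering', 'Nature', 'Philosophy', 'Economy', 'Sports', 'Technology',
    'Government', 'Mathematics', 'Military', 'Humanities', 'Music',
    'Politics', 'History',
])
print(labels.str2int('Engineering'))  # -> 6
print(labels.int2str(6))              # -> 'Engineering'
```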
-### Data Splits
-
-* hftrain_en.json: 20237 label-document pairs
-* hfeval_en.json: 8684 label-document pairs
-
-
-## Dataset Creation
-
-### Methodology
-
-Starting "Category:" pages are chosen to represent the topics in each language.
-
-For each category, the main pages are extracted, together with the subcategories and the individual pages under these first-level subcategories.
-For each page, the "summary" provided by Wikipedia is also extracted.
-
-
-### Curation Rationale
-
-
-### Source Data
-
-#### Initial Data Collection and Normalization
-
-The source data are Wikipedia page summaries and thematic categories.
-
-#### Who are the source language producers?
-
-
-
-### Annotations
-
-#### Annotation process
-
-
-
-#### Who are the annotators?
-
-Automatic annotation
-
-### Personal and Sensitive Information
-
-No personal or sensitive information is included.
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-[N/A]
-
-### Discussion of Biases
-
-[N/A]
-
-### Other Known Limitations
-
-[N/A]
-
-## Additional Information
-
-### Dataset Curators
-Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
-
-For further information, send an email to plantl-gob-es@bsc.es.
-
-This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
-
-### Licensing information
-This work is licensed under a [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
-
-Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
-
-### Contributions
-[N/A]
hfeval_en.json DELETED
The diff for this file is too large to render. See raw diff.
wikicat_en.py DELETED
@@ -1,88 +0,0 @@
-# Loading script for the WikiCAT dataset.
-import json
-import datasets
-
-logger = datasets.logging.get_logger(__name__)
-
-_CITATION = """
-"""
-
-_DESCRIPTION = """
-WikiCAT: Text Classification English dataset from Wikipedia
-"""
-
-_HOMEPAGE = """ """
-
-# TODO: upload datasets to github
-_URL = "https://huggingface.co/datasets/crodri/wikicat_en/resolve/main/"
-_TRAINING_FILE = "hftrain_en.json"
-_DEV_FILE = "hfeval_en.json"
-#_TEST_FILE = "test.json"
-
-
-class wikicat_enConfig(datasets.BuilderConfig):
-    """Builder config for the wikicat_en dataset."""
-
-    def __init__(self, **kwargs):
-        """BuilderConfig for wikicat_en.
-        Args:
-            **kwargs: keyword arguments forwarded to super.
-        """
-        super(wikicat_enConfig, self).__init__(**kwargs)
-
-
-class wikicat_en(datasets.GeneratorBasedBuilder):
-    """wikicat_en Dataset."""
-
-    BUILDER_CONFIGS = [
-        wikicat_enConfig(
-            name="wikicat_en",
-            version=datasets.Version("1.1.0"),
-            description="wikicat_en",
-        ),
-    ]
-
-    def _info(self):
-        return datasets.DatasetInfo(
-            description=_DESCRIPTION,
-            features=datasets.Features(
-                {
-                    "text": datasets.Value("string"),
-                    "label": datasets.features.ClassLabel(
-                        names=['Health', 'Law', 'Entertainment', 'Religion', 'Business', 'Science', 'Engineering', 'Nature', 'Philosophy', 'Economy', 'Sports', 'Technology', 'Government', 'Mathematics', 'Military', 'Humanities', 'Music', 'Politics', 'History']
-                    ),
-                }
-            ),
-            homepage=_HOMEPAGE,
-            citation=_CITATION,
-        )
-
-    def _split_generators(self, dl_manager):
-        """Returns SplitGenerators."""
-        urls_to_download = {
-            "train": f"{_URL}{_TRAINING_FILE}",
-            "dev": f"{_URL}{_DEV_FILE}",
-            # "test": f"{_URL}{_TEST_FILE}",
-        }
-        downloaded_files = dl_manager.download_and_extract(urls_to_download)
-
-        return [
-            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
-            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
-            # datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
-        ]
-
-    def _generate_examples(self, filepath):
-        """This function returns the examples in the raw (text) form."""
-        logger.info("generating examples from = %s", filepath)
-        with open(filepath, encoding="utf-8") as f:
-            wikicat_en = json.load(f)
-            for id_, article in enumerate(wikicat_en["data"]):
-                text = article["sentence"]
-                label = article["label"]
-                yield id_, {
-                    "text": text,
-                    "label": label,
-                }
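For reference, the JSON-to-Parquet conversion this commit performs can be approximated as below; a minimal sketch, assuming a local hftrain_en.json and that the converter materializes the script's 'text'/'label' columns (the 'sentence' key is renamed as in _generate_examples above):

```python
# Minimal sketch approximating the JSON -> Parquet conversion of this commit.
# Paths are assumptions; the output path mirrors the renamed file above.
import json
import os

import pandas as pd

with open("hftrain_en.json", encoding="utf-8") as f:
    data = json.load(f)["data"]

df = pd.DataFrame(data).rename(columns={"sentence": "text"})
os.makedirs("PlanTL-GOB-ES--WikiCAT_en", exist_ok=True)
df.to_parquet("PlanTL-GOB-ES--WikiCAT_en/json-train.parquet", index=False)
```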