Modalities: Text
Formats: parquet
Languages: Catalan
ArXiv: 2107.07903
Libraries: Datasets, Dask
License: cc-by-4.0
albertvillanova committed
Commit 4e34142
1 Parent(s): 0c60afc

Add dataset script and card

Files changed (2):
  1. README.md +153 -0
  2. catalan_general_crawling.py +78 -0
README.md ADDED
@@ -0,0 +1,153 @@
---
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- ca
licenses:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Catalan General Crawling
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---

# Dataset Card for Catalan General Crawling

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://zenodo.org/record/5483031#.YapO3boo9PY
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:**

### Dataset Summary

The Catalan General Crawling Corpus is a 435-million-token web corpus of Catalan, obtained by crawling the 500 most popular .cat and .ad domains in July 2020. It consists of 434,817,705 tokens, 19,451,691 sentences and 1,016,114 documents, with documents separated by single blank lines. It is a subcorpus of the Catalan Textual Corpus.

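Since documents are delimited by blank lines, a locally extracted copy of the corpus file can be split into documents directly. Below is a minimal sketch, not part of the official tooling; the relative path is the one used by the loading script in this commit, and for the full multi-gigabyte file a streaming, line-by-line approach such as the one in `catalan_general_crawling.py` is preferable:

```python
# Illustrative only: split the extracted corpus file into documents.
# Documents are separated by blank lines, so splitting on "\n\n" recovers them.
with open("corpus/catalan_general_crawling.txt", encoding="utf-8") as f:
    raw = f.read()

documents = [doc.strip() for doc in raw.split("\n\n") if doc.strip()]
print(len(documents))      # expected to be on the order of 1,016,114
print(documents[0][:80])   # start of the first document
```
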
### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is in Catalan (`ca`).

## Dataset Structure

### Data Instances

```
{'text': "L'operatiu continuarà durant aquest divendres."}
```

### Data Fields

- `text` (str): text of the document.

### Data Splits

The dataset contains a single split: "train".

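A minimal usage sketch with the Hugging Face `datasets` library, assuming the loading script added in this commit is available in the working directory (on the Hub, the dataset repository identifier can be passed instead):

```python
from datasets import load_dataset

# Downloads and extracts the corpus from Zenodo, then builds the "train" split.
dataset = load_dataset("./catalan_general_crawling.py", split="train")

print(dataset)              # Dataset({features: ['text'], num_rows: ...})
print(dataset[0]["text"])   # first document of the corpus
```
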
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/).

### Citation Information

```
@misc{armengolestape2021multilingual,
      title={Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan},
      author={Jordi Armengol{-}Estap{\'{e}} and Casimiro Pio Carrino and Carlos Rodriguez-Penagos and Ona de Gibert Bonet and Carme Armentano{-}Oller and Aitor Gonzalez{-}Agirre and Maite Melero and Marta Villegas},
      year={2021},
      eprint={2107.07903},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
catalan_general_crawling.py ADDED
@@ -0,0 +1,78 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Catalan General Crawling."""

import os

import datasets


_CITATION = """\
@misc{armengolestape2021multilingual,
      title={Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan},
      author={Jordi Armengol{-}Estap{\'{e}} and Casimiro Pio Carrino and Carlos Rodriguez-Penagos and Ona de Gibert Bonet and Carme Armentano{-}Oller and Aitor Gonzalez{-}Agirre and Maite Melero and Marta Villegas},
      year={2021},
      eprint={2107.07903},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
"""

_DESCRIPTION = """\
The Catalan General Crawling Corpus is a 435-million-token web corpus of Catalan, obtained by crawling the 500 most popular .cat and .ad domains in July 2020. It consists of 434,817,705 tokens, 19,451,691 sentences and 1,016,114 documents, with documents separated by single blank lines. It is a subcorpus of the Catalan Textual Corpus.
"""

_HOMEPAGE = "https://zenodo.org/record/5483031#.YapO3boo9PY"

_LICENSE = "Creative Commons Attribution 4.0 International License"

_URL = "https://zenodo.org/record/5483031/files/catalan_general_crawling.zip?download=1"


class CatalanGeneralCrawling(datasets.GeneratorBasedBuilder):
    """Catalan General Crawling."""

    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features({"text": datasets.Value("string")}),
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": os.path.join(data_dir, "corpus", "catalan_general_crawling.txt"),
                },
            ),
        ]

    def _generate_examples(self, filepath):
        # Documents in the corpus file are separated by blank lines: accumulate
        # lines until a blank line is reached, then yield the accumulated document,
        # keyed by the line index of the blank separator line. This assumes the
        # file also ends with a blank line after the last document.
        with open(filepath, encoding="utf-8") as f:
            text = ""
            for id_, line in enumerate(f):
                if line == "\n":
                    yield id_, {"text": text.strip()}
                    text = ""
                else:
                    text += line
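
For reference (not part of the commit), the behaviour of `_generate_examples` on the blank-line-separated format can be illustrated on a toy in-memory file; note that the yielded keys are the line indices of the separator lines, so they are unique but not consecutive, which is all that `datasets` requires:

```python
import io

def generate_examples(f):
    # Same accumulation logic as _generate_examples above, over an open file object.
    text = ""
    for id_, line in enumerate(f):
        if line == "\n":
            yield id_, {"text": text.strip()}
            text = ""
        else:
            text += line

toy = io.StringIO("Primer document.\nSegona línia.\n\nSegon document.\n\n")
print(list(generate_examples(toy)))
# [(2, {'text': 'Primer document.\nSegona línia.'}), (4, {'text': 'Segon document.'})]
```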