Commit 3af798a

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
- .gitattributes +27 -0
- README.md +172 -0
- dataset_infos.json +1 -0
- dummy/corpus/1.1.0/dummy_data.zip +3 -0
- spanish_billion_words.py +95 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,172 @@
---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
languages:
- es
licenses:
- cc-by-sa-4-0
multilinguality:
- monolingual
size_categories:
- n>1M
source_datasets:
- original
task_categories:
- other
- sequence-modeling
task_ids:
- language-modeling
- other-other-pretraining-language-models
---

# Dataset Card for Spanish Billion Words

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Spanish Billion Words homepage](https://crscardellino.github.io/SBWCE/)
- **Point of Contact:** [Cristian Cardellino](mailto:ccardellino@unc.edu.ar) (Corpus Creator), [María Grandury](mailto:mariagrandury@gmail.com) (Corpus Submitter)

### Dataset Summary

The Spanish Billion Words Corpus is an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources on the web.
These resources include the Spanish portions of SenSem, the Ancora Corpus, some OPUS Project corpora and Europarl,
the Tibidabo Treebank, the IULA Spanish LSP Treebank, and dumps from the Spanish Wikipedia, Wikisource and Wikibooks.

The corpus is a compilation of 100 text files, where each line represents one of the roughly 50 million sentences in the corpus.

### Supported Tasks and Leaderboards

This dataset can be used for language modeling and for pretraining language models.
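As a minimal loading sketch with the 🤗 `datasets` library (`corpus` is the only configuration defined by the loading script in this commit; the download is roughly 2 GB compressed):

```
from datasets import load_dataset

# "corpus" is the only configuration defined by the loading script.
dataset = load_dataset("spanish_billion_words", "corpus")

# Every example lives in the single "train" split and has one "text" field,
# e.g. {'text': 'Yo me coloqué en un asiento próximo a una ventana ...'}
print(dataset["train"][0])
```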
### Languages

The text in this dataset is in Spanish, BCP-47 code: `es`.

## Dataset Structure

### Data Instances

Each example in this dataset is a sentence in Spanish:

```
{'text': 'Yo me coloqué en un asiento próximo a una ventana cogí un libro de una mesa y empecé a leer'}
```

### Data Fields

- `text`: a sentence in Spanish

### Data Splits

The dataset is not split: all 46,925,295 examples belong to a single `train` split.

## Dataset Creation

### Curation Rationale

The Spanish Billion Words Corpus was created to train word embeddings with the word2vec algorithm provided by the gensim package.
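As a rough sketch of that use case (assuming `dataset` from the loading example above and gensim 4.x, where the dimensionality argument is `vector_size`; the hyperparameters are illustrative assumptions, not those behind the official SBWCE embeddings):

```
from gensim.models import Word2Vec

class Sentences:
    """Restartable iterable of tokenized sentences; Word2Vec scans the corpus several times."""

    def __init__(self, dataset):
        self.dataset = dataset

    def __iter__(self):
        for example in self.dataset:
            yield example["text"].split()

# Illustrative hyperparameters, not the settings used for the official embeddings.
model = Word2Vec(sentences=Sentences(dataset["train"]), vector_size=300, window=5, min_count=5)
print(model.wv.most_similar("rey"))
```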
### Source Data

#### Initial Data Collection and Normalization

The corpus was created by compiling the following resources:

- The Spanish portion of [SenSem]().
- The Spanish portion of the [Ancora Corpus](http://clic.ub.edu/corpus/en).
- [Tibidabo Treebank and IULA Spanish LSP Treebank](http://lod.iula.upf.edu/resources/metadata_TRL_Tibidabo_LSP_treebank_ES).
- The Spanish portion of the following [OPUS Project](http://opus.nlpl.eu/index.php) corpora:
  - The [books](http://opus.nlpl.eu/Books.php) aligned by [Andras Farkas](https://farkastranslations.com/).
  - The [JRC-Acquis](http://opus.nlpl.eu/JRC-Acquis.php) collection of legislative texts of the European Union.
  - The [News Commentary](http://opus.nlpl.eu/News-Commentary.php) corpus.
  - The [United Nations](http://opus.nlpl.eu/UN.php) documents compiled by [Alexandre Rafalovitch](https://www.outerthoughts.com/) and [Robert Dale](http://web.science.mq.edu.au/~rdale/).
- The Spanish portion of [Europarl](http://statmt.org/europarl/) (European Parliament proceedings), compiled by [Philipp Koehn](https://homepages.inf.ed.ac.uk/pkoehn/).
- Dumps of the Spanish [Wikipedia](https://es.wikipedia.org/wiki/Wikipedia:Portada), [Wikisource](https://es.wikisource.org/wiki/Portada) and [Wikibooks](https://es.wikibooks.org/wiki/Portada) from 2015-09-01, parsed with the Wikipedia Extractor.

The annotated corpora (such as Ancora, SenSem and Tibidabo) were untagged, and the parallel corpora (most of them from the OPUS Project) were preprocessed to keep only their Spanish portions.

Once the whole corpus was unannotated, all non-alphanumeric characters were replaced with whitespace, all numbers with the token “DIGITO”, and all runs of multiple whitespace characters with a single one.

The capitalization of the words remained unchanged.
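An illustrative reimplementation of that normalization (a sketch based only on the description above, not the script actually used to build the corpus):

```
import re

def normalize(text):
    # Replace every number with the token "DIGITO".
    text = re.sub(r"\d+", "DIGITO", text)
    # Replace non-alphanumeric characters with whitespace; \w keeps
    # accented Spanish letters, and capitalization is left unchanged.
    text = re.sub(r"[^\w\s]", " ", text)
    # Collapse runs of whitespace into a single space.
    return re.sub(r"\s+", " ", text).strip()

print(normalize("Compró 3 libros, ¡qué bien!"))  # 'Compró DIGITO libros qué bien'
```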
#### Who are the source language producers?

The data was compiled and processed by Cristian Cardellino.

### Annotations

The dataset is unannotated.

#### Annotation process

[N/A]

#### Who are the annotators?

[N/A]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The data was collected and processed by Cristian Cardellino.

### Licensing Information

The dataset is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license
([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)).

### Citation Information

```
@misc{cardellinoSBWCE,
    author = {Cardellino, Cristian},
    title = {Spanish {B}illion {W}ords {C}orpus and {E}mbeddings},
    url = {https://crscardellino.github.io/SBWCE/},
    month = {August},
    year = {2019}
}
```
dataset_infos.json
ADDED
@@ -0,0 +1 @@
{"corpus": {"description": "An unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.\nThese resources include the Spanish portions of SenSem, the Ancora Corpus, some OPUS Project Corpora and the Europarl,\nthe Tibidabo Treebank, the IULA Spanish LSP Treebank, and dumps from the Spanish Wikipedia, Wikisource and Wikibooks.\nThis corpus is a compilation of 100 text files. Each line of these files represents one of the 50 million sentences from the corpus.\n", "citation": "@misc{cardellinoSBWCE,\n    author = {Cardellino, Cristian},\n    title = {Spanish {B}illion {W}ords {C}orpus and {E}mbeddings},\n    url = {https://crscardellino.github.io/SBWCE/},\n    month = {August},\n    year = {2019}\n}\n", "homepage": "https://crscardellino.github.io/SBWCE/", "license": "https://creativecommons.org/licenses/by-sa/4.0/", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "spanish_billion_words", "config_name": "corpus", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 8950895954, "num_examples": 46925295, "dataset_name": "spanish_billion_words"}}, "download_checksums": {"http://cs.famaf.unc.edu.ar/~ccardellino/SBWCE/clean_corpus.tar.bz2": {"num_bytes": 2024166993, "checksum": "3c773dd179c72f8895aaca496af39c2c127bd45fb51b5dcbbf88e7fbc90943c5"}}, "download_size": 2024166993, "post_processing_size": null, "dataset_size": 8950895954, "size_in_bytes": 10975062947}}
dummy/corpus/1.1.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e0df634be3277be8e601418b490e27a0f93e1b9a41277f8411ec4acc82e659c5
size 1767
spanish_billion_words.py
ADDED
@@ -0,0 +1,95 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""The Spanish Billion Words Corpus."""

from __future__ import absolute_import, division, print_function

import os

import datasets


_CITATION = """\
@misc{cardellinoSBWCE,
    author = {Cardellino, Cristian},
    title = {Spanish {B}illion {W}ords {C}orpus and {E}mbeddings},
    url = {https://crscardellino.github.io/SBWCE/},
    month = {August},
    year = {2019}
}
"""

_DESCRIPTION = """\
An unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.
These resources include the Spanish portions of SenSem, the Ancora Corpus, some OPUS Project Corpora and the Europarl,
the Tibidabo Treebank, the IULA Spanish LSP Treebank, and dumps from the Spanish Wikipedia, Wikisource and Wikibooks.
This corpus is a compilation of 100 text files. Each line of these files represents one of the 50 million sentences from the corpus.
"""

_HOMEPAGE = "https://crscardellino.github.io/SBWCE/"

_LICENSE = "https://creativecommons.org/licenses/by-sa/4.0/"

_URL = "http://cs.famaf.unc.edu.ar/~ccardellino/SBWCE/clean_corpus.tar.bz2"


class SpanishBillionWords(datasets.GeneratorBasedBuilder):
    """The Spanish Billion Words Corpus."""

    VERSION = datasets.Version("1.1.0")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="corpus",
            version=VERSION,
            description="100 text files where each line represents a sentence from the corpus",
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "text": datasets.Value("string"),
                }
            ),
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        # The archive extracts into a "spanish_billion_words" directory
        # that holds the 100 plain-text files.
        data_dir = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"directory": os.path.join(data_dir, "spanish_billion_words")}
            )
        ]

    def _generate_examples(self, directory):
        """Yields examples."""
        # Sort the file names so example ids are deterministic across runs;
        # the id counter runs continuously over all files.
        files = sorted(os.listdir(directory))
        _id = 0

        for file in files:
            file_path = os.path.join(directory, file)
            with open(file_path, mode="r", encoding="utf-8") as f:
                for line in f:
                    yield _id, {"text": line.strip()}
                    _id += 1