parquet-converter committed
Commit 4829b73
1 parent: 2b60fcb

Update parquet files

Files changed (5)
  1. .gitattributes +0 -27
  2. README.md +0 -173
  3. capes.py +0 -98
  4. dataset_infos.json +0 -1
  5. en-pt/capes-train.parquet +3 -0
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,173 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - en
- - pt
- license:
- - unknown
- multilinguality:
- - multilingual
- size_categories:
- - 1M<n<10M
- source_datasets:
- - original
- task_categories:
- - translation
- task_ids: []
- paperswithcode_id: capes
- pretty_name: CAPES
- tags:
- - dissertation-abstracts-translation
- - theses-translation
- dataset_info:
-   features:
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - en
-         - pt
-   config_name: en-pt
-   splits:
-   - name: train
-     num_bytes: 472484364
-     num_examples: 1157610
-   download_size: 162229298
-   dataset_size: 472484364
- ---
-
- # Dataset Card for CAPES
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES](https://sites.google.com/view/felipe-soares/datasets#h.p_kxOR6EhHm2a6)
- - **Repository:**
- - **Paper:**
- - **Leaderboard:**
- - **Point of Contact:**
-
- ### Dataset Summary
-
- A parallel corpus of theses and dissertations abstracts in English and Portuguese was collected from the
- CAPES website (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) - Brazil.
- The corpus is sentence aligned for all language pairs. Approximately 240,000 documents were
- collected and aligned using the Hunalign algorithm.
-
- ### Supported Tasks and Leaderboards
-
- The underlying task is machine translation.
-
- ### Languages
-
- [More Information Needed]
-
- ## Dataset Structure
-
- ### Data Instances
-
- [More Information Needed]
-
- ### Data Fields
-
- [More Information Needed]
-
- ### Data Splits
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- ```
- @inproceedings{soares2018parallel,
-   title={A Parallel Corpus of Theses and Dissertations Abstracts},
-   author={Soares, Felipe and Yamashita, Gabrielli Harumi and Anzanello, Michel Jose},
-   booktitle={International Conference on Computational Processing of the Portuguese Language},
-   pages={345--352},
-   year={2018},
-   organization={Springer}
- }
- ```
- ### Contributions
-
- Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
capes.py DELETED
@@ -1,98 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """Capes: Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES"""
-
-
- import datasets
-
-
- _CITATION = """\
- @inproceedings{soares2018parallel,
-   title={A Parallel Corpus of Theses and Dissertations Abstracts},
-   author={Soares, Felipe and Yamashita, Gabrielli Harumi and Anzanello, Michel Jose},
-   booktitle={International Conference on Computational Processing of the Portuguese Language},
-   pages={345--352},
-   year={2018},
-   organization={Springer}
- }
- """
-
-
- _DESCRIPTION = """\
- A parallel corpus of theses and dissertations abstracts in English and Portuguese were collected from the \
- CAPES website (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) - Brazil. \
- The corpus is sentence aligned for all language pairs. Approximately 240,000 documents were \
- collected and aligned using the Hunalign algorithm.
- """
-
-
- _HOMEPAGE = "https://sites.google.com/view/felipe-soares/datasets#h.p_kxOR6EhHm2a6"
-
- _URL = "https://ndownloader.figstatic.com/files/14015837"
-
-
- class Capes(datasets.GeneratorBasedBuilder):
-     """Capes: Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES"""
-
-     VERSION = datasets.Version("1.0.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(
-             name="en-pt",
-             version=datasets.Version("1.0.0"),
-             description="Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES",
-         )
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {"translation": datasets.features.Translation(languages=tuple(self.config.name.split("-")))}
-             ),
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         archive = dl_manager.download(_URL)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "source_file": "en_pt.en",
-                     "target_file": "en_pt.pt",
-                     "src_files": dl_manager.iter_archive(archive),
-                     "tgt_files": dl_manager.iter_archive(archive),
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, source_file, target_file, src_files, tgt_files):
-         source, target = tuple(self.config.name.split("-"))
-         for src_path, src_f in src_files:
-             if src_path == source_file:
-                 for tgt_path, tgt_f in tgt_files:
-                     if tgt_path == target_file:
-                         for idx, (l1, l2) in enumerate(zip(src_f, tgt_f)):
-                             l1 = l1.decode("utf-8").strip()
-                             l2 = l2.decode("utf-8").strip()
-                             if l1 and l2:
-                                 result = {"translation": {source: l1, target: l2}}
-                                 yield idx, result
-                         break
-                 break
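The deleted `_generate_examples` above zips the English and Portuguese line streams, strips each pair, and emits only pairs where both sides are non-empty. The core pairing logic can be sketched standalone as follows; the sample sentences here are illustrative, not taken from the actual corpus:

```python
def pair_translations(src_lines, tgt_lines, source="en", target="pt"):
    """Yield (idx, example) tuples in the same shape the deleted script produced."""
    for idx, (l1, l2) in enumerate(zip(src_lines, tgt_lines)):
        # The script read raw bytes out of the archive, so decode first.
        l1 = l1.decode("utf-8").strip()
        l2 = l2.decode("utf-8").strip()
        if l1 and l2:  # skip pairs where either side is blank
            yield idx, {"translation": {source: l1, target: l2}}

src = [b"This thesis studies alignment.\n", b"\n"]
tgt = [b"Esta tese estuda alinhamento.\n", b"\n"]
examples = list(pair_translations(src, tgt))
# Only the first pair survives; the blank pair at idx 1 is dropped.
```

Note that, as in the original, skipped pairs still consume an index from `enumerate`, so example ids are line numbers, not a dense sequence.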
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"en-pt": {"description": "A parallel corpus of theses and dissertations abstracts in English and Portuguese were collected from the CAPES website (Coordena\u00e7\u00e3o de Aperfei\u00e7oamento de Pessoal de N\u00edvel Superior) - Brazil. The corpus is sentence aligned for all language pairs. Approximately 240,000 documents were collected and aligned using the Hunalign algorithm.\n", "citation": "@inproceedings{soares2018parallel,\n title={A Parallel Corpus of Theses and Dissertations Abstracts},\n author={Soares, Felipe and Yamashita, Gabrielli Harumi and Anzanello, Michel Jose},\n booktitle={International Conference on Computational Processing of the Portuguese Language},\n pages={345--352},\n year={2018},\n organization={Springer}\n}\n", "homepage": "https://sites.google.com/view/felipe-soares/datasets#h.p_kxOR6EhHm2a6", "license": "", "features": {"translation": {"languages": ["en", "pt"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "capes", "config_name": "en-pt", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 472484364, "num_examples": 1157610, "dataset_name": "capes"}}, "download_checksums": {"https://ndownloader.figstatic.com/files/14015837": {"num_bytes": 162229298, "checksum": "08e5739e78cd5b68ca6b29507f2a746fd3a5fbdec8dde2700a4141030d21e143"}}, "download_size": 162229298, "post_processing_size": null, "dataset_size": 472484364, "size_in_bytes": 634713662}}
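The deleted `dataset_infos.json` records a sha256 checksum and byte size for the figshare download. Verifying a local copy against those recorded values can be sketched with `hashlib`; the file path in the final comment is hypothetical, while the expected values are taken from the deleted entry:

```python
import hashlib

# Values recorded in the deleted dataset_infos.json for
# https://ndownloader.figstatic.com/files/14015837
EXPECTED_SHA256 = "08e5739e78cd5b68ca6b29507f2a746fd3a5fbdec8dde2700a4141030d21e143"
EXPECTED_NUM_BYTES = 162229298

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in chunks so a 162 MB archive need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. (hypothetical local filename):
# assert sha256_of("capes_download.tar.gz") == EXPECTED_SHA256
```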
 
 
en-pt/capes-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8aa0a184e1ce16251cc88bc5cd3d542e008189c445c48e92a71fcb874cb37b9
+ size 285468019
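Because `*.parquet` is tracked by Git LFS, the three lines added above are not the parquet data itself but an LFS pointer file: a `version` line followed by `key value` fields. A minimal sketch of parsing such a pointer (the real 285 MB blob must still be fetched via `git lfs pull` before the parquet can be read):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # each line is "key value"
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:b8aa0a184e1ce16251cc88bc5cd3d542e008189c445c48e92a71fcb874cb37b9
size 285468019
"""
info = parse_lfs_pointer(pointer)
# info["size"] holds the true blob size in bytes as a string.
```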