Commit 0adecf3
1 Parent(s): aac51e2
parquet-converter committed

Update parquet files

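This commit replaces the script-based loader with auto-generated Parquet files. For quick inspection, the split files added below can be read with any Parquet reader; here is a minimal sketch using pandas (the paths mirror the file names in this commit and assume a local clone or download):

```python
import pandas as pd

# Paths mirror the files added in this commit; adjust them to wherever
# the repository is cloned or the files were downloaded.
train = pd.read_parquet("parsinlu-repo/parsinlu_translation_en_fa-train.parquet")
test = pd.read_parquet("parsinlu-repo/parsinlu_translation_en_fa-test.parquet")

# Each row follows the schema from the (now removed) dataset card:
# source (str), targets (list of str), category (str).
print(train.iloc[0]["source"], train.iloc[0]["targets"])
```
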
.gitattributes CHANGED
@@ -14,3 +14,5 @@
  *.pb filter=lfs diff=lfs merge=lfs -text
  *.pt filter=lfs diff=lfs merge=lfs -text
  *.pth filter=lfs diff=lfs merge=lfs -text
+ parsinlu-repo/parsinlu_translation_en_fa-train.parquet filter=lfs diff=lfs merge=lfs -text
+ parsinlu-repo/parsinlu_translation_en_fa-test.parquet filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,165 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - expert-generated
- language:
- - fa
- license:
- - cc-by-nc-sa-4.0
- multilinguality:
- - fa
- - en
- size_categories:
- - 1K<n<10K
- source_datasets:
- - extended
- task_categories:
- - translation
- task_ids:
- - translation
- ---
-
- # Dataset Card for ParsiNLU (Machine Translation)
-
- ## Table of Contents
- - [Dataset Card for ParsiNLU (Machine Translation)](#dataset-card-for-parsinlu-machine-translation)
-   - [Table of Contents](#table-of-contents)
-   - [Dataset Description](#dataset-description)
-     - [Dataset Summary](#dataset-summary)
-     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-     - [Languages](#languages)
-   - [Dataset Structure](#dataset-structure)
-     - [Data Instances](#data-instances)
-     - [Data Fields](#data-fields)
-     - [Data Splits](#data-splits)
-   - [Dataset Creation](#dataset-creation)
-     - [Curation Rationale](#curation-rationale)
-     - [Source Data](#source-data)
-       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
-       - [Who are the source language producers?](#who-are-the-source-language-producers)
-     - [Annotations](#annotations)
-       - [Annotation process](#annotation-process)
-       - [Who are the annotators?](#who-are-the-annotators)
-     - [Personal and Sensitive Information](#personal-and-sensitive-information)
-   - [Considerations for Using the Data](#considerations-for-using-the-data)
-     - [Social Impact of Dataset](#social-impact-of-dataset)
-     - [Discussion of Biases](#discussion-of-biases)
-     - [Other Known Limitations](#other-known-limitations)
-   - [Additional Information](#additional-information)
-     - [Dataset Curators](#dataset-curators)
-     - [Licensing Information](#licensing-information)
-     - [Citation Information](#citation-information)
-     - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- - **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- - **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- - **Leaderboard:**
- - **Point of Contact:** d.khashabi@gmail.com
-
- ### Dataset Summary
-
- A Persian translation dataset (English -> Persian).
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed]
-
- ### Languages
-
- The text dataset is in Persian (`fa`) and English (`en`).
-
- ## Dataset Structure
-
- ### Data Instances
-
- Here is an example from the dataset:
- ```json
- {
-   "source": "how toil to raise funds, propagate reforms, initiate institutions!",
-   "targets": ["چه زحمت‌ها که بکشد تا منابع مالی را تامین کند اصطلاحات را ترویج کند نهادهایی به راه اندازد."],
-   "category": "mizan_dev_en_fa"
- }
- ```
-
- ### Data Fields
-
- - `source`: the input sentence, in English.
- - `targets`: the list of gold target translations in Persian.
- - `category`: the source from which the example is mined.
-
- ### Data Splits
-
- The train/dev/test split contains 1,621,666/2,138/48,360 samples.
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- CC BY-NC-SA 4.0 License
-
- ### Citation Information
- ```bibtex
- @article{huggingface:dataset,
-     title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
-     author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
-     year = {2020},
-     journal = {arXiv e-prints},
-     eprint = {2012.06154},
- }
- ```
-
- ### Contributions
-
- Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
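
To make the documented schema concrete, here is a minimal sketch of one record as a typed Python structure, using the instance from the deleted card's Data Instances section (the dataclass itself is illustrative, not part of the dataset):

```python
from dataclasses import dataclass

@dataclass
class TranslationExample:
    source: str          # English input sentence
    targets: list[str]   # one or more gold Persian translations
    category: str        # provenance of the pair, e.g. "mizan_dev_en_fa"

# The instance from the Data Instances section above:
example = TranslationExample(
    source="how toil to raise funds, propagate reforms, initiate institutions!",
    targets=["چه زحمت‌ها که بکشد تا منابع مالی را تامین کند اصطلاحات را ترویج کند نهادهایی به راه اندازد."],
    category="mizan_dev_en_fa",
)
```
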
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"parsinlu-repo": {"description": "A Persian translation dataset (English -> Persian). \n", "citation": "@article{huggingface:dataset,\n title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},\n authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},\n year={2020}\n journal = {arXiv e-prints},\n eprint = {2012.06154}, \n}\n", "homepage": "https://github.com/persiannlp/parsinlu/", "license": "CC BY-NC-SA 4.0", "features": {"source": {"dtype": "string", "id": null, "_type": "Value"}, "targets": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "parsinlu_reading_comprehension", "config_name": "parsinlu-repo", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 273771491, "num_examples": 1621665, "dataset_name": "parsinlu_reading_comprehension"}, "test": {"name": "test", "num_bytes": 30039687, "num_examples": 48359, "dataset_name": "parsinlu_reading_comprehension"}, "validation": {"name": "validation", "num_bytes": 462962, "num_examples": 2137, "dataset_name": "parsinlu_reading_comprehension"}}, "download_checksums": {"https://media.githubusercontent.com/media/persiannlp/parsinlu/master/data/translation/translation_combined_en_fa/train.tsv": {"num_bytes": 252687603, "checksum": "fbce4154a42a3fa8e3903c8ab4ae8cbac769a641de2274c0ed83ccf978f6b770"}, "https://media.githubusercontent.com/media/persiannlp/parsinlu/master/data/translation/translation_combined_en_fa/dev.tsv": {"num_bytes": 435450, "checksum": "337a051b881709033da9c9e34405102feaca48879d59895d31c953e6cb880b74"}, "https://media.githubusercontent.com/media/persiannlp/parsinlu/master/data/translation/translation_combined_en_fa/test.tsv": {"num_bytes": 29348716, "checksum": "f53b2613e8e63b94d26853b06e16e5f86a6c5600200ec07f99e2d7da4868a06a"}}, "download_size": 282471769, "post_processing_size": null, "dataset_size": 304274140, "size_in_bytes": 586745909}}
 
 
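The `download_checksums` block in the deleted JSON pins each source TSV to a SHA-256 digest. As a sketch of how such a pin could be verified after downloading (the URL and digest are copied verbatim from the JSON above; only standard-library modules are used):

```python
import hashlib
import urllib.request

# URL and expected digest copied from the dev.tsv entry in the JSON above.
URL = (
    "https://media.githubusercontent.com/media/persiannlp/parsinlu/"
    "master/data/translation/translation_combined_en_fa/dev.tsv"
)
EXPECTED = "337a051b881709033da9c9e34405102feaca48879d59895d31c953e6cb880b74"

with urllib.request.urlopen(URL) as resp:
    digest = hashlib.sha256(resp.read()).hexdigest()
assert digest == EXPECTED, f"checksum mismatch: {digest}"
```
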
parsinlu-repo/parsinlu_translation_en_fa-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:939bd7e359f60ab0fbd53fa3bff17dd985de66479b14323f018b8f3ba382e114
+ size 12192941
parsinlu-repo/parsinlu_translation_en_fa-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de52c3917090aa87f76a1750ef594b51263b84169c2abc8b5cad119de7eefc82
+ size 135105166
parsinlu-repo/parsinlu_translation_en_fa-validation.parquet ADDED
Binary file (242 kB).
 
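The two text diffs above are Git LFS pointers rather than the Parquet payloads themselves: three `key value` lines giving the spec version, the SHA-256 object id, and the payload size in bytes. A minimal sketch of parsing that pointer format (illustrative only; the real tooling is `git lfs`):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a git-lfs pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents copied from the test-split diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:939bd7e359f60ab0fbd53fa3bff17dd985de66479b14323f018b8f3ba382e114
size 12192941
"""
info = parse_lfs_pointer(pointer)
print(info["oid"], int(info["size"]))  # object id and payload size in bytes
```
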
parsinlu_translation_en_fa.py DELETED
@@ -1,143 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """ParsiNLU Persian machine translation task (English -> Persian)."""
-
- from __future__ import absolute_import, division, print_function
-
- import datasets
-
-
- logger = datasets.logging.get_logger(__name__)
-
- _CITATION = """\
- @article{huggingface:dataset,
-     title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
-     author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
-     year = {2020},
-     journal = {arXiv e-prints},
-     eprint = {2012.06154},
- }
- """
-
- _DESCRIPTION = """\
- A Persian translation dataset (English -> Persian).
- """
-
- _HOMEPAGE = "https://github.com/persiannlp/parsinlu/"
-
- _LICENSE = "CC BY-NC-SA 4.0"
-
- _URL = "https://media.githubusercontent.com/media/persiannlp/parsinlu/master/data/translation/translation_combined_en_fa/"
- _URLs = {
-     "train": _URL + "train.tsv",
-     "dev": _URL + "dev.tsv",
-     "test": _URL + "test.tsv",
- }
-
-
- class ParsinluReadingComprehension(datasets.GeneratorBasedBuilder):
-     """ParsiNLU Persian machine translation task."""
-
-     VERSION = datasets.Version("1.0.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(
-             name="parsinlu-repo", version=VERSION, description="ParsiNLU repository: translation"
-         ),
-     ]
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "source": datasets.Value("string"),
-                 "targets": datasets.features.Sequence(datasets.Value("string")),
-                 "category": datasets.Value("string"),
-             }
-         )
-
-         return datasets.DatasetInfo(
-             # Description shown on the datasets page.
-             description=_DESCRIPTION,
-             # Columns of the dataset and their types.
-             features=features,
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         data_dir = dl_manager.download_and_extract(_URLs)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs are passed to _generate_examples.
-                 gen_kwargs={"filepath": data_dir["train"], "split": "train"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={"filepath": data_dir["test"], "split": "test"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={"filepath": data_dir["dev"], "split": "dev"},
-             ),
-         ]
-
-     def _generate_examples(self, filepath, split):
-         logger.info("generating examples from = %s", filepath)
-
-         with open(filepath, encoding="utf-8") as f:
-             for id_, row in enumerate(f):
-                 try:
-                     if id_ == 0:
-                         # Skip the TSV header row.
-                         continue
-                     row = row.split("\t")
-
-                     if len(row) < 3:
-                         logger.warning("Ignoring a line that does not have three columns: %s", row)
-                         continue
-                     source = row[0].replace("\t", "").replace("\n", "")
-                     # Multiple gold translations are separated by '///'.
-                     targets = row[1].replace("\t", "").replace("\n", "").split("///")
-                     category = row[2].replace("\t", "").replace("\n", "")
-                     yield id_, {
-                         "source": source,
-                         "targets": targets,
-                         "category": category,
-                     }
-                 except Exception:
-                     logger.warning("skipping a malformed line")
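
With the loading script deleted and the Parquet files in place, the same splits load through the datasets library's built-in Parquet support. A sketch follows; the repository id is an assumption inferred from this commit's file names, so substitute the dataset's actual Hub id:

```python
from datasets import load_dataset

# Hub id below is a guess based on this commit's file names; replace it
# with the dataset's actual repository id.
ds = load_dataset("persiannlp/parsinlu_translation_en_fa")

print(ds)                   # DatasetDict with train/validation/test splits
print(ds["validation"][0])  # {'source': ..., 'targets': [...], 'category': ...}
```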