Modalities: Text · Formats: parquet · Languages: Russian · Libraries: Datasets, Dask
IlyaGusev parquet-converter committed
Commit c4d8e46 · verified · 1 Parent(s): 3faadb1

- Update parquet files (565e71a046c25eb909d275d7dcdc65f27a3d79d3)
- Update parquet files (68b960c0ca9b73e9f54194e7dc209c8ce32feb00)
- Update parquet files (60eedbdbcb6836654a0880ae94bdeea97a11d578)
- Update parquet files (d14bb66e025ceb6c041aec83e5eace1596e8a5c9)
- Update parquet files (c22d2cd1542799b6d9572b3de8d672ecdf3c6087)
- Update duckdb index files (826ff2fcac8e563cca8386a1d52d972023e4912c)
- Update duckdb index files (5fd1bf0744acb26f4ce849c585eb3cdd910b841f)
- Update parquet files (4a9537b0315a45a6206bbd85f7667d1db267f369)
- Update duckdb index files (914963c8c89dff2845a21d3326e77f34d848aa75)
- Update duckdb index files (11f94f8bc0ac5210fc2e552685657649daa1ab4d)
- Update duckdb index files (b07cc38ddc37d89088e5ddaeb9cab37c98223ad7)
- Update duckdb index files (629c6ddab8d13c13db3b8b20a9993dec00968668)
- Update parquet files (2a31b0dea63d42c760651fd9bd8f84117a8d0803)
- Update duckdb index files (621ee9b95cc969eb7a081af5ae24ef1309f14003)
- Update duckdb index files (00b8045a1c34d15fd814ed2c9a78f73e437762bc)
- Update duckdb index files (653da45d67cc25d7a7d06c990e28357806385360)
- Update parquet files (3e33755b20cd6a2776f5b62963e8fa0bd5871920)
- Update duckdb index files (11736866cdcae128b601faee79b4243187012e90)
- Update duckdb index files (8e947aaf9f97beebc5b3bbae648edaa4014466db)
- Update duckdb index files (6217bfb5c4fde1ca53fe92f92589cea669e675e1)
- Update duckdb index files (d38d8300232e6b1a5a83ac5b9b1628453ff643b5)
- Update duckdb index files (a9ef952f33dab4b8dc652bd330ad3671736ce202)
- Update duckdb index files (8caa4ac13292793598e3b0174bd1b2bef17e8f4b)
- Update parquet files (ec615ca73fb3a5a7a7861205907db112fd80cade)
- Update duckdb index files (3dc0a84a98401a3625c7576d4a212d1d161e5f03)
- Update duckdb index files (3f2425ff352e31a6bb4e0cc3c134aee0b1948837)
- Update duckdb index files (7d0518972f11c59391d6d09de9d41744a07b7234)
- Update duckdb index files (8abdafb262edd9c061c652d3dde5a4db047b0b82)
- Update duckdb index files (e7c5167113fa7a89146fe4086d476211775df9f8)
- Update duckdb index files (5186c676cf0878edb9ae197692ca03b553b7f94e)
- Delete old duckdb index files (66e06aa30486cd4e5b6703aa9fca32b22725ea7a)
- Merge branch 'convert/parquet' into pr/6 (e474ca72554e54ccce821bed9e3b108081cdac4e)
- fix info (062db4df53eb82088c7874c588a12425b5cc30eb)


Co-authored-by: Parquet-converter (BOT) <parquet-converter@users.noreply.huggingface.co>

.gitattributes CHANGED
@@ -28,3 +28,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 gazeta_train.jsonl filter=lfs diff=lfs merge=lfs -text
 gazeta_val.jsonl filter=lfs diff=lfs merge=lfs -text
 gazeta_test.jsonl filter=lfs diff=lfs merge=lfs -text
+default/test/index.duckdb filter=lfs diff=lfs merge=lfs -text
+default/validation/index.duckdb filter=lfs diff=lfs merge=lfs -text
+default/train/index.duckdb filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -18,6 +18,30 @@ multilinguality:
 source_datasets:
 - original
 paperswithcode_id: gazeta
+dataset_info:
+  features:
+  - name: text
+    dtype: string
+  - name: summary
+    dtype: string
+  - name: title
+    dtype: string
+  - name: date
+    dtype: string
+  - name: url
+    dtype: string
+  splits:
+  - name: train
+    num_bytes: 547118436
+    num_examples: 60964
+  - name: validation
+    num_bytes: 55784053
+    num_examples: 6369
+  - name: test
+    num_bytes: 60816821
+    num_examples: 6793
+  download_size: 332486618
+  dataset_size: 663719310
 ---
 
 # Dataset Card for Gazeta
@@ -38,14 +62,11 @@ paperswithcode_id: gazeta
   - [Annotations](#annotations)
   - [Personal and Sensitive Information](#personal-and-sensitive-information)
 - [Considerations for Using the Data](#considerations-for-using-the-data)
-  - [Social Impact of Dataset](#social-impact-of-dataset)
   - [Discussion of Biases](#discussion-of-biases)
-  - [Other Known Limitations](#other-known-limitations)
 - [Additional Information](#additional-information)
   - [Dataset Curators](#dataset-curators)
   - [Licensing Information](#licensing-information)
   - [Citation Information](#citation-information)
-  - [Contributions](#contributions)
 
 ## Dataset Description
 
@@ -138,34 +159,16 @@ When the first version of the dataset was collected, there were no other datasets
 
 Texts and summaries were written by journalists at [Gazeta](https://www.gazeta.ru/).
 
-### Annotations
-
-#### Annotation process
-
-[N/A]
-
-#### Who are the annotators?
-
-[N/A]
-
 ### Personal and Sensitive Information
 
 The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset.
 
 ## Considerations for Using the Data
 
-### Social Impact of Dataset
-
-[More Information Needed]
-
 ### Discussion of Biases
 
 It is a dataset from a single source. Thus it has a constrained text style and event perspective.
 
-### Other Known Limitations
-
-[More Information Needed]
-
 ## Additional Information
 
 ### Dataset Curators
@@ -191,7 +194,3 @@ Legal basis for distribution of the dataset: https://www.gazeta.ru/credits.shtml
   isbn="978-3-030-59082-6"
   }
 ```
-
-### Contributions
-
-[N/A]
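The split sizes declared in the new dataset_info block are internally consistent: the per-split `num_bytes` values sum to `dataset_size`, and the `num_examples` values sum to the full corpus size. A quick pure-Python check, with the values copied from the YAML metadata:

```python
# Sanity check on the dataset_info metadata: (num_bytes, num_examples) per split.
splits = {
    "train": (547118436, 60964),
    "validation": (55784053, 6369),
    "test": (60816821, 6793),
}
total_bytes = sum(b for b, _ in splits.values())
total_examples = sum(n for _, n in splits.values())
print(total_bytes)     # 663719310, matches dataset_size
print(total_examples)  # 74126 articles in total
```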
dataset_infos.json DELETED
@@ -1 +0,0 @@
-{"default": {"description": "Gazeta: Dataset for Automatic Summarization of Russian News", "citation": "\n@InProceedings{10.1007/978-3-030-59082-6_9,\n author=\"Gusev, Ilya\",\n editor=\"Filchenkov, Andrey and Kauttonen, Janne and Pivovarova, Lidia\",\n title=\"Dataset for Automatic Summarization of Russian News\",\n booktitle=\"Artificial Intelligence and Natural Language\",\n year=\"2020\",\n publisher=\"Springer International Publishing\",\n address=\"Cham\",\n pages=\"122--134\",\n isbn=\"978-3-030-59082-6\"\n}\n", "homepage": "https://github.com/IlyaGusev/gazeta", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "date": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "text", "output": "summary"}, "task_templates": null, "builder_name": "gazeta_dataset", "config_name": "default", "version": {"version_str": "2.0.0", "description": null, "major": 2, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 547118576, "num_examples": 60964, "dataset_name": "gazeta_dataset"}, "test": {"name": "test", "num_bytes": 60816841, "num_examples": 6793, "dataset_name": "gazeta_dataset"}, "validation": {"name": "validation", "num_bytes": 55784073, "num_examples": 6369, "dataset_name": "gazeta_dataset"}}, "download_checksums": {"gazeta_train.jsonl": {"num_bytes": 549801555, "checksum": "678ce0eab9b3026c9f3388c6f8b2e5a48c84590819e175a462cf15749bc0c60e"}, "gazeta_val.jsonl": {"num_bytes": 56064530, "checksum": "bb1e1edd75b9de85af193de473e301655f59e345be3c29ce9087326adada24fd"}, "gazeta_test.jsonl": {"num_bytes": 61115756, "checksum": "3963ca7e2313c4bb75a4140abd614e17d98199c9f03f03490ab6afb19bfbf6cf"}}, "download_size": 666981841, "post_processing_size": null, "dataset_size": 663719490, "size_in_bytes": 1330701331}}
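The deleted dataset_infos.json is also internally consistent: the three jsonl `num_bytes` under `download_checksums` sum to `download_size`, the split byte counts sum to `dataset_size`, and `size_in_bytes` is the sum of those two totals. A check with the numbers copied from the deleted JSON:

```python
# Consistency check on the deleted dataset_infos.json totals.
downloads = {"train": 549801555, "val": 56064530, "test": 61115756}
split_bytes = {"train": 547118576, "validation": 55784073, "test": 60816841}
download_size = sum(downloads.values())
dataset_size = sum(split_bytes.values())
print(download_size)                 # 666981841
print(dataset_size)                  # 663719490
print(download_size + dataset_size)  # 1330701331, the declared size_in_bytes
```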
gazeta_val.jsonl → default/test/0000.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bb1e1edd75b9de85af193de473e301655f59e345be3c29ce9087326adada24fd
-size 56064530
+oid sha256:db72251f11b33a7d69a7de709a4026a19de04f8039ad3ef1696986ba13a6d959
+size 30276385
gazeta_train.jsonl → default/train/0000.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:678ce0eab9b3026c9f3388c6f8b2e5a48c84590819e175a462cf15749bc0c60e
-size 549801555
+oid sha256:932df194fa24b4bd3cc50ede7f79ebf9c0fb57eff0972dca336fa0e5ea747710
+size 251643354
gazeta_test.jsonl → default/train/0001.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3963ca7e2313c4bb75a4140abd614e17d98199c9f03f03490ab6afb19bfbf6cf
-size 61115756
+oid sha256:50ce4af4957fff7d1e6adf4ff6358a11abee870aece6e0536549f98141e5b697
+size 22741670
default/validation/0000.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c59c0e15fa659bc3e33c150d605e1c8139646fc04e402540ea84568cf80f323e
+size 27825209
gazeta.py DELETED
@@ -1,92 +0,0 @@
-# coding=utf-8
-# Copyright 2020 The HuggingFace Datasets Authors and Ilya Gusev
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Lint as: python3
-"""Gazeta: Dataset for Automatic Summarization of Russian News"""
-
-
-import json
-import os
-
-import datasets
-
-
-_CITATION = """
-@InProceedings{10.1007/978-3-030-59082-6_9,
-    author="Gusev, Ilya",
-    editor="Filchenkov, Andrey and Kauttonen, Janne and Pivovarova, Lidia",
-    title="Dataset for Automatic Summarization of Russian News",
-    booktitle="Artificial Intelligence and Natural Language",
-    year="2020",
-    publisher="Springer International Publishing",
-    address="Cham",
-    pages="122--134",
-    isbn="978-3-030-59082-6"
-}
-"""
-
-_DESCRIPTION = "Dataset for automatic summarization of Russian news"
-_HOMEPAGE = "https://github.com/IlyaGusev/gazeta"
-_URLS = {
-    "train": "gazeta_train.jsonl",
-    "val": "gazeta_val.jsonl",
-    "test": "gazeta_test.jsonl"
-}
-_DOCUMENT = "text"
-_SUMMARY = "summary"
-
-
-class GazetaDataset(datasets.GeneratorBasedBuilder):
-    """Gazeta Dataset"""
-
-    VERSION = datasets.Version("2.0.0")
-
-    BUILDER_CONFIGS = [
-        datasets.BuilderConfig(name="default", version=VERSION, description=""),
-    ]
-
-    DEFAULT_CONFIG_NAME = "default"
-
-    def _info(self):
-        features = datasets.Features(
-            {
-                _DOCUMENT: datasets.Value("string"),
-                _SUMMARY: datasets.Value("string"),
-                "title": datasets.Value("string"),
-                "date": datasets.Value("string"),
-                "url": datasets.Value("string")
-            }
-        )
-        return datasets.DatasetInfo(
-            description=_DESCRIPTION,
-            features=features,
-            supervised_keys=(_DOCUMENT, _SUMMARY),
-            homepage=_HOMEPAGE,
-            citation=_CITATION,
-        )
-
-    def _split_generators(self, dl_manager):
-        downloaded_files = dl_manager.download_and_extract(_URLS)
-        return [
-            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
-            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
-            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["val"]}),
-        ]
-
-    def _generate_examples(self, filepath):
-        with open(filepath, encoding="utf-8") as f:
-            for id_, row in enumerate(f):
-                data = json.loads(row)
-                yield id_, data
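The deleted loader's `_generate_examples` simply read one JSON object per jsonl line and yielded it with an integer id; the parquet conversion makes that script unnecessary. A standalone stdlib sketch of that core loop, run here against two invented sample rows rather than a real jsonl file:

```python
import io
import json

def generate_examples(lines):
    """Mirror of the deleted _generate_examples: one JSON object per line,
    yielded as (id, dict). Works on any iterable of lines."""
    for id_, row in enumerate(lines):
        yield id_, json.loads(row)

# Two invented rows standing in for gazeta_train.jsonl content.
sample = io.StringIO(
    '{"text": "t1", "summary": "s1"}\n'
    '{"text": "t2", "summary": "s2"}\n'
)
examples = list(generate_examples(sample))
print(len(examples))            # 2
print(examples[0][1]["summary"])  # s1
```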