parquet-converter
committed 2aae1c4
Parent(s): 20891d6
Update parquet files

Files changed:
- README.md +0 -219
- cc_news.py +0 -114
- dataset_infos.json +0 -1
- plain_text/cc_news-train-00000-of-00005.parquet +3 -0
- plain_text/cc_news-train-00001-of-00005.parquet +3 -0
- plain_text/cc_news-train-00002-of-00005.parquet +3 -0
- plain_text/cc_news-train-00003-of-00005.parquet +3 -0
- plain_text/cc_news-train-00004-of-00005.parquet +3 -0
README.md
DELETED
@@ -1,219 +0,0 @@
---
pretty_name: CC-News
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: cc-news
dataset_info:
  features:
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: domain
    dtype: string
  - name: date
    dtype: string
  - name: description
    dtype: string
  - name: url
    dtype: string
  - name: image_url
    dtype: string
  config_name: plain_text
  splits:
  - name: train
    num_bytes: 2016418133
    num_examples: 708241
  download_size: 845131146
  dataset_size: 2016418133
---

# Dataset Card for CC-News

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [CC-News homepage](https://commoncrawl.org/2016/10/news-dataset-available/)
- **Point of Contact:** [Vladimir Blagojevic](mailto:dovlex@gmail.com)

### Dataset Summary

The CC-News dataset contains news articles from news sites all over the world. The data is available on AWS S3 in the Common Crawl bucket at /crawl-data/CC-NEWS/.
This version of the dataset has been prepared using [news-please](https://github.com/fhamborg/news-please), an integrated web crawler and information extractor for news.
It contains 708,241 English-language news articles published between January 2017 and December 2019, and represents a small portion of the English-language subset of CC-News.

### Supported Tasks and Leaderboards

CC-News has mostly been used for language model training.

### Languages

The text in the dataset is in English.

## Dataset Structure

### Data Instances

A dataset instance contains the article itself and the relevant article fields.
An example from the CC-News train set looks as follows:

```
{
    'date': '2017-08-14 00:00:00',
    'description': '"The spirit of Green Day has always been about rising above oppression."',
    'domain': '1041jackfm.cbslocal.com',
    'image_url': 'https://cbs1041jackfm.files.wordpress.com/2017/08/billie-joe-armstrong-theo-wargo-getty-images.jpg?w=946',
    'text': 'By Abby Hassler\nGreen Day’s Billie Joe Armstrong has always been outspoken about his political beliefs. Following the tragedy in Charlottesville, Virgina, over the weekend, Armstrong felt the need to speak out against the white supremacists who caused much of the violence.\nRelated: Billie Joe Armstrong Wins #TBT with Childhood Studio Photo\n“My heart feels heavy. I feel like what happened in Charlottesville goes beyond the point of anger,” Armstrong wrote on Facebook. “It makes me sad and desperate. shocked. I f—— hate racism more than anything.”\n“The spirit of Green Day has always been about rising above oppression. and sticking up for what you believe in and singing it at the top of your lungs,” Armstrong continued. “We grew up fearing nuclear holocaust because of the cold war. those days are feeling way too relevant these days. these issues are our ugly past.. and now it’s coming to haunt us. always resist these doomsday politicians. and in the words of our punk forefathers .. Nazi punks f— off.”',
    'title': 'Green Day’s Billie Joe Armstrong Rails Against White Nationalists',
    'url': 'http://1041jackfm.cbslocal.com/2017/08/14/billie-joe-armstrong-white-nationalists/'
}
```

### Data Fields

- `date`: date of publication
- `description`: description or summary of the article
- `domain`: source domain of the article (e.g. www.nytimes.com)
- `image_url`: URL of the article's image
- `text`: the actual article text in raw form
- `title`: title of the article
- `url`: article URL, i.e. the original URL where the article was scraped

### Data Splits

The CC-News dataset has only a training set, i.e. it has to be loaded with the `train` split specified:
`cc_news = load_dataset('cc_news', split="train")`

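Every field in a loaded example is a plain string. As a quick sanity check after loading, one can verify that an example carries exactly these seven fields; the helper below is an illustrative sketch (not part of the `datasets` API):

```python
# Hypothetical schema check for a CC-News example dict.
EXPECTED_FIELDS = ("title", "text", "domain", "date", "description", "url", "image_url")

def validate_example(example):
    """Return True if `example` has exactly the CC-News fields, all strings."""
    if set(example) != set(EXPECTED_FIELDS):
        return False
    return all(isinstance(example[field], str) for field in EXPECTED_FIELDS)

# After loading: validate_example(cc_news[0])
print(validate_example({field: "" for field in EXPECTED_FIELDS}))  # True
```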
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

The CC-News dataset has been proposed, created, and maintained by Sebastian Nagel.
The data is publicly available on AWS S3 in the Common Crawl bucket at /crawl-data/CC-NEWS/.
This version of the dataset has been prepared using [news-please](https://github.com/fhamborg/news-please), an integrated web crawler and information extractor for news.
It contains 708,241 English-language news articles published between January 2017 and December 2019.
Although news-please tags each news article with an appropriate language tag, these tags are somewhat unreliable.
To strictly isolate English-language articles, an additional check has been performed using the [spaCy langdetect pipeline](https://spacy.io/universe/project/spacy-langdetect): we selected articles whose text field scored an 80% probability or higher of being English.
There are no strict guarantees that each article has all the relevant fields. For example, 527,595 articles have a valid description field. All articles have what appears to be a valid image URL, but these have not been verified.

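The 80%-threshold check can be sketched as follows. This is an assumption about the filtering logic, not the curators' actual code; `detect_en_prob` stands in for the spaCy langdetect pipeline and may be any callable returning the probability that a text is English:

```python
def filter_english(articles, detect_en_prob, threshold=0.80):
    """Keep articles whose `text` field scores >= `threshold` probability of English."""
    kept = []
    for article in articles:
        text = article.get("text") or ""
        if text and detect_en_prob(text) >= threshold:
            kept.append(article)
    return kept

# Toy stand-in detector for demonstration only; real usage would call
# spacy-langdetect (or langdetect directly) instead.
def toy_detector(text):
    return 0.99 if " the " in f" {text.lower()} " else 0.10

articles = [
    {"text": "The fox jumped over the lazy dog."},
    {"text": "Der Fuchs sprang über den faulen Hund."},
]
print(len(filter_english(articles, toy_detector)))  # 1
```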
#### Who are the source language producers?

News websites throughout the world.

### Annotations

#### Annotation process

[N/A]

#### Who are the annotators?

[N/A]

### Personal and Sensitive Information

As one can imagine, the data contains contemporary public figures and other individuals who appeared in the news.

## Considerations for Using the Data

### Social Impact of Dataset

The purpose of this dataset is to help language model researchers develop better language models.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@InProceedings{Hamborg2017,
  author    = {Hamborg, Felix and Meuschke, Norman and Breitinger, Corinna and Gipp, Bela},
  title     = {news-please: A Generic News Crawler and Extractor},
  year      = {2017},
  booktitle = {Proceedings of the 15th International Symposium of Information Science},
  location  = {Berlin},
  doi       = {10.5281/zenodo.4120316},
  pages     = {218--223},
  month     = {March}
}
```

### Contributions

Thanks to [@vblagoje](https://github.com/vblagoje) for adding this dataset.
cc_news.py
DELETED
@@ -1,114 +0,0 @@
# coding=utf-8
# Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""The CC-News dataset is based on Common Crawl News Dataset by Sebastian Nagel"""


import json
import os
from fnmatch import fnmatch

import datasets


logger = datasets.logging.get_logger(__name__)


_DESCRIPTION = """\
CC-News containing news articles from news sites all over the world \
The data is available on AWS S3 in the Common Crawl bucket at /crawl-data/CC-NEWS/. \
This version of the dataset has 708241 articles. It represents a small portion of English \
language subset of the CC-News dataset created using news-please(Hamborg et al.,2017) to \
collect and extract English language portion of CC-News.
"""

_CITATION = """\
@InProceedings{Hamborg2017,
  author    = {Hamborg, Felix and Meuschke, Norman and Breitinger, Corinna and Gipp, Bela},
  title     = {news-please: A Generic News Crawler and Extractor},
  year      = {2017},
  booktitle = {Proceedings of the 15th International Symposium of Information Science},
  location  = {Berlin},
  doi       = {10.5281/zenodo.4120316},
  pages     = {218--223},
  month     = {March}
}
"""
_PROJECT_URL = "https://commoncrawl.org/2016/10/news-dataset-available/"
_DOWNLOAD_URL = "https://storage.googleapis.com/huggingface-nlp/datasets/cc_news/cc_news.tar.gz"


class CCNewsConfig(datasets.BuilderConfig):
    """BuilderConfig for CCNews."""

    def __init__(self, **kwargs):
        """BuilderConfig for CCNews.
        Args:
          **kwargs: keyword arguments forwarded to super.
        """
        super(CCNewsConfig, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)


class CCNews(datasets.GeneratorBasedBuilder):
    """CC-News dataset."""

    BUILDER_CONFIGS = [
        CCNewsConfig(
            name="plain_text",
            description="Plain text",
        )
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "title": datasets.Value("string"),
                    "text": datasets.Value("string"),
                    "domain": datasets.Value("string"),
                    "date": datasets.Value("string"),
                    "description": datasets.Value("string"),
                    "url": datasets.Value("string"),
                    "image_url": datasets.Value("string"),
                }
            ),
            supervised_keys=None,
            homepage=_PROJECT_URL,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download(_DOWNLOAD_URL)

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"files": dl_manager.iter_archive(archive)}),
        ]

    def _generate_examples(self, files):
        id_ = 0
        for article_file_path, f in files:
            if fnmatch(os.path.basename(article_file_path), "*.json"):
                article = json.load(f)
                yield id_, {
                    "title": article["title"].strip() if article["title"] is not None else "",
                    "text": article["maintext"].strip() if article["maintext"] is not None else "",
                    "domain": article["source_domain"].strip() if article["source_domain"] is not None else "",
                    "date": article["date_publish"].strip() if article["date_publish"] is not None else "",
                    "description": article["description"].strip() if article["description"] is not None else "",
                    "url": article["url"].strip() if article["url"] is not None else "",
                    "image_url": article["image_url"].strip() if article["image_url"] is not None else "",
                }
                id_ += 1
dataset_infos.json
DELETED
@@ -1 +0,0 @@
{"plain_text": {"description": "CC-News containing news articles from news sites all over the world The data is available on AWS S3 in the Common Crawl bucket at /crawl-data/CC-NEWS/. This version of the dataset has 708241 articles. It represents a small portion of English language subset of the CC-News dataset created using news-please(Hamborg et al.,2017) to collect and extract English language portion of CC-News.\n", "citation": "@InProceedings{Hamborg2017,\n author = {Hamborg, Felix and Meuschke, Norman and Breitinger, Corinna and Gipp, Bela},\n title = {news-please: A Generic News Crawler and Extractor},\n year = {2017},\n booktitle = {Proceedings of the 15th International Symposium of Information Science},\n location = {Berlin},\n doi = {10.5281/zenodo.4120316},\n pages = {218--223},\n month = {March}\n}\n", "homepage": "https://commoncrawl.org/2016/10/news-dataset-available/", "license": "", "features": {"title": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "domain": {"dtype": "string", "id": null, "_type": "Value"}, "date": {"dtype": "string", "id": null, "_type": "Value"}, "description": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "image_url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "cc_news", "config_name": "plain_text", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2016418133, "num_examples": 708241, "dataset_name": "cc_news"}}, "download_checksums": {"https://storage.googleapis.com/huggingface-nlp/datasets/cc_news/cc_news.tar.gz": {"num_bytes": 845131146, "checksum": "1aaf8e5af33e3a73472b58afba48c6a839ebc2dd190c4e0754fc00f8899a9cec"}}, "download_size": 845131146, "post_processing_size": null, "dataset_size": 2016418133, "size_in_bytes": 2861549279}}
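The `download_checksums` entry above records a SHA-256 digest for the source tarball. A minimal sketch of verifying a downloaded copy against it with the standard library (the file path below is illustrative):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks to avoid loading it whole."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Expected value copied from the download_checksums entry above:
EXPECTED = "1aaf8e5af33e3a73472b58afba48c6a839ebc2dd190c4e0754fc00f8899a9cec"
# After downloading cc_news.tar.gz:
# assert sha256_of_file("cc_news.tar.gz") == EXPECTED
```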
plain_text/cc_news-train-00000-of-00005.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d531cac1b1e4d30aea54c6ec58b795d9091279ded511f61b2b6a5c33488ba4b1
size 281568461
plain_text/cc_news-train-00001-of-00005.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:463918eb299857dee3aadf1f35eb97bd9fea41ec1c924534c23928eccb005108
size 278139405
plain_text/cc_news-train-00002-of-00005.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:40fab2561998d6798f46859c435617864721a325efaf063347b6ae4fa757cd67
size 274043464
plain_text/cc_news-train-00003-of-00005.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d21c8fe66c6770c3828f975316c664a61ccbc95a0352bb361ba5be00faf0c64c
size 277516241
plain_text/cc_news-train-00004-of-00005.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:69bec5a6af12e6cebb34ed9b71f6b51a577965544c00160dea9f7259ebbfa548
size 5538554
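The five Git LFS pointers follow Hugging Face's `{split}-{index:05d}-of-{total:05d}` shard naming. A small bookkeeping sketch that enumerates the shard paths and totals their sizes, using the `size` values recorded in the pointer files above:

```python
# Shard sizes copied from the `size` lines of the LFS pointers above.
SHARD_SIZES = [281568461, 278139405, 274043464, 277516241, 5538554]
TOTAL = len(SHARD_SIZES)

shards = [
    f"plain_text/cc_news-train-{i:05d}-of-{TOTAL:05d}.parquet"
    for i in range(TOTAL)
]
total_bytes = sum(SHARD_SIZES)

print(shards[0])    # plain_text/cc_news-train-00000-of-00005.parquet
print(total_bytes)  # 1116806125 (~1.04 GiB of parquet data)
```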