---
license: cc
annotations_creators:
- no-annotation
language_creators:
- found
language:
- id
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- language-modeling
paperswithcode_id: oscar
---

### Dataset Summary

KoPI-CC (Korpus Perayapan Indonesia - CC) is an Indonesian-only extract of Common Crawl snapshots. Each snapshot is extracted with [ungoliant](https://github.com/oscar-corpus/ungoliant) and then filtered with deduplication techniques such as exact-hash (MD5) deduplication and MinHash LSH near-deduplication.

### Preprocessing

Each folder name inside the snapshots folder denotes the preprocessing technique that has been applied:

- Raw
  - processed directly from a CC snapshot using ungoliant, without any additional filter; see the ungoliant paper (citation below)
  - uses the same raw CC snapshots for `2021_10` and `2021_49` as the OSCAR dataset ([2109](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/tree/main/packaged_nondedup/id) and [2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201/tree/main/compressed/id_meta))
- Dedup
  - uses data from the Raw folder
  - applies cleaning steps to the text of every document:
    - fix HTML
    - remove noisy Unicode
    - fix news tags
    - remove control characters
  - filters out short texts (fewer than 20 words)
  - filters by character ratios within the text:
    - min_alphabet_ratio (0.75)
    - max_upper_ratio (0.10)
    - max_number_ratio (0.05)
  - filters by exact deduplication (a minimal sketch of this step follows the list):
    - hash every text with MD5 (`hashlib`)
    - remove non-unique hashes
  - the full code for the dedup step is adapted from [here](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned/tree/main)
- Neardup
  - uses data from the Dedup folder
  - builds an index of near-duplicate clusters using [MinHash and LSH](http://ekzhu.com/datasketch/lsh.html) with the following configuration (see the sketch after this list):
    - 128 permutations
    - n-gram size of 6
    - word tokenization (text split on whitespace)
    - 0.8 similarity threshold
  - filters by removing all indexed members of each cluster
  - the full code for the neardup step is adapted from [here](https://github.com/ChenghaoMou/text-dedup)
- Neardup_clean
  - uses data from the Neardup folder
  - removes documents containing words from a selection of the [Indonesian Bad Words](https://github.com/acul3/c4_id_processed/blob/67e10c086d43152788549ef05b7f09060e769993/clean/badwords_ennl.py#L64)
  - removes sentences containing:
    - fewer than 3 words
    - a word longer than 1000 characters
    - an end symbol not matching end-of-sentence punctuation
    - strings associated with JavaScript code (e.g. `{`), lorem ipsum, or Indonesian policy boilerplate
  - removes documents (after sentence filtering):
    - containing fewer than 5 sentences
    - containing fewer than 500 or more than 50,000 characters
  - a sketch of these sentence and document filters also follows this list
  - the full code for the neardup_clean step is adapted from [here](https://gitlab.com/yhavinga/c4nlpreproc)
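
As a rough illustration of the Dedup step, the sketch below applies the word-count and character-ratio filters and the MD5 exact deduplication with the thresholds listed above. This is a minimal sketch, not the actual pipeline code (linked above); the helper names are ours.

```python
import hashlib

MIN_WORDS = 20
MIN_ALPHABET_RATIO = 0.75
MAX_UPPER_RATIO = 0.10
MAX_NUMBER_RATIO = 0.05

def keep_document(text: str) -> bool:
    """Word-count and character-ratio filters from the Dedup step."""
    words = text.split()
    if len(words) < MIN_WORDS:
        return False
    n = len(text)
    if sum(c.isalpha() for c in text) / n < MIN_ALPHABET_RATIO:
        return False
    if sum(c.isupper() for c in text) / n > MAX_UPPER_RATIO:
        return False
    if sum(c.isdigit() for c in text) / n > MAX_NUMBER_RATIO:
        return False
    return True

def exact_dedup(documents):
    """Keep only the first occurrence of each MD5-identical text."""
    seen = set()
    for doc in documents:
        digest = hashlib.md5(doc["text"].encode("utf-8")).hexdigest()
        if digest not in seen and keep_document(doc["text"]):
            seen.add(digest)
            yield doc
```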
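
The Neardup configuration maps directly onto [datasketch](http://ekzhu.com/datasketch/lsh.html). Below is a minimal sketch with the stated parameters (128 permutations, 6-word shingles split on whitespace, 0.8 threshold); unlike the real pipeline, which indexes everything first and then drops all clustered members, this simplified version streams through the data and keeps the first document of each cluster.

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128
NGRAM = 6
THRESHOLD = 0.8

def minhash_of(text: str) -> MinHash:
    """MinHash over 6-word shingles, tokenizing on whitespace."""
    words = text.split()
    shingles = {" ".join(words[i:i + NGRAM])
                for i in range(max(len(words) - NGRAM + 1, 1))}
    m = MinHash(num_perm=NUM_PERM)
    for s in shingles:
        m.update(s.encode("utf-8"))
    return m

def near_dedup(documents):
    """Drop a document when the LSH index already holds a near-duplicate."""
    lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
    for i, doc in enumerate(documents):
        m = minhash_of(doc["text"])
        if lsh.query(m):          # an indexed document is >= 0.8 similar
            continue
        lsh.insert(str(i), m)
        yield doc
```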
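
The Neardup_clean sentence and document filters can be approximated as below. The sentence splitter, the end-of-sentence punctuation set, and the boilerplate substrings are simplified stand-ins; the badword list and the full rules live in the linked c4nlpreproc code.

```python
import re

# Simplified stand-ins; the real pipeline uses fuller lists of JavaScript,
# lorem ipsum, and Indonesian policy-boilerplate markers.
BAD_SUBSTRINGS = ("{", "lorem ipsum", "javascript")
END_PUNCT = (".", "!", "?", '"', "'")

def keep_sentence(sentence: str) -> bool:
    """Sentence-level filters from the Neardup_clean step."""
    words = sentence.split()
    if len(words) < 3:
        return False
    if any(len(w) > 1000 for w in words):
        return False
    if not sentence.rstrip().endswith(END_PUNCT):
        return False
    if any(b in sentence.lower() for b in BAD_SUBSTRINGS):
        return False
    return True

def clean_document(text: str):
    """Filter sentences first, then apply the document-level thresholds."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if keep_sentence(s)]
    if len(sentences) < 5:
        return None                      # fewer than 5 sentences
    cleaned = " ".join(sentences)
    if not 500 <= len(cleaned) <= 50_000:
        return None                      # outside the 500-50,000 char window
    return cleaned
```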

## Dataset Structure

### Data Instances

An example from the dataset:

```
{'text': 'Panitia Kerja (Panja) pembahasan RUU Cipta Kerja (Ciptaker) DPR RI memastikan naskah UU Ciptaker sudah final, tapi masih dalam penyisiran. Penyisiran dilakukan agar isi UU Ciptaker sesuai dengan kesepakatan dalam pembahasan dan tidak ada salah pengetikan (typo).\n"Kan memang sudah diumumkan, naskah final itu sudah. Cuma kita sekarang … DPR itu kan punya waktu 7 hari sebelum naskah resminya kita kirim ke pemerintah. Nah, sekarang itu kita sisir, jangan sampai ada yang salah pengetikan, tapi tidak mengubah substansi," kata Ketua Panja RUU Ciptaker Supratman Andi Agtas saat berbincang dengan detikcom, Jumat (9/10/2020) pukul 10.56 WIB.\nSupratman mengungkapkan Panja RUU Ciptaker menggelar rapat hari ini untuk melakukan penyisiran terhadap naskah UU Ciptaker. Panja, sebut dia, bekerja sama dengan pemerintah dan ahli bahasa untuk melakukan penyisiran naskah.\n"Sebentar, siang saya undang seluruh poksi-poksi (kelompok fraksi) Baleg (Badan Legislasi DPR), anggota Panja itu datang ke Baleg untuk melihat satu per satu, jangan sampai …. Karena kan sekarang ini tim dapur pemerintah dan DPR lagi bekerja bersama dengan ahli bahasa melihat jangan sampai ada yang typo, redundant," terangnya.\nSupratman membenarkan bahwa naskah UU Ciptaker yang final itu sudah beredar. Ketua Baleg DPR itu memastikan penyisiran yang dilakukan tidak mengubah substansi setiap pasal yang telah melalui proses pembahasan.\n"Itu yang sudah dibagikan. Tapi kan itu substansinya yang tidak mungkin akan berubah. Nah, kita pastikan nih dari sisi drafting-nya yang jadi kita pastikan," tutur Supratman.\nLebih lanjut Supratman menjelaskan DPR memiliki waktu 7 hari untuk melakukan penyisiran. Anggota DPR dari Fraksi Gerindra itu memastikan paling lambat Selasa (13/10) pekan depan, naskah UU Ciptaker sudah bisa diakses oleh masyarakat melalui situs DPR.\n"Kita itu, DPR, punya waktu sampai 7 hari kerja. Jadi harusnya hari Selasa sudah final semua, paling lambat. Tapi saya usahakan hari ini bisa final. Kalau sudah final, semua itu langsung bisa diakses di web DPR," terang Supratman.\nDiberitakan sebelumnya, Wakil Ketua Baleg DPR Achmad Baidowi mengakui naskah UU Ciptaker yang telah disahkan di paripurna DPR masih dalam proses pengecekan untuk menghindari kesalahan pengetikan. Anggota Komisi VI DPR itu menyinggung soal salah ketik dalam revisi UU KPK yang disahkan pada 2019.\n"Mengoreksi yang typo itu boleh, asalkan tidak mengubah substansi. Jangan sampai seperti tahun lalu, ada UU salah ketik soal umur \'50 (empat puluh)\', sehingga pemerintah harus mengonfirmasi lagi ke DPR," ucap Baidowi, Kamis (8/10).',
 'url': 'https://news.detik.com/berita/d-5206925/baleg-dpr-naskah-final-uu-ciptaker-sedang-diperbaiki-tanpa-ubah-substansi?tag_from=wp_cb_mostPopular_list&_ga=2.71339034.848625040.1602222726-629985507.1602222726',
 'timestamp': '2021-10-22T04:09:47Z',
 'meta': '{"warc_headers": {"content-length": "2747", "content-type": "text/plain", "warc-date": "2021-10-22T04:09:47Z", "warc-record-id": "<urn:uuid:a5b2cc09-bd2b-4d0e-9e5b-2fcc5fce47cb>", "warc-identified-content-language": "ind,eng", "warc-target-uri": "https://news.detik.com/berita/d-5206925/baleg-dpr-naskah-final-uu-ciptaker-sedang-diperbaiki-tanpa-ubah-substansi?tag_from=wp_cb_mostPopular_list&_ga=2.71339034.848625040.1602222726-629985507.1602222726", "warc-block-digest": "sha1:65AWBDBLS74AGDCGDBNDHBHADOKSXCKV", "warc-type": "conversion", "warc-refers-to": "<urn:uuid:b7ceadba-7120-4e38-927c-a50db21f0d4f>"}, "identification": {"label": "id", "prob": 0.6240405}, "annotations": null, "line_identifications": [null, {"label": "id", "prob": 0.9043896}, null, null, {"label": "id", "prob": 0.87111086}, {"label": "id", "prob": 0.9095224}, {"label": "id", "prob": 0.8579232}, {"label": "id", "prob": 0.81366056}, {"label": "id", "prob": 0.9286813}, {"label": "id", "prob": 0.8435194}, {"label": "id", "prob": 0.8387821}, null]}'}
```

### Data Fields

The data contains the following fields:

- `url`: URL of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp of extraction as a string
- `meta`: JSON representation of the original metadata produced by the ungoliant tools, documented [here](https://oscar-corpus.com/post/oscar-v22-01/) (`warc_headers`)
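
Note that `meta` is stored as a JSON string rather than a nested structure, so it has to be parsed before use. A quick sketch (the repository id below is a placeholder, not the actual dataset path):

```python
import json
from datasets import load_dataset

# Placeholder repository id; point this at the actual KoPI-CC dataset path.
ds = load_dataset("user/KoPI-CC", split="train", streaming=True)

record = next(iter(ds))
meta = json.loads(record["meta"])                # parse the JSON string
print(meta["warc_headers"]["warc-target-uri"])   # original crawl URL
print(meta["identification"])                    # e.g. {'label': 'id', 'prob': 0.62}
```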

### Citation Information

```
@ARTICLE{2022arXiv220106642A,
       author = {{Abadji}, Julien and {Ortiz Suarez}, Pedro and {Romary}, Laurent and {Sagot}, Beno{\^\i}t},
        title = "{Towards a Cleaner Document-Oriented Multilingual Crawled Corpus}",
      journal = {arXiv e-prints},
     keywords = {Computer Science - Computation and Language},
         year = 2022,
        month = jan,
          eid = {arXiv:2201.06642},
        pages = {arXiv:2201.06642},
archivePrefix = {arXiv},
       eprint = {2201.06642},
 primaryClass = {cs.CL},
       adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv220106642A},
      adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
  author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
  title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
  series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
  editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
  publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
  address = {Mannheim},
  doi = {10.14618/ids-pub-10468},
  url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
  pages = {1 -- 9},
  year = {2021},
  abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
  language = {en}
}
```