rdemorais committed
Commit 6efd2b4
1 Parent(s): f37535f

Update README.md
Files changed (1):
README.md +50 -0
README.md CHANGED

---
license: apache-2.0
task_categories:
- fill-mask
- text-generation
language:
- pt
size_categories:
- 10M<n<100M
---

## Description

This is a cleaned version of the Portuguese (pt-BR) section of the AllenAI mC4 dataset. The original dataset can be found here: https://huggingface.co/datasets/allenai/c4
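
The data can be consumed with the datasets library, for example in streaming mode. The sketch below is minimal and assumes a placeholder repository id ("user/c4-pt-cleaned"); substitute the actual id of this dataset.

```
# A minimal sketch, assuming the `datasets` library is installed and
# "user/c4-pt-cleaned" stands in for this dataset's actual repository id.
from datasets import load_dataset

dataset = load_dataset("user/c4-pt-cleaned", split="train", streaming=True)

# Stream the first record instead of downloading the whole corpus.
print(next(iter(dataset)))
```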

## Cleaning procedure

We applied the same cleaning procedure as explained here: https://gitlab.com/yhavinga/c4nlpreproc.git

The repository offers two strategies. The first one, found in the main.py file, uses PySpark to create a dataframe that can both clean the text and create a pseudo mix of the entire dataset.
We found this strategy clever, but it is time- and resource-consuming.
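
For illustration only, the core of that PySpark idea is sketched below; the input path, the cleaning rule, and the output path are assumptions, not the repository's actual main.py.

```
# A minimal sketch of the PySpark strategy: read the shards, apply a
# placeholder cleaning rule, and pseudo-shuffle the rows with rand().
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("c4-pt-clean").getOrCreate()

# Each shard is JSON lines, which spark.read.json handles directly.
df = spark.read.json("multilingual/c4-pt.*.json.gz")

cleaned = df.filter(F.length("text") > 0)    # placeholder clean rule
shuffled = cleaned.orderBy(F.rand(seed=42))  # pseudo mix of the rows

shuffled.write.json("c4-pt-cleaned")
```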

To overcome this cost we turned to the second approach, which leverages the singlefile.py script together with GNU parallel.

We did the following:

```
# clone without downloading the LFS payloads, then pull only the pt shards
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/allenai/c4
cd c4
git lfs pull --include "multilingual/c4-pt.*.json.gz"

# run the cleaning script on every Portuguese shard, 96 jobs at a time
ls c4-pt* | parallel --gnu --jobs 96 --progress python ~/c4nlpreproc/singlefile.py {}
```

Be advised that you should install GNU parallel first if you want to reproduce this dataset, or to create another one for a different language.

## Dataset Structure

We kept the same structure as the original, so each record looks like this:

```
{
  'timestamp': '2020-02-22T22:24:31Z',
  'url': 'https://url here',
  'text': 'the content'
}
```
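
Since each shard is gzipped JSON lines in this schema, you can also read one directly; the shard name below is illustrative and assumes you pulled the files as shown in the cleaning procedure.

```
# A minimal sketch of reading one shard by hand; the file name is a
# placeholder for whichever c4-pt shard you actually downloaded.
import gzip
import json

with gzip.open("c4-pt.tfrecord-00000-of-01024.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["timestamp"], record["url"])
        print(record["text"][:100])  # first 100 characters of the page text
        break
```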

## Considerations for Using the Data

We do not perform any procedure to remove bad words, vulgarity, or profanity. It must be considered that a model trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting in the context of studying data biases and how to limit their impact.
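
If you need such filtering, one possible starting point is sketched below; the word list is a placeholder you must supply yourself, and "user/c4-pt-cleaned" again stands in for this dataset's repository id (assumes a recent version of the datasets library).

```
# A minimal sketch of profanity filtering; the word list and repository id
# are placeholders, not part of this dataset.
from datasets import load_dataset

bad_words = {"example_bad_word"}  # supply your own list

ds = load_dataset("user/c4-pt-cleaned", split="train", streaming=True)
filtered = ds.filter(
    lambda record: not any(w in record["text"].lower() for w in bad_words)
)
```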