rdemorais committed
Commit
f8eb2f1
1 Parent(s): 6efd2b4

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -17,10 +17,10 @@ This is a cleaned version of the AllenAI mC4 PtBR section. The original dataset can
 
 We applied the same cleaning procedure as explained here: https://gitlab.com/yhavinga/c4nlpreproc.git
 
-The repository offers two strategy. The first one, found in the main.py file, uses pyspark to create a dataframe that can do both clean the text, and create a pseudo mix on the entire dataset.
-We found this strategy clever, but it is time/resource comsuming.
+The repository offers two strategies. The first one, found in the main.py file, uses pyspark to create a dataframe that can both clean the text and create a pseudo mix of the entire dataset.
+We found this strategy clever, but it is time- and resource-consuming.
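+
+For illustration, here is a minimal sketch of that pyspark strategy (the cleaning rule, column name, and output path below are hypothetical; the actual rules live in the repository's main.py):
+
+```
+from pyspark.sql import SparkSession
+from pyspark.sql import functions as F
+
+spark = SparkSession.builder.appName("c4-pt-clean").getOrCreate()
+
+# Spark reads the gzipped JSON-lines shards directly.
+df = spark.read.json("multilingual/c4-pt.*.json.gz")
+
+# Hypothetical cleaning rule: drop very short documents.
+cleaned = df.filter(F.length(F.col("text")) > 200)
+
+# "Pseudo mix": shuffle documents across the entire dataset.
+mixed = cleaned.orderBy(F.rand(seed=42))
+
+mixed.write.json("c4-pt-cleaned", compression="gzip")  # hypothetical output path
+```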
 
-TO overcome this we jumped into the second approach consisting in leverage the singlefile.py script and parallel all together.
+To overcome this, we turned to the second approach, which leverages the singlefile.py script together with GNU parallel.
 
 We did the following:
 
@@ -33,11 +33,11 @@ git lfs pull --include "multilingual/c4-pt.*.json.gz"
 ls c4-pt* | parallel --gnu --jobs 96 --progress python ~/c4nlpreproc/singlefile.py {}
 ```
 
-Be advice you should install parallel first if you want to reproduce this dataset, or to create another to a different language.
+Be advised that you should install GNU parallel first if you want to reproduce this dataset, or to create one for a different language.
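+(GNU parallel is packaged for most systems; on Debian/Ubuntu, for instance, `sudo apt-get install parallel` is enough.)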
 
 ## Dataset Structure
 
-We kept the same structure from the original, so it is like this:
+We kept the same structure as the original, so it looks like this:
 
 ```
 {
@@ -49,5 +49,5 @@ We kept the same structure from the original, so it is like this:
 
 ## Considerations for Using the Data
 
-We do not perform any procedure to remove bad words, vulgarity or profanity. it must be considered that model trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting in the context of studying data biases and how to limit their impacts.
+We did not perform any procedure to remove bad words, vulgarity, or profanity. It must be considered that a model trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting in the context of studying data biases and how to limit their impact.
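+
+Downstream users who do want such filtering can apply their own word list before training. A minimal sketch (the word list and shard name are placeholders; real shards follow the c4-pt.*.json.gz pattern):
+
+```
+import gzip
+import json
+
+BAD_WORDS = {"exampleword1", "exampleword2"}  # placeholder list
+
+def is_clean(doc):
+    """Return True if the document text contains none of the listed words."""
+    text = doc["text"].lower()
+    return not any(word in text for word in BAD_WORDS)
+
+kept = []
+with gzip.open("c4-pt.cleaned.json.gz", "rt", encoding="utf-8") as f:  # placeholder shard name
+    for line in f:
+        doc = json.loads(line)
+        if is_clean(doc):
+            kept.append(doc)
+```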
 
 