holylovenia committed
Commit 15b44fd
1 Parent(s): 2053270

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
- self-supervised-pretraining
---

A thoroughly cleaned version of the Indonesian split of mC4, the multilingual colossal, cleaned version of Common Crawl's web crawl corpus. This portion contains the Indonesian-language content extracted and processed from the larger mC4 dataset; the extraction and cleaning were carried out by AllenAI and resulted in a curated collection of Indonesian-language data. For more information about the original mC4 dataset and its preparation, see https://huggingface.co/datasets/allenai/c4.


## Languages
@@ -20,25 +20,25 @@ ind
## Supported Tasks

Self Supervised Pretraining

## Dataset Usage
### Using `datasets` library
```
from datasets import load_dataset

dset = load_dataset("SEACrowd/mc4_indo", trust_remote_code=True)
```
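For a quick look at the data without materializing the full corpus on disk, the same call can be combined with the `datasets` streaming mode. The sketch below is only illustrative: the `train` split name and the mC4-style `text` field are assumptions about this mirror's layout, not facts stated in this README.
```
from itertools import islice

from datasets import load_dataset

# Stream the corpus instead of downloading it in full.
# ASSUMPTION: a "train" split exists, as in the upstream mC4 release.
stream = load_dataset(
    "SEACrowd/mc4_indo",
    split="train",
    streaming=True,
    trust_remote_code=True,
)

# Peek at a few records to confirm the available fields before training.
for example in islice(stream, 3):
    print(sorted(example.keys()))
    # ASSUMPTION: mC4-style records keep the raw document under "text".
    if "text" in example:
        print(example["text"][:200])
```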
### Using `seacrowd` library
```
import seacrowd as sc

# Load the dataset using the default config
dset = sc.load_dataset("mc4_indo", schema="seacrowd")
# Check all available subsets (config names) of the dataset
print(sc.available_config_names("mc4_indo"))
# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
```
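To sanity-check what `sc.load_dataset` returns before pretraining, the splits and columns can be inspected directly. This is a minimal sketch that assumes the returned object behaves like a `datasets.DatasetDict` and that the self-supervised-pretraining schema exposes `id` and `text` columns; neither assumption is confirmed by this README.
```
import seacrowd as sc

# Load with the seacrowd schema, then look at what came back.
dset = sc.load_dataset("mc4_indo", schema="seacrowd")

# ASSUMPTION: dset behaves like a datasets.DatasetDict
# (split names as keys, datasets.Dataset objects as values).
for split_name, split in dset.items():
    print(split_name, split.num_rows, split.column_names)

# ASSUMPTION: the self-supervised-pretraining schema carries "id" and "text".
first_split = next(iter(dset.values()))
first_example = first_split[0]
print(first_example.get("id"), str(first_example.get("text"))[:200])
```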

More details on how to use the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).


## Dataset Homepage
 