olemeyer committed
Commit 90f09c7 · 1 Parent(s): 2c390aa

Update README.md

Files changed (1): README.md (+23 -2)

README.md CHANGED
@@ -19,6 +19,27 @@ configs:
  - split: train
  path: data/train-*
  ---
- # Dataset Card for "oscar_eu_6x3M"
 
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ # OSCAR EU 6x3M Dataset
 
+ ## Overview
+ The OSCAR EU 6x3M dataset is a curated subset of the larger OSCAR corpus, focused on six major European languages: English (en), German (de), Spanish (es), Italian (it), French (fr), and Russian (ru). The "6x3M" in the name indicates that each language is represented by approximately 3 million randomly sampled documents, yielding a balanced and diverse multilingual resource.
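The balanced, per-language sampling described above can be sketched as follows. This is a minimal illustration of the "6x3M" idea only; `balanced_sample` and the toy counts are assumptions, not the actual OSCAR extraction pipeline.

```python
import random

def balanced_sample(docs_by_lang, per_lang, seed=42):
    """Draw the same number of random documents for each language.

    A sketch of the "6x3M" balancing idea; the real pipeline is more
    involved. For the full dataset, per_lang would be roughly 3_000_000.
    """
    rng = random.Random(seed)
    return {
        # Sample without replacement; cap at the available document count.
        lang: rng.sample(docs, min(per_lang, len(docs)))
        for lang, docs in docs_by_lang.items()
    }
```

Fixing the seed makes the draw reproducible, which matters if the subset is ever regenerated.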
+
+ ## Dataset Description
+ - **Languages Included**: English, German, Spanish, Italian, French, Russian
+ - **Number of Documents**: Approximately 18 million (3 million per language)
+ - **Data Source**: The dataset is derived from the OSCAR corpus, a large multilingual corpus created from the Common Crawl.
+ - **Data Format**: [Describe the format, e.g., JSON, CSV, etc.]
+
+ ## Use Cases
+ This dataset is suited to a variety of natural language processing applications, including but not limited to:
+ - Multilingual language modeling
+ - Cross-linguistic transfer learning
+ - Language identification and classification
+ - Comparative linguistic studies
+
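As a toy illustration of the language-identification use case listed above, a document's language can be guessed from its overlap with small per-language stop-word lists. The word lists below are illustrative assumptions, not part of the dataset; real systems use trained classifiers.

```python
# Tiny illustrative stop-word lists for the six dataset languages
# (assumed for this sketch; not derived from the dataset itself).
STOPWORDS = {
    "en": {"the", "and", "of", "is"},
    "de": {"und", "der", "die", "ist"},
    "es": {"el", "la", "que", "y"},
    "fr": {"le", "les", "et", "est"},
    "it": {"il", "che", "di", "non"},
    "ru": {"и", "в", "не", "это"},
}

def guess_language(text: str) -> str:
    """Return the language code whose stop-word list overlaps the text most."""
    tokens = set(text.lower().split())
    return max(STOPWORDS, key=lambda lang: len(STOPWORDS[lang] & tokens))
```

This naive approach only distinguishes languages with disjoint stop-word lists; it is meant to show the shape of the task, not to compete with proper classifiers.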
+ ## Accessing the Dataset
+ The dataset is available through the Hugging Face Datasets library. You can load it with the following code snippet:
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("oscar_eu_6x3M")
+ ```