---
license: mit
language:
- en
- fr
- de
- it
- pt
- nl
- es
pretty_name: Common Corpus
size_categories:
- n>1T
task_categories:
- text-generation
tags:
- legal
- finance
- literature
- science
- code
---

# Common Corpus

Common Corpus is the largest open, permissively licensed text dataset, comprising over 2 trillion tokens. It is a diverse dataset, consisting of books, newspapers, scientific articles, government and legal documents, code, and more.

Common Corpus differs from existing open datasets in that it is:
* **Truly Open**: contains only data that is permissively licensed
* **Multilingual**: mostly representing English and French data, but containing data for XX languages
* **Diverse**: consisting of scientific articles, government and legal documents, code, and cultural heritage data, including books and newspapers
* **Extensively Curated**: spelling and formatting have been corrected in digitized texts, harmful and toxic content has been removed, and content with low educational value has also been removed

# About Common Corpus

## Sub-corpora

| Collection | Domain | Sources |
|----------------|--------------------------|-------------------------------------------------------------------------------------------|
| OpenGovernment | legal and administrative | [Finance Commons](https://huggingface.co/collections/PleIAs/finance-commons-66925e1095c7fa6e6828e26c) (e.g. SEC, WTO) and Legal Commons (e.g. Europarl, Caselaw Access Project) |
| OpenCulture | cultural heritage | public domain books and newspapers, Wikisource |
| OpenScience | academic | OpenAlex, French theses |
| OpenWeb | web text | [YouTube Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons), Stack Exchange |
| OpenSource | code | GitHub |

## Summary Statistics

### By Sub-corpus

### By Language

## Dataset Structure

<details>
<summary>Data Fields</summary>

* identifier: unique text identifier
* text: post-processed text
* char_count: number of UTF-8 characters in text
* file_name: original file path, organized by collection
* set_id: set id (1-10)
* subset_id: subset id (1-100)

</details>
<br />
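For illustration, a record with the fields above might look like the following sketch. Every value here is invented for demonstration and is not drawn from the dataset:

```python
# Illustrative record matching the fields above; all values are
# hypothetical and do not come from the actual dataset.
record = {
    "identifier": "example-000001",                # unique text identifier
    "text": "Example post-processed text.",        # post-processed text
    "char_count": 28,                              # number of UTF-8 characters in text
    "file_name": "OpenCulture/books/example.txt",  # hypothetical path, organized by collection
    "set_id": 1,                                   # set id (1-10)
    "subset_id": 42,                               # subset id (1-100)
}

# For ASCII text, char_count equals the Python string length.
assert record["char_count"] == len(record["text"])
```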

# How to Use

## Considerations for Using the Data

All data in Common Corpus are permissively licensed and may be used for both commercial and non-commercial purposes.

The dataset is multilingual. The language of each text is included in the metadata, so the data can be filtered by language. Additionally, some of the texts are historical. The year each text was written is also included in the metadata, so it is possible to construct a dataset with a custom date cutoff if desired.
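A language and date filter can be sketched as a simple predicate. Note that the metadata field names `language` and `date` used below are assumptions for illustration; verify the actual column names against the dataset schema before use:

```python
# A minimal sketch of language and date filtering. The field names
# "language" and "date" are assumptions; check the dataset's actual
# schema before relying on them.

def keep(row, language="fr", cutoff_year=1900):
    """Keep rows in the target language written strictly before the cutoff."""
    return row.get("language") == language and int(row.get("date", 0)) < cutoff_year

# With the real dataset, the predicate could be applied lazily, e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("PleIAs/common_corpus", split="train", streaming=True)
#   ds = ds.filter(keep)
sample = [
    {"language": "fr", "date": 1850},
    {"language": "fr", "date": 1995},
    {"language": "en", "date": 1850},
]
filtered = [row for row in sample if keep(row)]
```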

### Discussion of Bias

Some of the dataset sources contain biased and toxic content, such as stereotypes about certain minoritized groups. We have removed texts which had high toxicity scores according to our toxicity classifier, [Celadon](https://huggingface.co/PleIAs/celadon), or which contain offensive terms and slurs. See our [preprint](https://arxiv.org/pdf/2410.22587) for more details.

### Personal and Sensitive Information

We have attempted to remove personally identifiable information (PII). We primarily use [Microsoft Presidio](https://microsoft.github.io/presidio/), but make additional modifications to account for language- and country-specific considerations, such as European phone number formats.

## Use Common Corpus

```python
from datasets import load_dataset

data = load_dataset('PleIAs/common_corpus')
```

# Acknowledgements

The corpus was stored and processed with the generous support of Scaleway. It was built with the support and concerted efforts of the state start-up LANGU:IA (start-up d’Etat), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC). This dataset was also made in partnership with Wikimedia Enterprise.

Corpus collection has been greatly facilitated by the insights, cooperation, and support of the open science LLM community (Occiglot, Eleuther AI, OpenLLM France, Allen AI).

<div style="text-align: center;">
  <img src="https://raw.githubusercontent.com/Pleias/logos/f117dee70b317bc664eac14ee70d7c0563101ed1/scaleway_logo.jpeg?token=GHSAT0AAAAAACZUTJMJVWBQHB5F4QQ7RPRIZZL3STA" style="width: 33%; margin: 0 auto; display: inline-block;"/>
  <img src="https://raw.githubusercontent.com/Pleias/logos/f117dee70b317bc664eac14ee70d7c0563101ed1/ministere_logo.png?token=GHSAT0AAAAAACZUTJMICO3MSWUJ43EQWG5QZZL3RFQ" style="width: 33%; margin: 0 auto; display: inline-block;"/>
  <img src="https://raw.githubusercontent.com/Pleias/logos/f117dee70b317bc664eac14ee70d7c0563101ed1/wikimedia_logo.png?token=GHSAT0AAAAAACZUTJMIIPAP4J7MKP6RSSWCZZL3TFA" style="width: 33%; margin: 0 auto; display: inline-block;"/>
  <img src="https://raw.githubusercontent.com/Pleias/logos/f117dee70b317bc664eac14ee70d7c0563101ed1/occiglot_logo.jpg?token=GHSAT0AAAAAACZUTJMI6JFGJ5XCQB2ORQJMZZL3S5A" style="width: 33%; margin: 0 auto; display: inline-block;"/>
</div>