Pclanglais committed on
Commit
871bbe9
1 Parent(s): 53cec6a

Update README.md

Files changed (1)
  1. README.md +8 -9
README.md CHANGED
@@ -23,7 +23,7 @@ tags:
 
 # Common Corpus
 
- Common Corpus is the largest open and permissible licensed text dataset, comprising over 2 trillion tokens. It is a diverse dataset, consisting of books, newspapers, scientific articles, government and legal documents, code, and more.
+ Common Corpus is the largest open and permissively licensed text dataset, comprising over 2 trillion tokens (2,003,039,184,047 tokens). It is a diverse dataset, consisting of books, newspapers, scientific articles, government and legal documents, code, and more.
 
 Common Corpus differs from existing open datasets in that it is:
 * **Truly Open**: contains only data that is permissively licensed
@@ -33,7 +33,12 @@ Common Corpus differs from existing open datasets in that it is:
 
 # About Common Corpus
 
- ## Sub-corpora
+ Common Corpus is made of five carefully curated collections:
+ * **OpenCulture**: our largest collection at 926,541,096,243 tokens, featuring public domain books, newspapers, and Wikisource content. We developed tools such as OCRonos-Vintage to correct historical digitization errors, and apply toxicity filtering to ensure content meets modern ethical standards.
+ * **OpenGovernment**: 387,965,738,992 tokens of financial and legal documents, including Finance Commons (from sources such as the SEC and the WTO) and Legal Commons (including Europarl and the Caselaw Access Project), providing enterprise-grade training data from regulatory and administrative sources.
+ * **OpenSource**: 334,658,896,533 tokens of high-quality open source code from GitHub, filtered using ArmoRM so that only the top 80% of submissions by quality rating are included.
+ * **OpenScience**: 221,798,136,564 tokens of academic content from OpenAlex and other open science repositories, processed using vision-language models to preserve document structure and formatting.
+ * **OpenWeb**: 132,075,315,715 tokens of web text from Wikipedia, YouTube Commons, Stack Exchange, and other sources available under permissive licenses.
 
 | Collection | Domain | Sources |
 |----------------|--------------------------|-------------------------------------------------------------------------------------------|
@@ -43,11 +48,7 @@ Common Corpus differs from existing open datasets in that it is:
 | OpenWeb | web text | [YouTube Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons), Stack Exchange |
 | OpenSource | code | GitHub |
 
- ## Summary Statistics
-
- ### By Sub-corpus
-
- ### By Language
+ A comprehensive technical report detailing our methodologies and data sources will accompany the release, ensuring full transparency and reproducibility. We will release the individual sub-corpora in the coming weeks to allow more fine-grained auditing and to expand possible uses.
 
 ## Dataset Structure
 
@@ -91,8 +92,6 @@ data = load_dataset('PleIAs/common_corpus')
 
 # Acknowledgements
 
- The corpus was stored and processed with the generous support of Scaleway. It was built up with the support and concerted efforts of the state start-up LANGU:IA (start-up d’Etat), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC). This dataset was also made in partnership with Wikimedia Enterprise.
-
 The corpus was stored and processed with the generous support of Jean Zay (Eviden, Idris), Nvidia Inception program, Nebius AI, Tracto AI and Mozilla. It was built up with the support and concerted efforts of the state start-up LANGU:IA (start-up d’Etat), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC). This dataset was also made in partnership with Wikimedia Enterprise for the Wikipedia part. The collection of the corpus has been largely facilitated thanks to the open science LLM community insights, cooperation and support (Eleuther AI, Allen AI, HuggingFace…).
 
 <div style="text-align: center;">
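The context line of the last hunk references the README's loading snippet, `data = load_dataset('PleIAs/common_corpus')`. Below is a minimal sketch of sampling a few records without downloading the full 2-trillion-token corpus, assuming the standard Hugging Face `datasets` streaming API; the `train` split name and the `text` field are assumptions, not confirmed by this diff.

```python
# Minimal sketch: stream a few Common Corpus records instead of downloading
# the whole ~2T-token dataset. The "train" split and the "text" field are
# assumptions, not confirmed by the diff above.
from datasets import load_dataset

data = load_dataset("PleIAs/common_corpus", split="train", streaming=True)

for i, record in enumerate(data):
    print(record.get("text", "")[:200])  # preview the first 200 characters
    if i >= 2:  # stop after three records
        break
```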
 
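The OpenSource entry above keeps only the top 80% of code submissions by ArmoRM quality rating. The sketch below illustrates that kind of percentile cut-off in general terms only; `score_quality` is a hypothetical stand-in for a reward model, and none of this is taken from the actual Common Corpus pipeline.

```python
# Illustrative percentile filter: keep the top 80% of documents by a quality
# score. `score_quality` is a hypothetical stand-in for a reward model such as
# ArmoRM; the real Common Corpus filtering pipeline is not shown in this diff.
from typing import Callable, List

import numpy as np


def filter_top_fraction(
    documents: List[str],
    score_quality: Callable[[str], float],
    keep_fraction: float = 0.8,
) -> List[str]:
    """Keep the best-scoring `keep_fraction` of documents (top 80% by default)."""
    scores = np.array([score_quality(doc) for doc in documents])
    # Dropping the bottom 20% is the same as keeping everything at or above
    # the 20th percentile of the score distribution.
    threshold = np.percentile(scores, 100 * (1 - keep_fraction))
    return [doc for doc, score in zip(documents, scores) if score >= threshold]


if __name__ == "__main__":
    toy_docs = [f"def f_{i}(): return {i}" for i in range(10)]
    kept = filter_top_fraction(toy_docs, score_quality=len)  # toy scorer: length
    print(f"kept {len(kept)} of {len(toy_docs)} documents")
```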