saheedniyi committed on
Commit 1ff7b8a
Parent: ea27931

Update README.md

Files changed (1): README.md (+41, -1)
README.md CHANGED
@@ -52,4 +52,44 @@ tags:
 - biology
 size_categories:
 - 100K<n<1M
----
+---
+
+# 🇳🇬 Naijaweb
+
+Naijaweb is a dataset of over 270,000 documents (about 230 million GPT-2 tokens) web-scraped from pages of interest to Nigerians.
+
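+As a rough illustration of how a GPT-2 token count like this can be reproduced, here is a sketch using the `tiktoken` library (the library choice is an assumption; the published count may have been computed differently):
+
+```python
+# Count GPT-2 tokens in a document with tiktoken (illustrative only).
+import tiktoken
+
+enc = tiktoken.get_encoding("gpt2")  # GPT-2 BPE vocabulary
+doc = "Naijaweb is a dataset of web pages of interest to Nigerians."
+print(len(enc.encode(doc)))          # number of GPT-2 tokens in this document
+```
+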
+## Data Collection
+The data was collected by extracting 1,795,908 unique posts from 19 sections of Nairaland.com; about 1,289,195 outbound links were then extracted from those posts.
+The linked web pages were then downloaded and their main text extracted using Trafilatura.
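+
+Extraction along these lines can be reproduced with Trafilatura's Python API. A minimal sketch, assuming a placeholder URL:
+
+```python
+# Minimal sketch: download a page and extract its main text with Trafilatura.
+import trafilatura
+
+url = "https://example.com/some-outbound-link"  # placeholder URL
+html = trafilatura.fetch_url(url)   # fetch the raw page
+text = trafilatura.extract(html)    # strip boilerplate, keep the main content
+if text:
+    print(text[:500])
+```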
+
+## Data Cleaning
+The data was then cleaned using datatrove, the same library used to clean the recently released and high-performing FineWeb-Edu dataset.
+It is therefore not far-fetched to say this dataset is of comparable quality to FineWeb.
+Dataset cleaning procedure:
+
+#### Flowchart of how it was cleaned
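+
+As a rough sketch of what such a cleaning pass looks like, the pipeline below mirrors the published FineWeb filters in datatrove. It is an assumption, not the exact Naijaweb configuration: class names and import paths can vary across datatrove versions, and the input/output folders are placeholders.
+
+```python
+# Hypothetical FineWeb-style cleaning pipeline with datatrove.
+from datatrove.executor import LocalPipelineExecutor
+from datatrove.pipeline.readers import JsonlReader
+from datatrove.pipeline.writers import JsonlWriter
+from datatrove.pipeline.filters import (
+    C4QualityFilter,
+    FineWebQualityFilter,
+    GopherQualityFilter,
+    GopherRepetitionFilter,
+    LanguageFilter,
+)
+
+executor = LocalPipelineExecutor(
+    pipeline=[
+        JsonlReader("data/extracted"),     # placeholder input folder
+        LanguageFilter(languages=["en"]),  # keep English documents
+        GopherRepetitionFilter(),          # drop highly repetitive pages
+        GopherQualityFilter(),             # Gopher heuristic quality rules
+        C4QualityFilter(),                 # C4-style line/document filters
+        FineWebQualityFilter(),            # additional FineWeb heuristics
+        JsonlWriter("data/cleaned"),       # placeholder output folder
+    ],
+    tasks=4,
+)
+
+if __name__ == "__main__":
+    executor.run()
+```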
+
+An example of a typical row of the dataset looks like:
+```
+```
+
+### Data fields
+
+## How to load the dataset
+
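+The dataset can be loaded with the 🤗 `datasets` library. A minimal sketch; the repo id `saheedniyi/naijaweb` is assumed from the commit author's namespace:
+
+```python
+from datasets import load_dataset
+
+# Repo id assumed from the author's namespace; adjust if the dataset lives elsewhere.
+ds = load_dataset("saheedniyi/naijaweb", split="train")
+print(ds)     # dataset summary: features and number of rows
+print(ds[0])  # first document
+```
+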
+## Social Impact of Dataset
+With the release of this dataset, we aim to make model training more accessible to the machine learning community at large.
+
+While multiple open-weight models with strong performance have been publicly released in the past, these releases are often not accompanied by the corresponding training dataset. This is unfortunate, as dataset characteristics have been shown to have a very large impact on a model's performance. Since a high-quality training dataset is a fundamental requirement for training an LLM that excels at downstream tasks, with Naijaweb we (a) make the dataset creation process more transparent by sharing our entire processing setup, including the codebase used, and (b) help alleviate the costs of dataset curation, in both time and compute, for model creators by publicly releasing the dataset to the community.
+
+
+### Discussion of Biases
+Efforts were made to minimize the amount of NSFW and toxic content in the dataset by filtering at the URL level. However, a significant number of documents in the final dataset could still be considered toxic or contain harmful content. As Naijaweb was sourced from the web at large, any harmful biases typically present there may be reproduced in our dataset.
+
+We deliberately avoided machine-learning filtering methods that define text quality based on similarity to a "gold" source such as Wikipedia, as well as toxicity classifiers, since these methods have been shown to disproportionately remove content in specific dialects and to over-classify text related to specific social identities as toxic, respectively.
+
+### Sections of the dataset
+
+### Citation