---
dataset_info:
  features:
    - name: 'Unnamed: 0'
      dtype: int64
    - name: text
      dtype: string
    - name: id
      dtype: string
    - name: link
      dtype: string
    - name: token_count
      dtype: int64
    - name: section
      dtype: string
    - name: domain
      dtype: string
    - name: score
      dtype: float64
    - name: int_score
      dtype: int64
    - name: language
      dtype: string
    - name: language_probability
      dtype: float64
  splits:
    - name: train
      num_bytes: 1106487193
      num_examples: 270137
  download_size: 653993961
  dataset_size: 1106487193
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: mit
task_categories:
  - text-generation
language:
  - en
  - yo
  - ha
  - ig
tags:
  - finance
  - legal
  - music
  - art
  - medical
  - chemistry
  - biology
size_categories:
  - 100K<n<1M
---

# 🇳🇬 Naijaweb

Naijaweb is a dataset of about 270,000 documents (roughly 230 million GPT-2 tokens), web-scraped from pages that Nigerians have shown interest in.

## Data Collection

The data was collected by extracting 1,795,908 unique posts from 19 sections of Nairaland.com, and about 1,289,195 outbound links were extracted from those posts. The content of the linked web pages was then extracted using Trafilatura.
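As a rough illustration of this extraction step, the snippet below fetches a single page and pulls out its main text with Trafilatura (the URL is a placeholder; the actual crawling setup used for Naijaweb is not documented here):

```python
import trafilatura

# Placeholder URL standing in for one of the ~1.3M outbound links.
url = "https://example.com/some-article"

# Download the raw HTML, then extract the main article text,
# discarding navigation, ads, and other page boilerplate.
downloaded = trafilatura.fetch_url(url)
if downloaded is not None:
    text = trafilatura.extract(downloaded)
    print(text)
```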

## Data Cleaning

The data was then cleaned using datatrove, the same library used to clean the recently released, high-performing FineWeb-Edu dataset, so it isn't far-fetched to say this dataset is of comparable quality to FineWeb.

Dataset cleaning procedure:

*Flowchart of the cleaning pipeline.*
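The exact filter configuration is not spelled out here, but a minimal datatrove pipeline in the spirit of the FineWeb setup might look like the sketch below (the paths, task count, and the particular choice of filters are assumptions for illustration):

```python
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import JsonlReader
from datatrove.pipeline.filters import (
    URLFilter,
    GopherRepetitionFilter,
    GopherQualityFilter,
    C4QualityFilter,
)
from datatrove.pipeline.writers import JsonlWriter

executor = LocalPipelineExecutor(
    pipeline=[
        # Assumed input: one JSONL document per extracted web page.
        JsonlReader("extracted/"),
        # URL-level blocklist filtering (adult/NSFW domains).
        URLFilter(),
        # Heuristic quality filters in the Gopher/C4 tradition,
        # as used by the FineWeb pipeline.
        GopherRepetitionFilter(),
        GopherQualityFilter(),
        C4QualityFilter(),
        JsonlWriter("cleaned/"),
    ],
    tasks=4,
)
executor.run()
```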

An example of a typical row of the dataset looks like:
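The values below are invented purely for illustration; only the schema, taken from the metadata above, is real:

```python
{
    "Unnamed: 0": 0,
    "text": "Lagos State Government has announced ...",
    "id": "<urn:uuid:...>",
    "link": "https://example.com/some-article",
    "token_count": 512,
    "section": "Politics",
    "domain": "example.com",
    "score": 3.12,
    "int_score": 3,
    "language": "en",
    "language_probability": 0.97,
}
```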


## Data Fields

Each row has the following fields (dtypes taken from the metadata above; the brief descriptions are inferred from the field names):

- `Unnamed: 0` (int64): leftover pandas index column.
- `text` (string): the extracted page text.
- `id` (string): unique document identifier.
- `link` (string): source URL of the page.
- `token_count` (int64): number of GPT-2 tokens in `text`.
- `section` (string): the Nairaland section the originating post came from.
- `domain` (string): domain of the source URL.
- `score` (float64): quality score assigned during cleaning.
- `int_score` (int64): integer-rounded quality score.
- `language` (string): detected language code.
- `language_probability` (float64): confidence of the language detection.

## How to Load the Dataset
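Assuming the dataset lives at `saheedniyi/naijaweb` on the Hugging Face Hub, it can be loaded with the `datasets` library:

```python
from datasets import load_dataset

# Load the single "train" split defined in the metadata above.
dataset = load_dataset("saheedniyi/naijaweb", split="train")

print(dataset)             # features and number of rows (270,137)
print(dataset[0]["text"])  # text of the first document
```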

## Social Impact of Dataset

With the release of this dataset we aim to make model training more accessible to the machine learning community at large.

While multiple open-weights models with strong performance have been publicly released in the past, these releases are often not accompanied by the corresponding training dataset. This is unfortunate, as a dataset's specific characteristics have been shown to have a very large impact on model performance. Since the creation of a high-quality training dataset is a fundamental requirement for training an LLM that excels at downstream tasks, with Naijaweb we (a) make the dataset creation process more transparent by sharing our processing setup, including the codebase used, and (b) help alleviate the costs of dataset curation, in both time and compute, for model creators by publicly releasing the dataset to the community.

## Discussion of Biases

Efforts were made to minimize the amount of NSFW and toxic content in the dataset by filtering at the URL level. However, a significant number of documents that could be considered toxic or harmful remain in the final dataset. As Naijaweb was sourced from the web at large, any harmful biases typically present on the web may be reproduced in our dataset.

We deliberately avoided machine learning filtering methods that define text quality by similarity to a "gold" source such as Wikipedia, as well as toxicity classifiers, because these methods have been shown to disproportionately remove content in specific dialects and to over-classify text related to specific social identities as toxic, respectively.

## Sections of the Dataset

## Citation