---
dataset_info:
  features:
  - name: 'Unnamed: 0'
    dtype: int64
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: link
    dtype: string
  - name: token_count
    dtype: int64
  - name: section
    dtype: string
  - name: domain
    dtype: string
  - name: score
    dtype: float64
  - name: int_score
    dtype: int64
  - name: language
    dtype: string
  - name: language_probability
    dtype: float64
  splits:
  - name: train
    num_bytes: 1106487193
    num_examples: 270137
  download_size: 653993961
  dataset_size: 1106487193
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- text-generation
language:
- en
- yo
- ha
- ig
tags:
- finance
- legal
- music
- art
- medical
- chemistry
- biology
size_categories:
- 100K<n<1M
---
# Naijaweb Dataset
Naijaweb is a dataset of over 270,000 documents, totaling approximately 230 million GPT-2 tokens. The data was scraped from web pages popular among Nigerians, providing a rich resource for modeling Nigerian linguistic and cultural contexts.
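The token totals above (and the per-document `token_count` field) refer to GPT-2 tokens. As a rough illustration, such a count can be reproduced with the openly available `tiktoken` tokenizer; this is only a sketch and not necessarily the tooling used to build the dataset.

```python
import tiktoken

# GPT-2 byte-pair encoding; len(tokens) gives a per-document token count.
enc = tiktoken.get_encoding("gpt2")

def count_gpt2_tokens(text: str) -> int:
    return len(enc.encode(text))

print(count_gpt2_tokens("How una see this new policy on fuel subsidy?"))
```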
## Dataset Summary

| Feature | Data Type |
|---|---|
| Unnamed: 0 | int64 |
| text | string |
| id | string |
| link | string |
| token_count | int64 |
| section | string |
| domain | string |
| score | float64 |
| int_score | int64 |
| language | string |
| language_probability | float64 |
## Data Collection
The dataset was collected from Nairaland.com: 1,795,908 unique posts were extracted from 19 sections of the site, along with 1,289,195 outbound links found in those posts. The main content of the linked pages was extracted using Trafilatura, a popular library for web scraping and content extraction.
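As a rough sketch of that extraction step (assuming Trafilatura's standard `fetch_url`/`extract` API; the URL below is only a placeholder, not a real source page):

```python
import trafilatura

# Placeholder outbound URL; the real pipeline processed roughly 1.3M links.
url = "https://example.com/some-article"

downloaded = trafilatura.fetch_url(url)                         # fetch raw HTML
text = trafilatura.extract(downloaded) if downloaded else None  # main text or None
if text:
    print(text[:300])
```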
## Data Cleaning
The cleaning process was carried out with datatrove, the same library used to clean the FineWeb dataset, which is known for its high quality. Cleaning involved multiple stages of deduplication, filtering, and normalization so that the dataset's quality matches that of other high-performing web datasets. The main steps are listed below, followed by a simplified illustration.
Data Cleaning Procedure:
- Step 1: Remove duplicate posts and links.
- Step 2: Filter out non-Nigerian context posts based on domain analysis.
- Step 3: Normalize textual content, removing HTML artifacts and irrelevant metadata.
- Step 4: Language detection and correction based on predicted language probabilities.
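The sketch below is a minimal, plain-Python illustration of these steps, not the actual datatrove pipeline; field names follow the dataset schema, and the probability threshold is an assumption.

```python
import re

SEEN_LINKS: set[str] = set()

def clean_record(record: dict, min_lang_prob: float = 0.65) -> dict | None:
    """Illustrative cleaning pass mirroring the steps above; returns None to drop a record."""
    # Step 1: deduplicate by source link.
    link = record.get("link", "")
    if link in SEEN_LINKS:
        return None
    SEEN_LINKS.add(link)

    # Step 2 (domain-based filtering) would need a curated domain list and is omitted here.

    # Step 3: strip leftover HTML tags and collapse whitespace.
    text = re.sub(r"<[^>]+>", " ", record.get("text", ""))
    text = re.sub(r"\s+", " ", text).strip()
    if not text:
        return None

    # Step 4: keep only records whose detected language is confident enough.
    if record.get("language_probability", 0.0) < min_lang_prob:
        return None

    record["text"] = text
    return record
```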
## Example Entry
Each data point contains the following fields:
- `Unnamed: 0`: an index column
- `text`: the main body of the post or web page
- `id`: unique identifier for each document
- `link`: the original URL of the source content
- `token_count`: the number of tokens in the `text` field
- `section`: the Nairaland section where the post was found
- `domain`: the domain of the outbound link
- `score`: a float representing the content's relevance or quality
- `int_score`: an integer representation of `score`
- `language`: detected language of the text (e.g., `en`, `yo`, `ha`, `ig`)
- `language_probability`: the confidence score of the language detection algorithm
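For orientation, an illustrative record with these fields might look as follows; all values here are placeholders, not taken from the dataset.

```python
# Placeholder values for illustration only; field names follow the dataset schema.
example_record = {
    "Unnamed: 0": 0,
    "text": "Sample post text scraped from a Nairaland thread or an outbound page...",
    "id": "doc-000000",
    "link": "https://www.nairaland.com/some-thread",
    "token_count": 128,
    "section": "Politics",
    "domain": "nairaland.com",
    "score": 2.5,
    "int_score": 3,
    "language": "en",
    "language_probability": 0.98,
}
```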
## Data Splits
- Training split: 270,137 examples (~620 MB download size)
## How to Load the Dataset
To load the dataset using Hugging Face's `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("saheedniyi/naijaweb")
```
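A couple of common follow-up patterns, assuming the standard `datasets` API: streaming to avoid downloading the full archive, and filtering on the `language` column described above.

```python
from itertools import islice
from datasets import load_dataset

# Stream records without downloading the full ~620 MB archive.
streamed = load_dataset("saheedniyi/naijaweb", split="train", streaming=True)
for row in islice(streamed, 3):
    print(row["section"], row["language"], row["token_count"])

# Or load everything and keep, for example, only the Yoruba-language subset.
dataset = load_dataset("saheedniyi/naijaweb", split="train")
yoruba = dataset.filter(lambda x: x["language"] == "yo")
print(len(yoruba))
```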
## Social Impact
Naijaweb was created to make Nigerian web data more accessible, providing researchers and developers with a dataset rich in Nigerian contexts across various domains such as Politics, Education, Business, and Health.
## Bias and Ethical Considerations
Since the data is collected from publicly available web pages, inherent biases present in the sources may be reflected in the dataset. These biases can manifest in areas such as language, ideology, or topic representation. Users should be mindful of these potential biases when developing models, especially for sensitive areas like legal or medical information.
## Sections of the Dataset
The dataset comprises content from 19 different sections of Nairaland.com, covering topics such as Politics, Education, Business, and Health.