---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: dump
    dtype: string
  - name: url
    dtype: string
  - name: date
    dtype: string
  - name: file_path
    dtype: string
  - name: language
    dtype: string
  - name: language_score
    dtype: float64
  - name: language_script
    dtype: string
  - name: minhash_cluster_size
    dtype: int64
  - name: top_langs
    dtype: string
  splits:
  - name: train
    num_bytes: 13252144546
    num_examples: 2293647
  download_size: 6366393037
  dataset_size: 13252144546
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Small Arabic FineWeb2 Sample Dataset
A small extract of 2.3 million rows from the roughly 58 million rows in the original FineWeb2 `arb_Arab` subset.
First, I filtered for items whose content is 95% or more Arabic (the `language_score` field alone is not reliable),
then I randomly sampled the 2.3M rows from the result.
See this post
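The filtering step could be sketched as follows. This is a sketch, not the exact code used to build the dataset: it assumes the `top_langs` column is a JSON-encoded string mapping language labels to fractional scores, and the key name `arb_Arab_score` is an assumption.

```python
import json

def is_mostly_arabic(row, threshold=0.95):
    # Hypothetical helper: assumes top_langs is a JSON string mapping
    # language labels to fractional scores, e.g. '{"arb_Arab_score": 0.97}'.
    langs = json.loads(row["top_langs"])
    return langs.get("arb_Arab_score", 0.0) >= threshold

# Applied to the full subset, then sampled down to 2.3M rows (sketch):
# filtered = ds["train"].filter(is_mostly_arabic)
# sample = filtered.shuffle(seed=42).select(range(2_300_000))
```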
Code:

```python
# Load the sample and print the text of one randomly chosen row.
from datasets import load_dataset
from pprint import pprint
import random

ds = load_dataset("akhooli/fineweb2_ar_24_sample")
max_n = len(ds["train"])
index = random.randint(0, max_n - 1)  # randint is inclusive on both ends
pprint(ds["train"][index]["text"])
```