---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: dump
    dtype: string
  - name: url
    dtype: string
  - name: date
    dtype: string
  - name: file_path
    dtype: string
  - name: language
    dtype: string
  - name: language_score
    dtype: float64
  - name: language_script
    dtype: string
  - name: minhash_cluster_size
    dtype: int64
  - name: top_langs
    dtype: string
  splits:
  - name: train
    num_bytes: 13252144546
    num_examples: 2293647
  download_size: 6366393037
  dataset_size: 13252144546
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
## Small Arabic FineWeb2 Sample Dataset
A small extract of the FineWeb2 [arb_Arab subset](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2/viewer/arb_Arab): about 2.3 million rows, compared to 58 million in the original.
I first filtered for items that are 95% or more Arabic (the `language_score` field alone is not reliable), then randomly sampled the 2.3M rows from the result; a sketch of that step is shown below.
See [this post](https://www.linkedin.com/posts/akhooli_a-small-arabic-fineweb2-sample-dataset-activity-7283099806003060736-g5cq) for more background.
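For reference, here is a minimal sketch of how such a filter-then-sample step could look. It assumes the 95% threshold is applied to the Arabic entry of the `top_langs` column and that `top_langs` parses as JSON with a key like `arb_Arab_score`; the field format, the key name, and the reservoir-sampling approach are assumptions for illustration, not the exact pipeline used to build this dataset.
```python
import json
import random

from datasets import load_dataset

# Stream the original arb_Arab subset (too large to load eagerly).
fw2 = load_dataset(
    "HuggingFaceFW/fineweb-2", name="arb_Arab", split="train", streaming=True
)

def mostly_arabic(example, threshold=0.95):
    # Assumption: top_langs is a JSON string such as {"arb_Arab_score": 0.97, ...}.
    scores = json.loads(example["top_langs"])
    return scores.get("arb_Arab_score", 0.0) >= threshold

filtered = fw2.filter(mostly_arabic)

# Uniform reservoir sample of 2.3M rows from the filtered stream
# (kept in memory here purely for illustration).
target = 2_300_000
reservoir = []
for i, row in enumerate(filtered):
    if i < target:
        reservoir.append(row)
    else:
        j = random.randint(0, i)
        if j < target:
            reservoir[j] = row
```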
Code to load the sample and print a random document:
```python
from datasets import load_dataset
from pprint import pprint
import random

# Load the 2.3M-row sample and print the text of one random document.
ds = load_dataset("akhooli/fineweb2_ar_24_sample")
max_n = len(ds["train"])
index = random.randrange(max_n)  # randrange avoids the off-by-one of randint(0, max_n)
pprint(ds["train"][index]["text"])
```