Evaluation of Japanese Dataset in FineWeb2-HQ

#2 opened by hotchpotch

Hello. I'm the creator of fineweb-2-edu-japanese. I received a message from Martin Jaggi and decided to briefly evaluate the Japanese portion of the FineWeb2-HQ dataset.

For my assessment, I used my hotchpotch/fineweb-2-edu-japanese-classifier model. This classifier assigns each document an educational-quality score, and treating documents that score 2.5 or higher as "educationally suitable content" allowed me to filter the original 340M-document FineWeb-2 Japanese dataset down to 120M documents (approximately one third) for the fineweb-2-edu-japanese dataset.
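For reference, the scoring step looks roughly like the sketch below. This is a minimal sketch that assumes the classifier loads as a standard AutoModelForSequenceClassification with a single regression logit (the same pattern as the original FineWeb-Edu classifier) and that 512 tokens of context suffice; see the model card for the exact usage.

```python
# Minimal sketch of the scoring step. Assumptions: the classifier exposes a
# single regression logit via AutoModelForSequenceClassification, and
# max_length=512 is illustrative; check the model card for exact usage.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "hotchpotch/fineweb-2-edu-japanese-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

def edu_score(texts: list[str]) -> list[float]:
    """Return one educational-quality score per input document."""
    inputs = tokenizer(texts, truncation=True, padding=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits.squeeze(-1)
    return logits.float().tolist()

# Documents scoring >= 2.5 were kept for fineweb-2-edu-japanese.
docs = ["数学の基礎を学ぶための教材です。", "今日の晩ごはんの写真です。"]
kept = [d for d, s in zip(docs, edu_score(docs)) if s >= 2.5]
```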

I first analyzed the educational quality scores for the first 1 million samples of the original FineWeb-2:

Score Distribution (★ = approximately 10,000 samples):
--------------------------------------------------
 0.0- 0.4: ★★ (22231)
 0.4- 0.8: ★★ (21533)
 0.8- 1.2: ★★★★★★★★★★★★★ (128617)
 1.2- 1.6: ★★★★★★★★★★★★★★★★★★★★ (193715)
 1.6- 2.0: ★★★★★★★★★★★★★★★★★ (170045)
 2.0- 2.4: ★★★★★★★★★★★★★★ (143213)
 2.4- 2.8: ★★★★★★★★★★★★ (125416)
 2.8- 3.2: ★★★★★★★★★★★★ (125053)
 3.2- 3.6: ★★★★★★★ (70177)
 3.6- 4.0:  (   0)
--------------------------------------------------
score avg:  1.969705183639586
score median:  1.90625
score min:  -0.30078125
score max:  3.34375
score p10:  1.03125
score p90:  3.109375
score >= 2.5:  0.290381
---------------------------

These results show that 29% of the original dataset scored 2.5 or higher.
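For completeness, here is a minimal sketch of how this tabulation can be reproduced. The dataset id and config (HuggingFaceFW/fineweb-2, jpn_Jpan), the batch size, and the star-rendering rule (20 stars for the largest bin, negative scores clipped into the first bin) are my assumptions; edu_score() is the helper from the sketch above. Pointing the same loop at the FineWeb2-HQ Japanese split yields the second distribution below.

```python
# Sketch of the tabulation over the first 1M samples of FineWeb-2 Japanese.
# Assumptions: dataset id/config and the star-rendering rule; edu_score()
# comes from the previous sketch.
import numpy as np
from datasets import load_dataset

ds = load_dataset("HuggingFaceFW/fineweb-2", name="jpn_Jpan",
                  split="train", streaming=True)

scores, batch = [], []
for i, row in enumerate(ds):
    if i >= 1_000_000:
        break
    batch.append(row["text"])
    if len(batch) == 64:
        scores.extend(edu_score(batch))
        batch = []
if batch:
    scores.extend(edu_score(batch))

scores = np.array(scores)
edges = np.linspace(0.0, 4.0, 11)              # 10 bins of width 0.4
counts, _ = np.histogram(np.clip(scores, 0.0, 4.0), bins=edges)
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    stars = "★" * int(20 * n / counts.max())   # 20 stars for the largest bin
    print(f"{lo:4.1f}-{hi:4.1f}: {stars} ({n})")
print("score avg:   ", scores.mean())
print("score median:", np.median(scores))
print("score p10:   ", np.percentile(scores, 10))
print("score p90:   ", np.percentile(scores, 90))
print("score >= 2.5:", (scores >= 2.5).mean())
```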

Next, I evaluated the first 1 million samples of the FineWeb2-HQ Japanese dataset:

Score Distribution (★ = approximately 12,000 samples):
--------------------------------------------------
 0.0- 0.4:  (3219)
 0.4- 0.8:  (8752)
 0.8- 1.2: ★★ (32908)
 1.2- 1.6: ★★★★★ (68605)
 1.6- 2.0: ★★★★★★★★★ (109526)
 2.0- 2.4: ★★★★★★★★★★★★ (143390)
 2.4- 2.8: ★★★★★★★★★★★★★ (165319)
 2.8- 3.2: ★★★★★★★★★★★★★★★★★★★ (229894)
 3.2- 3.6: ★★★★★★★★★★★★★★★★★★★★ (238387)
 3.6- 4.0:  (   0)
--------------------------------------------------
score avg:  2.5599524654312136
score median:  2.734375
score min:  -0.337890625
score max:  3.34375
score p10:  1.53125
score p90:  3.3125
score >= 2.5:  0.597561
---------------------------

The results are impressive! FineWeb2-HQ shows a significant reduction in low-scoring content and more than doubles the proportion of high-quality educational content (score ≥ 2.5), from 29% to 60%.

This analysis confirms that FineWeb2-HQ achieved its goal of creating a higher-quality dataset: the average score rose from 1.97 to 2.56 and the median from 1.91 to 2.73, a substantial quality improvement.

One observation to consider: since FineWeb2-HQ keeps only the top 10% of documents (34M Japanese documents out of 340M), valuable educational content may not have been captured. Roughly 60% of those 34M documents (about 20M) are educationally valuable according to my classifier, whereas fineweb-2-edu-japanese contains 120M such documents, so around 100M educational documents may not be included in the HQ dataset.
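In rough numbers, all taken from the figures above:

```python
# Back-of-envelope estimate using the numbers above.
hq_docs   = 34_000_000        # Japanese documents in FineWeb2-HQ (top ~10% of 340M)
edu_rate  = 0.5976            # fraction scoring >= 2.5 in the HQ sample
edu_in_hq = hq_docs * edu_rate            # ~20.3M educational docs captured by HQ
edu_total = 120_000_000       # documents kept in fineweb-2-edu-japanese
missed    = edu_total - edu_in_hq         # ~99.7M, i.e. roughly 100M
print(f"captured: ~{edu_in_hq / 1e6:.0f}M, potentially missed: ~{missed / 1e6:.0f}M")
```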

I want to emphasize that the FineWeb2-HQ team has accomplished something remarkable: a compact, multilingual, high-quality dataset that enables 6x faster training while matching or exceeding benchmark performance.

At the same time, my analysis suggests that, at least for Japanese, there may be room to improve the classification models used for filtering. The current approach has clearly increased quality, but the roughly 100M valuable documents estimated above suggest opportunities to refine the filtering techniques in future iterations. This observation rests solely on my own classifier's metrics, which may have limitations of their own, but it points to interesting possibilities for future work in this area.
