Difference with fineweb-edu-dedup (smol-lm corpus)
Hi, sorry if this has been asked before, but do you know the difference between this version of FineWeb-Edu and the one proposed by the Hugging Face team? (https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus)
I am asking because both have been deduplicated, but this one has 320M rows and the other has 190M.
Hi! smollm-corpus is a collection of multiple datasets, with fineweb-edu being only one subset. As for why their deduplicated fineweb-edu subset differs from ours, I don't know for certain, but I can speculate. Our duplicate detection was essentially as strict as possible: if even a single character differed between two rows (e.g., an extra space or a capitalization difference), we kept both rows. There are a variety of ways to loosen the criteria for what counts as a duplicate, and doing so treats more rows as duplicates, leaving fewer rows in the deduplicated result. I am assuming they used looser criteria than we did.
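To illustrate the difference, here is a minimal sketch (not our actual pipeline, and the normalization step is just one hypothetical way to loosen the criteria): exact-match dedup keys on the raw text, so any single-character difference keeps both rows, while a loosened variant normalizes the text first and therefore merges near-duplicates.

```python
# Minimal sketch: strict (exact) vs. loosened (normalized) deduplication.
# The normalization rule below is illustrative only, not what either dataset used.
import hashlib

def exact_key(text: str) -> str:
    # Strict dedup: any single-character difference (extra space,
    # capitalization, etc.) produces a different key, so both rows are kept.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def normalized_key(text: str) -> str:
    # Looser dedup: lowercase and collapse whitespace before hashing,
    # so near-identical rows collapse into a single key.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def deduplicate(rows, key_fn):
    seen = set()
    kept = []
    for text in rows:
        key = key_fn(text)
        if key not in seen:
            seen.add(key)
            kept.append(text)
    return kept

rows = ["An educational article.", "an educational  article.", "Something else."]
print(len(deduplicate(rows, exact_key)))       # 3 -- strict keeps all rows
print(len(deduplicate(rows, normalized_key)))  # 2 -- looser merges near-duplicates
```

In practice, looser pipelines often go further than simple normalization (e.g., fuzzy or MinHash-based near-duplicate detection), which would shrink the row count even more.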
Makes sense. Shame they don't document it in more detail. Thanks, will investigate further!