Code elements inside web page are badly processed for FineWeb

#2
by melodyinray - opened

It seems code elements inside web pages are badly processed by trafilatura.
For example:

url: https://dba.stackexchange.com/questions/27816/applying-user-defined-fields-to-arbitrary-entities 
text: Currently we have an old (rather crude) system that has user-defined fields, which are mapped against rows in arbitrary tables. This was an after-the-fact modification based on a customer request, and it wasn't really designed to scale well. Our system has around 60 different types of entities, which makes things even more complicated. Essentially the implementation looks like this:
This gets nice and fun when we generate our own ways to index compound primary keys, but that's another DailyWTF-worthy story.
...

Comparing against the original page at that URL (screenshot: 截屏2024-11-11 下午8.56.19.png):

Code blocks are incorrectly removed, and they would be hard to restore unless we have the original HTML.

OpenCoder org

@melodyinray Thanks for your attention! For this data, we didn't do any HTML-to-text extraction ourselves. Instead, we directly pulled from FineWeb, performed three rounds of recall on it, and arrived at the current dataset. The removal of code blocks is likely an error introduced during FineWeb's own parsing process :-p

OpenCoder org

@melodyinray You may use the open-source repo https://github.com/SivilTaram/code-html-to-markdown (a code-friendly processing library) to parse HTML into Markdown.
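The linked repo's own API isn't shown in this thread, so as a minimal illustration of what "code-friendly" extraction means in contrast to plain text extraction, here is a stdlib-only sketch that keeps `<pre>` blocks verbatim as fenced code while flattening surrounding prose (the example HTML and class name are hypothetical; the real library handles far more cases, e.g. nested markup and syntax highlighting wrappers):

```python
from html.parser import HTMLParser


class CodeAwareExtractor(HTMLParser):
    """Extract visible text from HTML, preserving <pre> blocks as fenced code.

    A minimal sketch only; not the API of code-html-to-markdown.
    """

    def __init__(self):
        super().__init__()
        self.parts = []
        self.in_pre = 0   # depth counter for <pre> nesting
        self.skip = 0     # depth counter for <script>/<style> content

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
        elif tag == "pre":
            self.in_pre += 1
            self.parts.append("\n```\n")  # open a fenced code block

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = max(0, self.skip - 1)
        elif tag == "pre" and self.in_pre:
            self.in_pre -= 1
            self.parts.append("\n```\n")  # close the fenced code block
        elif tag == "p":
            self.parts.append("\n\n")     # paragraph break

    def handle_data(self, data):
        if self.skip:
            return
        if self.in_pre:
            self.parts.append(data)  # keep whitespace verbatim inside code
        elif data.strip():
            self.parts.append(data.strip() + " ")

    def text(self):
        return "".join(self.parts).strip()


# Hypothetical snippet mimicking the Stack Exchange page structure above.
html = (
    "<p>Essentially the implementation looks like this:</p>"
    "<pre>CREATE TABLE Fields (id INT);</pre>"
)
parser = CodeAwareExtractor()
parser.feed(html)
print(parser.text())
```

A plain-text extractor that drops `<pre>` entirely would lose the `CREATE TABLE` statement here, which is exactly the failure mode reported above.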

Thanks for the reminder. If we had the original HTML from Common Crawl, that would be easy.

crazycth changed discussion status to closed
