Cleaning method

#2
by VIGNERON - opened

Hi y'all,

Thanks for this dataset. I see that it is described as a « Wikisource dataset containing cleaned articles of all languages. », but sadly the cleaning method is not a good one.

https://ca.wikisource.org/wiki/Comunicat%20de%20Berl%C3%ADn is not representative of how most texts are structured on Wikisources; it's an old way of displaying texts that is rare (and becomes a bit rarer every day). Most texts are like this one: https://ca.wikisource.org/wiki/La_caiguda_de_Morella, and if you take its source code and « strip markdown and unwanted sections », it leaves nothing...

Proposed solutions:

  • process the rendered content instead of the source code (not sure it's in the regular dumps, though :/ maybe via Wikimedia Enterprise?)
  • process the pages in the Page namespace (beware, the namespace id can differ between Wikisources :/ see the lookup sketch right after this list)
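For the namespace option, here is a quick sketch of how the id could be looked up per wiki (just an illustration, not part of the dataset code; if I'm not mistaken the canonical name of the ProofreadPage namespace is "Page" everywhere, only the local name and numeric id change):

# Sketch: look up the ProofreadPage "Page" namespace id for a given Wikisource,
# since the numeric id differs between wikis.
import requests

def page_namespace_id(lang: str):
    """Return the numeric id of the namespace whose canonical name is 'Page'."""
    url = f"https://{lang}.wikisource.org/w/api.php"
    params = {
        "action": "query",
        "meta": "siteinfo",
        "siprop": "namespaces",
        "format": "json",
        "formatversion": "2",
    }
    namespaces = requests.get(url, params=params, timeout=30).json()["query"]["namespaces"]
    for ns in namespaces.values():
        if ns.get("canonical") == "Page":
            return ns["id"]
    return None

print(page_namespace_id("ca"))  # expected: 102 on Catalan Wikisource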

Also, maybe remove the disambiguation pages (which by definition have no real content).

Hi @VIGNERON ,

Thanks for pointing out this issue. The parsing/cleaning procedure we use is the same as for "wikimedia/wikipedia", and it comes from the original (now deprecated) "wikipedia" dataset script. See: https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py#L1043

  • Note that, as a first simple step, we decided to remove all markup: templates, files, category prefixes,...
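Roughly, the idea of that stripping step can be sketched with mwparserfromhell (just an illustration; the actual script linked above does more, e.g. also dropping unwanted sections):

# Rough sketch of the "strip everything" approach (illustrative only; not the
# exact logic of the wikipedia.py script linked above).
import mwparserfromhell

def clean_wikitext(text: str) -> str:
    wikicode = mwparserfromhell.parse(text)
    # strip_code drops templates and tag markup and unwraps link syntax,
    # keeping only the readable text
    return wikicode.strip_code().strip()

src = "{{header|title=Foo}}\nSome actual text with a [[link|piped link]]."
print(clean_wikitext(src))  # -> "Some actual text with a piped link."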

Specifically for the case of Wikisource, we discussed long ago with @isaacj (Isaac Johnson, research scientist at Wikimedia). See: https://github.com/huggingface/datasets/pull/3418

  • I see I may have misunderstood, but I thought that the more representative case (or at least the long-term target) was the contrary, i.e. pages containing text directly, and that pages containing only a template would eventually be converted to text.
  • You say it is the other way around. That will make our approach yield fewer and fewer articles with text in the future...

Indeed, this is a known issue and we are discussing potential solutions (pros/cons) to improve the quality of the data for future versions.


Hi @albertvillanova ,

Indeed there was a misunderstanding, and it is the opposite: pages containing text directly are (slowly) being converted to pages "transcluding" it (i.e. calling other pages via the "pages" tag).
Some Wikisources already have 100% (or almost) of their texts using the "new" method (Bangla, Breton, Hindi, French and Polish are all over 99% transclusion, and 23 Wikisources are over 50%; see the red parts of the graphs at the bottom here: https://phetools.toolforge.org/transclusions.html).
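To make that concrete, the source of a "new style" mainspace page is typically little more than a header template and a pages tag, something like the following (the index name is made up here), which is why stripping everything leaves nothing:

# Illustration: the wikitext of a "transcluding" mainspace page is essentially just
# a header template plus a <pages/> tag, so the strip-everything approach keeps
# little or nothing of it.
import mwparserfromhell

transcluding_source = (
    "{{header\n | title = La caiguda de Morella\n}}\n"
    '<pages index="Example.djvu" from="5" to="12" />'
)
stripped = mwparserfromhell.parse(transcluding_source).strip_code().strip()
print(repr(stripped))  # typically empty or near-empty: the real text lives in the Page: namespace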

Removing templates and code is not a bad idea in itself, but it is irrelevant here (as the "pages" tag doesn't contain anything meaningful by itself, just a link to the scan of the text).

Good to know that you're in touch with people who can help. I'm eager to see a truly clean result! And tell me if I can help in any way.

Cheers,


This is excellent feedback @VIGNERON and saves me a lot of time, so many thanks. Both of the options you proposed are available to us:

  • The Enterprise HTML dumps are available for Wikisource -- e.g., at https://dumps.wikimedia.org/other/enterprise_html/runs/20231201/ you'll see <lang>wikisource dumps -- so we could work with them and the main namespace as suggested (a reading sketch follows this list).
  • That also makes sense regarding the Page namespace as the preferred source of the transcribed text. I generated the list of relevant namespace IDs below in case we'd like to give it a try.
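For reference, a rough sketch of how those HTML dumps could be read (the file name and the JSON field names here are from memory of the Enterprise schema and should be double-checked against an actual dump before relying on them):

# Sketch of reading a Wikimedia Enterprise HTML dump for a Wikisource.
# The dumps are tar.gz archives of JSON Lines files; field names below are
# assumptions from memory of the Enterprise schema.
import json
import tarfile

DUMP = "cawikisource-NS0-20231201-ENTERPRISE-HTML.json.tar.gz"  # hypothetical local copy

with tarfile.open(DUMP, "r:gz") as tar:
    for member in tar:
        f = tar.extractfile(member)
        if f is None:
            continue
        for line in f:  # one JSON object per line
            article = json.loads(line)
            title = article.get("name")
            html = article.get("article_body", {}).get("html", "")
            # ...feed `html` to an HTML-aware extractor instead of wikitext stripping
            print(title, len(html))
            break  # just peek at the first article in this sketch
        break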

The quick fix is to stay with wikitext but switch to the Page namespace using my list of prefixes below, and then retain the standard parsing logic to remove templates etc. Similar to my reasoning in https://huggingface.co/datasets/wikimedia/wikipedia/discussions/51#656f7d12cb4bff8b106015a2, I presume the long-term best solution for this dataset is also the HTML dumps. That will take more work to implement, but I can try to do a sprint on it in January and see where we get. I wasn't present in the above-mentioned meeting with Wikimedia Enterprise, so I'll also defer if there are details from it that change the thinking.

One other thing: the first and last sentences on a Wikisource Page are often incomplete (this is true of both a wikitext and an HTML solution). How concerned are we about this? Leave it as is? Should there be additional logic that attempts to concatenate sequential pages? Or a layer of sentence tokenization that removes the first sentence if it doesn't begin with a capital letter (would that generalize to all languages?) and the last sentence if it doesn't end with a full stop? I assume this has been encountered in other book/document/transcription-related datasets?

Example: https://en.wikisource.org/wiki/Page%3ALittle_Elephant's_Christmas%2C_story_(IA_littleelephantsc00wash).pdf/12
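For what it's worth, a crude sketch of the capital-letter / full-stop heuristic floated above (clearly too naive to generalize across languages and scripts; just to make the idea concrete):

# Crude sketch of the heuristic: drop a leading fragment that doesn't start with an
# uppercase letter and a trailing fragment that doesn't end with sentence-final
# punctuation. This will NOT generalize to uncased scripts or other conventions.
import re

_SENT_SPLIT = re.compile(r"(?<=[.!?])\s+")

def trim_partial_sentences(text: str) -> str:
    sentences = _SENT_SPLIT.split(text.strip())
    if sentences and sentences[0] and not sentences[0][0].isupper():
        sentences = sentences[1:]   # likely a continuation from the previous page
    if sentences and not sentences[-1].rstrip().endswith((".", "!", "?")):
        sentences = sentences[:-1]  # likely continues on the next page
    return " ".join(sentences)

print(trim_partial_sentences(
    "ing on his back. The little elephant was happy. He walked all the"
))
# -> "The little elephant was happy."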

# Page namespace prefixes. For context, see: https://en.wikisource.org/wiki/Help:Namespaces
# Code: https://public-paws.wmcloud.org/User:Isaac%20(WMF)/wikisource-page-namespaces.ipynb
{'ar': 104,
 'as': 104,
 'az': 250,
 'ban': 250,
 'be': 104,
 'bg': 250,
 'bn': 104,
 'br': 102,
 'bs': 250,
 'ca': 102,
 'cs': 250,
 'cy': 104,
 'da': 104,
 'de': 102,
 'el': 100,
 'en': 104,
 'eo': 104,
 'es': 102,
 'et': 102,
 'eu': 250,
 'fa': 104,
 'fi': 250,
 'fo': 250,
 'fr': 104,
 'gl': 250,
 'gu': 104,
 'he': 104,
 'hi': 250,
 'hr': 102,
 'hu': 104,
 'hy': 104,
 'id': 104,
 'is': 250,
 'it': 108,
 'ja': 250,
 'jv': 250,
 'kn': 104,
 'ko': 250,
 'la': 104,
 'li': 250,
 'lij': 250,
 'lt': 250,
 'mk': 250,
 'ml': 106,
 'mr': 104,
 'nan': 250,
 'nap': 250,
 'nl': 104,
 'no': 104,
 'or': 250,
 'pa': 250,
 'pl': 100,
 'pms': 102,
 'pt': 106,
 'ro': 104,
 'ru': 104,
 'sa': 104,
 'sah': 250,
 'sk': 250,
 'sl': 100,
 'sr': 250,
 'su': 250,
 'sv': 104,
 'ta': 250,
 'te': 104,
 'th': 250,
 'tr': 250,
 'uk': 250,
 'vec': 102,
 'vi': 104,
 'wa': 250,
 'yi': 250,
 'zh': 104}
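And a rough sketch of how this mapping could be used for the "quick fix" (filter the XML dump to the Page namespace, then apply the existing stripping); the dump path and the clean_wikitext helper here are placeholders, not actual dataset code:

# Hedged sketch of the "quick fix": keep only dump pages whose <ns> matches the
# Page namespace id from the mapping above, then apply the existing wikitext stripping.
import bz2
import xml.etree.ElementTree as ET

PAGE_NS = {"ca": 102, "fr": 104, "pl": 100}  # subset of the mapping above

def _local(tag: str) -> str:
    """Drop the XML namespace prefix, e.g. '{http://...}page' -> 'page'."""
    return tag.rsplit("}", 1)[-1]

def iter_page_ns_articles(dump_path: str, lang: str):
    target_ns = str(PAGE_NS[lang])
    with bz2.open(dump_path, "rb") as f:
        for _event, elem in ET.iterparse(f):
            if _local(elem.tag) != "page":
                continue
            fields = {_local(child.tag): child for child in elem.iter()}
            if fields["ns"].text == target_ns:
                yield fields["title"].text, fields["text"].text or ""
            elem.clear()  # keep memory bounded while streaming the dump

# for title, wikitext in iter_page_ns_articles("cawikisource-latest-pages-articles.xml.bz2", "ca"):
#     cleaned = clean_wikitext(wikitext)  # e.g. the stripping sketch earlier in this thread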
