---
language:
- ar
license: mit
size_categories:
- 100K<n<1M
---

# Detect Egyptian Wikipedia Template-translated Articles

## Dataset Description:

We release the heuristically filtered, manually processed, and automatically classified Egyptian Arabic Wikipedia articles dataset. This dataset was used to develop a **web-based detection system** that automatically identifies template-translated articles on the Egyptian Arabic Wikipedia edition. The system is called [**Egyptian Arabic Wikipedia Scanner**](https://egyptian-wikipedia-scanner.streamlit.app/) and is hosted on Hugging Face Spaces, here: [**SaiedAlshahrani/Egyptian-Wikipedia-Scanner**](https://huggingface.co/spaces/SaiedAlshahrani/Egyptian-Wikipedia-Scanner). This dataset is introduced in the research paper "[***Leveraging Corpus Metadata to Detect Template-based Translation: An Exploratory Case Study of the Egyptian Arabic Wikipedia Edition***](https://aclanthology.org/2024.osact-1.4/)", which was **accepted** at [LREC-COLING 2024](https://lrec-coling-2024.org/): [The 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT6)](https://osact-lrec.github.io/). The dataset is currently released under an MIT license.

## Dataset Sources:

This Egyptian Arabic Wikipedia articles dataset was extracted from the complete [Wikipedia dumps](https://dumps.wikimedia.org/backup-index.html) of the Egyptian Arabic Wikipedia edition, downloaded on the 1st of January 2024, and processed using the [Gensim](https://radimrehurek.com/gensim/) Python library (see the extraction sketch in the code examples below).

## Dataset Features:

We utilized the Wikimedia [XTools API](https://www.mediawiki.org/wiki/XTools) to collect the metadata (dataset features) of the Egyptian Arabic Wikipedia articles. Specifically, we collected the following metadata/features for each article: **total edits**, **total editors**, **top editors**, **total bytes**, **total characters**, **total words**, **creator name**, and **creation date** (see the metadata-collection sketch below).

## Dataset Subsets:

1. **Balanced**: A balanced subset comprising 20K articles (10K per class), split 80:20 into training and testing sets. This subset was filtered and processed using selected heuristic rules.
2. **Unbalanced**: An unbalanced subset comprising 166K articles, split 80:20 into training and testing sets. This subset contains the remaining articles filtered and processed using the selected heuristic rules.
3. **Uncategorized**: Another unbalanced subset comprising 569K articles, split 80:20 into training and testing sets; its articles were classified automatically using the `XGBoost` classifier trained on the balanced subset (see the classifier sketch below).

## Dataset Citations:

Saied Alshahrani, Hesham Haroon, Ali Elfilali, Mariama Njie, and Jeanna Matthews. 2024. [Leveraging Corpus Metadata to Detect Template-based Translation: An Exploratory Case Study of the Egyptian Arabic Wikipedia Edition](https://arxiv.org/abs/2404.00565). *arXiv preprint arXiv:2404.00565*.

Saied Alshahrani, Hesham Haroon, Ali Elfilali, Mariama Njie, and Jeanna Matthews. 2024. [Leveraging Corpus Metadata to Detect Template-based Translation: An Exploratory Case Study of the Egyptian Arabic Wikipedia Edition](https://aclanthology.org/2024.osact-1.4/). In *Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024*, pages 31–45, Torino, Italia. ELRA and ICCL.
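
## Code Examples:

The sketches below illustrate the pipeline described above; they are minimal, hedged examples, not the authors' released code. First, a sketch of streaming article text out of an Egyptian Arabic Wikipedia dump with Gensim's `WikiCorpus`; the dump filename is a hypothetical local path.

```python
# A minimal sketch (not the authors' exact pipeline) of extracting article
# text from an Egyptian Arabic (arz) Wikipedia dump with Gensim's WikiCorpus.
from gensim.corpora.wikicorpus import WikiCorpus

# Hypothetical local path to the arzwiki dump downloaded from
# https://dumps.wikimedia.org/backup-index.html
DUMP_PATH = "arzwiki-20240101-pages-articles.xml.bz2"

# WikiCorpus streams the dump and strips wiki markup from every article;
# passing an empty dict as `dictionary` skips vocabulary building.
wiki = WikiCorpus(DUMP_PATH, dictionary={})

for i, tokens in enumerate(wiki.get_texts()):
    article_text = " ".join(tokens)
    # ... store `article_text` for later filtering and processing ...
    if i == 2:  # preview only the first few articles
        break
```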
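Next, a sketch of collecting per-article metadata, assuming the XTools `articleinfo` and `prose` page endpoints and their usual JSON field names; the card does not specify exactly which endpoints or fields the authors queried.

```python
# A minimal sketch (assumed endpoints and field names, not the authors'
# exact collection code) of gathering article metadata from the XTools API.
import requests

XTOOLS = "https://xtools.wmcloud.org/api/page"
PROJECT = "arz.wikipedia.org"  # Egyptian Arabic Wikipedia

def fetch_metadata(title: str) -> dict:
    # `articleinfo` covers edits, editors, creator, and creation date;
    # `prose` covers bytes, characters, and words.
    info = requests.get(f"{XTOOLS}/articleinfo/{PROJECT}/{title}", timeout=30).json()
    prose = requests.get(f"{XTOOLS}/prose/{PROJECT}/{title}", timeout=30).json()
    return {
        "total_edits": info.get("revisions"),
        "total_editors": info.get("editors"),
        "creator_name": info.get("author"),
        "creation_date": info.get("created_at"),
        "total_bytes": prose.get("bytes"),
        "total_characters": prose.get("characters"),
        "total_words": prose.get("words"),
    }

print(fetch_metadata("القاهرة"))  # example article title: "Cairo"
```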
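Finally, a sketch of training an `XGBoost` classifier on the balanced subset's metadata features with an 80:20 split, as described above; the file name and column names are hypothetical.

```python
# A minimal sketch (hypothetical file and column names) of training the
# XGBoost classifier on the balanced subset's metadata features.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from xgboost import XGBClassifier

# Hypothetical local copy of the balanced subset with the listed features.
df = pd.read_csv("balanced_subset.csv")

# Hypothetical numeric feature columns; categorical features such as
# creator name would need encoding before use.
FEATURES = ["total_edits", "total_editors", "total_bytes",
            "total_characters", "total_words"]
X, y = df[FEATURES], df["label"]  # label: template-translated vs. human-generated

# 80:20 train/test split, matching the ratio described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

clf = XGBClassifier(n_estimators=200, eval_metric="logloss")
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```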