Shiksha: A Technical Domain focused Translation Dataset and Model for Indian Languages
Abstract
Neural Machine Translation (NMT) models are typically trained on datasets with limited exposure to Scientific, Technical and Educational domains. As a result, translation models generally struggle with tasks that involve scientific understanding or technical jargon, and their performance is even worse for low-resource Indian languages. Finding a translation dataset that caters to these domains in particular poses a difficult challenge. In this paper, we address this by creating a multilingual parallel corpus containing more than 2.8 million rows of English-to-Indic and Indic-to-Indic high-quality translation pairs across 8 Indian languages. We achieve this by bitext mining human-translated transcriptions of NPTEL video lectures. We also finetune and evaluate NMT models using this corpus and surpass all other publicly available models at in-domain tasks. We further demonstrate the potential for generalizing to out-of-domain translation tasks by improving the baseline by over 2 BLEU on average for these Indian languages on the Flores+ benchmark. We are pleased to release our model and dataset via this link: https://huggingface.co/SPRINGLab.
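The abstract states that the parallel pairs were obtained by bitext mining human-translated NPTEL lecture transcriptions. Below is a minimal sketch of one way such mining can be done, assuming a multilingual sentence encoder (LaBSE here) and a simple cosine-similarity threshold; the paper's actual encoder, scoring function, and threshold are not given in the abstract, so these choices are illustrative assumptions rather than the authors' method.

```python
# Hypothetical bitext-mining sketch: align English and Indic transcript
# sentences by embedding similarity. LaBSE and the 0.8 threshold are
# assumptions for illustration, not details taken from the paper.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("sentence-transformers/LaBSE")

def mine_pairs(en_sentences, indic_sentences, threshold=0.8):
    """Return (English, Indic, score) pairs whose similarity clears the threshold."""
    en_emb = encoder.encode(en_sentences, convert_to_tensor=True, normalize_embeddings=True)
    in_emb = encoder.encode(indic_sentences, convert_to_tensor=True, normalize_embeddings=True)
    scores = util.cos_sim(en_emb, in_emb)  # shape: (len(en_sentences), len(indic_sentences))
    pairs = []
    for i in range(len(en_sentences)):
        j = int(scores[i].argmax())        # best Indic candidate for this English sentence
        if float(scores[i][j]) >= threshold:
            pairs.append((en_sentences[i], indic_sentences[j], float(scores[i][j])))
    return pairs

# Toy example with a single lecture sentence.
english = ["Today we will discuss the Fourier transform."]
hindi = ["आज हम फूरिये रूपांतरण पर चर्चा करेंगे।"]
print(mine_pairs(english, hindi))
```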
Community
Releasing Shiksha: A Technical Domain focused Translation Dataset and Model for Indian Languages
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Can General-Purpose Large Language Models Generalize to English-Thai Machine Translation? (2024)
- SPRING Lab IITM's Submission to Low Resource Indic Language Translation Shared Task (2024)
- Marco-LLM: Bridging Languages via Massive Multilingual Training for Cross-Lingual Enhancement (2024)
- Maya: An Instruction Finetuned Multilingual Multimodal Model (2024)
- From Priest to Doctor: Domain Adaptation for Low-Resource Neural Machine Translation (2024)
- BhasaAnuvaad: A Speech Translation Dataset for 13 Indian Languages (2024)
- NLIP-Lab-IITH Multilingual MT System for WAT24 MT Shared Task (2024)