{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:30:24.027416Z" }, "title": "Punjabi to English Bidirectional NMT System", "authors": [ { "first": "Kamal", "middle": [], "last": "Deep", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ajit", "middle": [], "last": "Kumar", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Vishal", "middle": [], "last": "Goyal", "suffix": "", "affiliation": {}, "email": "vishal.pup@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Machine Translation has been an active area of research for the last few decades. Today, Corpus-based Machine Translation systems are very popular. Statistical Machine Translation and Neural Machine Translation are both based on a parallel corpus. In this research, a Punjabi to English Bidirectional Neural Machine Translation system is developed. To improve the accuracy of the Neural Machine Translation system, Word Embedding and Byte Pair Encoding are used. The best BLEU score obtained is 38.30 for the Punjabi to English Neural Machine Translation system and 36.96 for the English to Punjabi Neural Machine Translation system.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Machine Translation has been an active area of research for the last few decades. Today, Corpus-based Machine Translation systems are very popular. Statistical Machine Translation and Neural Machine Translation are both based on a parallel corpus. In this research, a Punjabi to English Bidirectional Neural Machine Translation system is developed. To improve the accuracy of the Neural Machine Translation system, Word Embedding and Byte Pair Encoding are used. 
The best BLEU score obtained is 38.30 for the Punjabi to English Neural Machine Translation system and 36.96 for the English to Punjabi Neural Machine Translation system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine Translation (MT) is a popular topic in Natural Language Processing (NLP). An MT system takes source-language text as input and translates it into target-language text (Banik et al., 2019). Various approaches have been developed for MT systems, for example, Rule-based, Example-based, Statistical-based, Neural Network-based, and Hybrid-based (Mall and Jaiswal, 2018). Among all these approaches, the Statistical-based and Neural Network-based approaches are the most popular in the MT research community. Statistical and Neural Network-based approaches are data-driven (Mahata et al., 2018); both need a parallel corpus for training and validation (Khan Jadoon et al., 2017). Due to this, the accuracy of these systems is higher than that of Rule-based systems.", "cite_spans": [ { "start": 176, "end": 196, "text": "(Banik et al., 2019)", "ref_id": "BIBREF1" }, { "start": 351, "end": 375, "text": "(Mall and Jaiswal, 2018)", "ref_id": "BIBREF6" }, { "start": 577, "end": 598, "text": "(Mahata et al., 2018)", "ref_id": "BIBREF5" }, { "start": 657, "end": 683, "text": "(Khan Jadoon et al., 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Neural Machine Translation (NMT) is a trending approach these days (Pathak et al., 2018). Deep learning is a fast-expanding branch of machine learning and has demonstrated excellent performance when applied to a range of tasks such as speech generation, DNA prediction, NLP, image recognition, and MT. 
In this NLP tools demonstration, a Punjabi to English bidirectional NMT system is showcased.", "cite_spans": [ { "start": 71, "end": 92, "text": "(Pathak et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The NMT system is based on the sequence-to-sequence architecture, which converts one sequence into another sequence (Sutskever et al., 2011). For example, in MT, the sequence-to-sequence architecture converts a source-text (Punjabi) sequence into a target-text (English) sequence. The NMT system uses an encoder to convert the input text into a fixed-size vector and a decoder to generate the output from this encoded vector. This encoder-decoder framework is based on the Recurrent Neural Network (RNN) (Wo\u0142k and Marasek, 2015) (Goyal and Misra Sharma, 2019). This basic encoder-decoder framework is suitable only for short sentences and does not work well for long sentences. The use of an attention mechanism with the encoder-decoder framework is a solution for that: during translation, attention is paid to the relevant subparts of the sentence.", "cite_spans": [ { "start": 148, "end": 172, "text": "(Sutskever et al., 2011)", "ref_id": "BIBREF9" }, { "start": 520, "end": 544, "text": "(Wo\u0142k and Marasek, 2015)", "ref_id": "BIBREF10" }, { "start": 545, "end": 575, "text": "(Goyal and Misra Sharma, 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For this demonstration, the Punjabi-English corpus is prepared by collecting text from various online resources. Several processing steps are applied to make the corpus clean and useful for training. A parallel corpus of 259623 sentences is used for training, development, and testing of the system. 
This parallel corpus is divided into training (256787 sentences), development (1418 sentences), and testing (1418 sentences) sets after shuffling the whole corpus with a Python script.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Development", "sec_num": "2" }, { "text": "Pre-processing is the primary step in the development of an MT system. Various steps are performed in the pre-processing phase: tokenization of the Punjabi and English text, lowercasing of the English text, removal of contractions in the English text, and removal of long sentences (more than 40 tokens).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-processing of Corpus", "sec_num": "3" }, { "text": "To develop the Punjabi to English Bidirectional NMT system, the OpenNMT toolkit (Klein et al., 2017) is used.", "cite_spans": [ { "start": 80, "end": 100, "text": "(Klein et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "OpenNMT is an open-source ecosystem for neural sequence learning and NMT. Two models are developed: one for translation from Punjabi to English and the second for translation from English to Punjabi. A Punjabi vocabulary of 75332 words and an English vocabulary of 93458 words are built in the pre-processing step of training the NMT system. For all models, the batch size is fixed at 32 and training runs for 25 epochs. A BiLSTM is used for the encoder, and an LSTM is used for the decoder. The number of hidden layers is set to four in both the encoder and the decoder, and each layer has 500 cells. BPE (Banar et al., 2020) is used to reduce the vocabulary size, as NMT suffers from the fixed-vocabulary problem. The Punjabi vocabulary size after BPE is 29500 words and the English vocabulary size after BPE is 28879 words. 
The \"general\" attention function is used.", "cite_spans": [ { "start": 622, "end": 642, "text": "(Banar et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "Using Python and Flask, a web-based interface is also developed for the Punjabi to English bidirectional NMT system. This interface uses the two models at the backend to translate Punjabi text to English text and English text to Punjabi text. The user enters input in the given text area, selects the appropriate NMT model from the dropdown, and clicks the submit button. The input is pre-processed, and the NMT model then translates it into the target text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "BLEU score Punjabi to English NMT model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "English to Punjabi NMT model 36.96 Table 1: BLEU score of both models", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 42, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "38.30", "sec_num": null }, { "text": "Both proposed models are evaluated using the BLEU score (Snover et al., 2006). The BLEU score obtained at each epoch is recorded in a table for both models. 
", "cite_spans": [ { "start": 59, "end": 80, "text": "(Snover et al., 2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Character-level Transformer-based Neural Machine Translation", "authors": [ { "first": "Nikolay", "middle": [], "last": "Banar", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Kestemont", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.11239" ] }, "num": null, "urls": [], "raw_text": "Nikolay Banar, Walter Daelemans, and Mike Kestemont. 2020. Character-level Transformer-based Neural Machine Translation, arXiv:2005.11239.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Statistical-based system combination approach to gain advantages over different machine translation systems", "authors": [ { "first": "Debajyoty", "middle": [], "last": "Banik", "suffix": "" }, { "first": "Asif", "middle": [], "last": "Ekbal", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "Siddhartha", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2019, "venue": "Heliyon", "volume": "5", "issue": "9", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debajyoty Banik, Asif Ekbal, Pushpak Bhattacharyya, Siddhartha Bhattacharyya, and Jan Platos. 2019. Statistical-based system combination approach to gain advantages over different machine translation systems. 
Heliyon, 5(9):e02504.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "LTRC-MT Simple & Effective Hindi-English Neural Machine Translation Systems at WAT 2019", "authors": [ { "first": "Vikrant", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Dipti Misra", "middle": [], "last": "Sharma", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 6th Workshop on Asian Translation", "volume": "", "issue": "", "pages": "137--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vikrant Goyal and Dipti Misra Sharma. 2019. LTRC-MT Simple & Effective Hindi-English Neural Machine Translation Systems at WAT 2019. In Proceedings of the 6th Workshop on Asian Translation, Hong Kong, China, pages 137-140.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Statistical machine translation of Indian languages: a survey", "authors": [ { "first": "Nadeem", "middle": [], "last": "Khan Jadoon", "suffix": "" }, { "first": "Waqas", "middle": [], "last": "Anwar", "suffix": "" }, { "first": "Usama Ijaz", "middle": [], "last": "Bajwa", "suffix": "" }, { "first": "Farooq", "middle": [], "last": "Ahmad", "suffix": "" } ], "year": 2017, "venue": "Neural Computing and Applications", "volume": "31", "issue": "7", "pages": "2455--2467", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nadeem Khan Jadoon, Waqas Anwar, Usama Ijaz Bajwa, and Farooq Ahmad. 2017. Statistical machine translation of Indian languages: a survey. Neural Computing and Applications, 31(7):2455-2467.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "OpenNMT: Open-source Toolkit for Neural Machine Translation. 
", "authors": [ { "first": "Guillaume", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Josep", "middle": [], "last": "Crego", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL 2017, System Demonstrations", "volume": "", "issue": "", "pages": "67--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, Alexander M. Rush, and Josep Crego. 2017. OpenNMT: Open-source Toolkit for Neural Machine Translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "SMT vs NMT: A Comparison over Hindi & Bengali Simple Sentences", "authors": [ { "first": "Sainik Kumar", "middle": [], "last": "Mahata", "suffix": "" }, { "first": "Soumil", "middle": [], "last": "Mandal", "suffix": "" }, { "first": "Dipankar", "middle": [], "last": "Das", "suffix": "" }, { "first": "Sivaji", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2018, "venue": "International Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "175--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sainik Kumar Mahata, Soumil Mandal, Dipankar Das, and Sivaji Bandyopadhyay. 2018. SMT vs NMT: A Comparison over Hindi & Bengali Simple Sentences. 
In International Conference on Natural Language Processing, pages 175-182.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Survey: Machine Translation for Indian Language", "authors": [ { "first": "Shachi", "middle": [], "last": "Mall", "suffix": "" }, { "first": "Umesh", "middle": [], "last": "Chandra Jaiswal", "suffix": "" } ], "year": 2018, "venue": "International Journal of Applied Engineering Research", "volume": "13", "issue": "1", "pages": "202--209", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shachi Mall and Umesh Chandra Jaiswal. 2018. Survey: Machine Translation for Indian Language. International Journal of Applied Engineering Research, 13(1):202-209.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "English-Mizo Machine Translation using neural and statistical approaches", "authors": [ { "first": "Amarnath", "middle": [], "last": "Pathak", "suffix": "" }, { "first": "Partha", "middle": [], "last": "Pakray", "suffix": "" }, { "first": "Jereemi", "middle": [], "last": "Bentham", "suffix": "" } ], "year": 2018, "venue": "Neural Computing and Applications", "volume": "31", "issue": "11", "pages": "7615--7631", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amarnath Pathak, Partha Pakray, and Jereemi Bentham. 2018. English-Mizo Machine Translation using neural and statistical approaches. 
Neural Computing and Applications, 31(11):7615-7631.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A study of translation edit rate with targeted human annotation", "authors": [ { "first": "Matthew", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Linnea", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "AMTA 2006 -Proceedings of the 7th Conference of the Association for Machine Translation of the Americas: Visions for the Future of Machine Translation", "volume": "", "issue": "", "pages": "223--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. AMTA 2006 -Proceedings of the 7th Conference of the Association for Machine Translation of the Americas: Visions for the Future of Machine Translation:223-231.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Generating Text with Recurrent Neural Networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "James", "middle": [], "last": "Martens", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 28th International Conference on Machine Learning", "volume": "131", "issue": "", "pages": "1017--1024", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, James Martens, and Geoffrey Hinton. 2011. Generating Text with Recurrent Neural Networks. 
Proceedings of the 28th International Conference on Machine Learning, 131(1):1017-1024.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Neural-based Machine Translation for Medical Text Domain", "authors": [ { "first": "Krzysztof", "middle": [], "last": "Wo\u0142k", "suffix": "" }, { "first": "Krzysztof", "middle": [], "last": "Marasek", "suffix": "" } ], "year": 2015, "venue": "Based on European Medicines Agency Leaflet Texts. International Conference on Project MANagement", "volume": "64", "issue": "", "pages": "2--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krzysztof Wo\u0142k and Krzysztof Marasek. 2015. Neural-based Machine Translation for Medical Text Domain. Based on European Medicines Agency Leaflet Texts. International Conference on Project MANagement, 64:2-9.", "links": null } }, "ref_entries": { "TABREF0": { "text": "shows the BLEU score of both models. The best BLEU score obtained is 38.30 for the Punjabi to English Neural Machine Translation system and 36.96 for the English to Punjabi Neural Machine Translation system.", "type_str": "table", "html": null, "content": "