{ "paper_id": "I11-1015", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:31:57.641698Z" }, "title": "Comparing Two Techniques for Learning Transliteration Models Using a Parallel Corpus", "authors": [ { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "sajjad@ims.uni-stuttgart.de" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "durrani@ims.uni-stuttgart.de" }, { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "schmid@ims.uni-stuttgart.de" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "fraser@ims.uni-stuttgart.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We compare the use of an unsupervised transliteration mining method and a rulebased method to automatically extract lists of transliteration word pairs from a parallel corpus of Hindi/Urdu. We build joint source channel models on the automatically aligned orthographic transliteration units of the automatically extracted lists of transliteration pairs resulting in two transliteration systems. We compare our systems with three transliteration systems available on the web, and show that our systems have better performance. We perform an extensive analysis of the results of using both methods and show evidence that the unsupervised transliteration mining method is superior for applications requiring high recall transliteration lists, while the rule-based method is useful for obtaining high precision lists.", "pdf_parse": { "paper_id": "I11-1015", "_pdf_hash": "", "abstract": [ { "text": "We compare the use of an unsupervised transliteration mining method and a rulebased method to automatically extract lists of transliteration word pairs from a parallel corpus of Hindi/Urdu. We build joint source channel models on the automatically aligned orthographic transliteration units of the automatically extracted lists of transliteration pairs resulting in two transliteration systems. We compare our systems with three transliteration systems available on the web, and show that our systems have better performance. We perform an extensive analysis of the results of using both methods and show evidence that the unsupervised transliteration mining method is superior for applications requiring high recall transliteration lists, while the rule-based method is useful for obtaining high precision lists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Urdu and Hindi are closely related languages which have a similar phonological, semantic and syntactic structure. Hindi is derived from Sanskrit and Urdu is a mixture of Persian, Arabic, Turkish and Sanskrit. Both share closed class vocabulary which they inherit from Sanskrit. They differ however in the open class vocabulary and in the writing script used. Hindi is written in Devanagari script and borrows most of the open class vocabulary from Sanskrit. Urdu is written in Perso-Arabic script and borrows most of the open class vocabulary from Persian, Arabic, Turkish and Sanskrit. 
Both languages have lived together for centuries and now share a large part of their vocabulary with each other. In an initial study on a small parallel corpus, we found that both languages share approximately 82% (tokens) and 62% (types) of the vocabulary. Transliterating overlapping words will help to bridge the scripting gap between Hindi and Urdu. The remaining words must be converted into the other language with a bilingual dictionary which is beyond the scope of this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, the term transliteration pair refers to a word pair where the words are transliterations of each other and the term transliteration unit refers to a character pair where the characters are transliterations of each other. We are interested in building joint source channel models for transliteration. Because we do not have a list of transliteration pairs to use as training data in building such a transliteration model, we use two methods to extract the list of transliteration pairs from a parallel corpus of Hindi/Urdu. The first method uses the transliteration mining algorithm of Sajjad et al. (2011) to automatically extract transliteration pairs. This approach does not use any language specific knowledge. The second method uses handcrafted transliteration rules specific to the mapping between Hindi and Urdu to extract transliteration pairs. We automatically align the two lists of extracted transliteration pairs at the character level and learn two transliteration models. We compare the results with three other transliteration systems. Both of our transliteration systems perform better than the other systems.", "cite_spans": [ { "start": 600, "end": 620, "text": "Sajjad et al. (2011)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The 1-best output of the transliteration system built on the list extracted using the rule-based method is better than the 1-best output of the system built on the automatically extracted list. The rule-based extraction method is focused on obtaining a high precision list as compared to the automatic method which obtains a higher recall list. The 10-best and 20-best output of the transliteration system built on the automatically extracted list is better than the N-best outputs of the system built on the list extracted using the rule-based method. The wide coverage of transliteration units in the automatically extracted list helps the transliteration system to produce difficult transliterations which are hard to learn using the rulebased list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The transliteration task between Hindi and Urdu is non-trivial. The missing short vowels in the writing of Urdu and a missing short vowel in the writing of Hindi are a particular problem, and we identify other areas of difficulty. We provide a detailed error analysis to account for the complexities in Hindi to Urdu transliteration motivated by linguistic phenomena.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is organized as follows. Previous work on transliteration is summarized in Section 2. The two methods used to extract lists of transliteration pairs are described in Section 3. The joint probability model for transliteration is explained in Section 4. 
The evaluation and the results in comparison with three other transliteration systems are presented in Section 5. A detailed discussion and error analysis is presented in Section 6. Section 7 concludes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Transliteration can be done with phoneme-based or grapheme-based models. Knight and Graehl (1998) , Stalls and Knight (1998) , Al-Onaizan and Knight (2002) and Pervouchine et al. (2009) use the phoneme-based approach for transliteration. Kashani et al. (2007) and Al-Onaizan and Knight (2002) use a grapheme-based model to transliterate from Arabic into English. Al-Onaizan and Knight (2002) compare a grapheme-based approach, a phoneme-based approach and a linear combination of both for transliteration. They build a conditional probability model. The graphemebased model performs better than the phonemebased model and the hybrid model. This motivates our use of grapheme-based models.", "cite_spans": [ { "start": 73, "end": 97, "text": "Knight and Graehl (1998)", "ref_id": "BIBREF9" }, { "start": 100, "end": 124, "text": "Stalls and Knight (1998)", "ref_id": "BIBREF17" }, { "start": 142, "end": 155, "text": "Knight (2002)", "ref_id": "BIBREF0" }, { "start": 160, "end": 185, "text": "Pervouchine et al. (2009)", "ref_id": "BIBREF15" }, { "start": 238, "end": 259, "text": "Kashani et al. (2007)", "ref_id": "BIBREF8" }, { "start": 279, "end": 292, "text": "Knight (2002)", "ref_id": "BIBREF0" }, { "start": 378, "end": 391, "text": "Knight (2002)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "In this paper, we use a grapheme-based approach for transliteration from Hindi to Urdu. The phoneme-based approach would involve the conversion of Hindi and Urdu text into a phonemic representation which is not a trivial task as the short vowel 'a' is not written in Hindi text and no short vowels are written in Urdu text. The difficulty of this additional step would be likely to lead to additional errors. Malik et al. (2008) and Malik et al. (2009) work on transliteration from Hindi to Urdu and Urdu to Hindi respectively. They use the rules of SAMPA (Speech Assessment Methods Pho- Table 1 : Ambiguous Hindi characters (characters which can transliterate to many different Urdu characters) netic Alphabets) and X-SAMPA 1 to develop a phoneme-based mapping scheme between Urdu and Hindi (J C. Wells, 1995) . Malik et al. (2008) reported an accuracy of 97.9% for transliterating Hindi to Urdu. However, this number is not comparable to ours. Some Hindi characters can be ambiguously transliterated to several Urdu characters (see Table 1 ). Malik et al. (2008) do not deal with these ambiguous characters and count any occurrence of an ambiguous character as a correct transliteration in all scenarios. We discuss this further in Section 6.", "cite_spans": [ { "start": 409, "end": 428, "text": "Malik et al. (2008)", "ref_id": "BIBREF12" }, { "start": 433, "end": 452, "text": "Malik et al. (2009)", "ref_id": "BIBREF13" }, { "start": 798, "end": 810, "text": "Wells, 1995)", "ref_id": "BIBREF6" }, { "start": 813, "end": 832, "text": "Malik et al. (2008)", "ref_id": "BIBREF12" }, { "start": 1045, "end": 1064, "text": "Malik et al. 
(2008)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 588, "end": 595, "text": "Table 1", "ref_id": null }, { "start": 1034, "end": 1041, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "In the previous work, a transliteration system is built on transliteration units learned either automatically from a list of transliteration pairs (Li et al., 2004) , (Pervouchine et al., 2009) or using a heuristic-based method (Ekbal et al., 2006) . We do not have a list of transliteration pairs for the training of our Hindi to Urdu transliteration system. Therefore we use two methods to extract transliteration pairs from parallel data of Hindi/Urdu. In the first approach, we use the transliteration mining algorithm proposed by Sajjad et al. (2011) to extract transliteration pairs. This method does not use any language dependent information. In the second approach, we use a rule-based method to extract transliteration pairs. Both processes are imperfect, meaning that there is noise in the extracted list of transliteration pairs. We build a joint source channel model as described by Li et al. (2004) and Ekbal et al. (2006) on the extracted list of transliteration pairs. The following sections describe the two mining approaches and the model in detail.", "cite_spans": [ { "start": 147, "end": 164, "text": "(Li et al., 2004)", "ref_id": "BIBREF11" }, { "start": 167, "end": 193, "text": "(Pervouchine et al., 2009)", "ref_id": "BIBREF15" }, { "start": 228, "end": 248, "text": "(Ekbal et al., 2006)", "ref_id": "BIBREF3" }, { "start": 535, "end": 555, "text": "Sajjad et al. (2011)", "ref_id": "BIBREF16" }, { "start": 896, "end": 912, "text": "Li et al. (2004)", "ref_id": "BIBREF11" }, { "start": 917, "end": 936, "text": "Ekbal et al. (2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "We automatically word-align the parallel corpus and extract a word list, later referred to as \"list of word pairs\" (see Section 5, for details on training data). We use two methods to extract transliteration pairs from the list of word pairs. In the first approach, we automatically extract transliteration pairs using the transliteration mining algorithm as proposed in Sajjad et al. (2011) . We align the transliteration pairs at character level using a character aligner. In the second approach, we use an edit distance metric and handcrafted equivalence rules to extract transliteration pairs from a parallel corpus. We align the list of transliteration pairs at character level using the edit distance metric. The transliteration system is then trained on these character aligned transliteration pairs which is described in Section 5. The following subsections describe the extraction methods in detail.", "cite_spans": [ { "start": 371, "end": 391, "text": "Sajjad et al. (2011)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Extraction of Transliteration Pairs", "sec_num": "3" }, { "text": "In this section, we review the transliteration mining approach described by Sajjad et al. (2011) to automatically extract the transliteration pairs from the list of word pairs. The approach consists of two algorithms, Algorithm 1, which performs an iterative filtering of the word pair list, and Algorithm 2, which determines when Algorithm 1 should be stopped. The details of this process follow. Algorithm 1 is based on an iterative process. 
In each iteration, it first builds a joint transliteration model using g2p (grapheme-to-phoneme converter (Bisani and Ney, 2008 )) on the current list of word pairs. It then filters out 5% of the word pairs which are least likely to be transliterations according to their normalized joint probability, resulting in a reduced word pair list, after which the next iteration begins. In each iteration the word pair list is reduced by 5%.", "cite_spans": [ { "start": 76, "end": 96, "text": "Sajjad et al. (2011)", "ref_id": "BIBREF16" }, { "start": 550, "end": 571, "text": "(Bisani and Ney, 2008", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Extraction of Transliteration Pairs", "sec_num": "3.1" }, { "text": "Algorithm 2 is used to select the optimal stopping iteration for Algorithm 1. Algorithm 2 is an extension of Algorithm 1. It divides the original list of word pairs into two halves which are used as training and held-out data. The division is done using a special splitting method which keeps the morphologically related word pairs from the list of word pairs either in the training data or in the held-out data. It builds a joint sequence model on the training data (approximately half of the list of word pairs) and filters out those 5% word pairs which are least likely to be transliteration pairs. Then it builds a transliteration system using the Moses toolkit (Koehn et al., 2003) on the filtered data and tests it on the source side of the held-out data. It repeats this process for 100 iterations. The iteration which best predicts the held-out data is selected as the stopping iteration for the transliteration mining algorithm.", "cite_spans": [ { "start": 666, "end": 686, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Extraction of Transliteration Pairs", "sec_num": "3.1" }, { "text": "We first ran Algorithm 2 on the list of word pairs for 100 iterations. It returned the 45th iteration as the best stopping iteration for Algorithm 1. Then we ran Algorithm 1 for 45 iterations and obtained a list of 2245 transliteration pairs. Due to data sparsity, there were two Hindi characters which were missing in the extracted list of transliteration pairs. We could either add complete word examples or just transliteration units of the missing Hindi characters to the list of transliteration pairs. Adding examples will provide context information which may bias the results of the evaluation. Thus we added only the two missing 1-to-1 transliteration units to the list of transliteration pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Extraction of Transliteration Pairs", "sec_num": "3.1" }, { "text": "We align the list of transliteration pairs at the character level using a character aligner 2 . The aligner uses the Forward-Backward algorithm to learn the character alignments between the transliteration pairs. It allows only 0 or 1 character on either side of the transliteration unit. So, a source character can align either to a target character or to \u2205 and a target character can align either to a source character or to \u2205. We get three kinds of alignments of Hindi characters to Urdu characters i.e. \u2205 \u2192 1, 1 \u2192 \u2205 and 1 \u2192 1. We modify the \u2205 \u2192 1 alignments by merging the Urdu character with the left neighboring aligned pair. If it is the left-most character, then it is merged with the right neighboring aligned character pair. 
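A minimal sketch of this merging step in Python (our own illustration, not the authors' code; aligned pairs are represented as (Hindi, Urdu) tuples and an empty string stands for the aligner's NULL):

```python
def merge_null_alignments(pairs):
    """Merge every NULL -> Urdu alignment into a neighbouring pair.

    The Urdu character of a (NULL, u) pair is appended to the Urdu side
    of the pair to its left; if the NULL occurs at the left-most
    position, the character is instead attached to the pair to its right.
    """
    merged = []
    pending = ""                      # Urdu characters seen before the first real pair
    for hindi, urdu in pairs:
        if hindi == "":               # NULL -> 1 alignment
            if merged:
                left_h, left_u = merged[-1]
                merged[-1] = (left_h, left_u + urdu)
            else:
                pending += urdu       # left-most position: wait for the right neighbour
        else:
            merged.append((hindi, pending + urdu))
            pending = ""
    return merged


# The example of Table 2 below:
alignment = [("", "A"), ("b", "X"), ("c", "C"), ("", "D"), ("e", ""), ("f", "F")]
print(merge_null_alignments(alignment))   # [('b', 'AX'), ('c', 'CD'), ('e', ''), ('f', 'F')]
```
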
Table 2 shows the alignment of Hindi characters with Urdu characters before and after the merging of unaligned Urdu characters. Table 2 : Hindi-Urdu alignment pairs for transliteration where a) shows initial alignment with NULL alignments and b) shows final alignments after merging of NULL alignments", "cite_spans": [], "ref_spans": [ { "start": 735, "end": 742, "text": "Table 2", "ref_id": null }, { "start": 863, "end": 870, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Automatic Extraction of Transliteration Pairs", "sec_num": "3.1" }, { "text": "a) Hindi \u2205 b c \u2205 e f Urdu A X C D \u2205 F b) Hindi b c e f Urdu AX CD \u2205 F", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Extraction of Transliteration Pairs", "sec_num": "3.1" }, { "text": "As an alternative to automatic extraction of transliteration pairs, we use our own knowledge of the Hindi and Urdu scripts to make the initial transliteration units. The rules are further extended by looking into available Hindi-Urdu transliteration systems and other resources (Gupta, 2004; Malik et al., 2008; Jawaid and Ahmed, 2009) . Table 3 shows some examples of equivalence rules. Each transliteration unit is assigned a cost. A Hindi character which is always mapped to the same Urdu character is assigned zero cost.", "cite_spans": [ { "start": 278, "end": 291, "text": "(Gupta, 2004;", "ref_id": "BIBREF4" }, { "start": 292, "end": 311, "text": "Malik et al., 2008;", "ref_id": "BIBREF12" }, { "start": 312, "end": 335, "text": "Jawaid and Ahmed, 2009)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 338, "end": 345, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Rule-based Extraction of Transliteration Pairs", "sec_num": "3.2" }, { "text": "In some cases, a Hindi character, say H 1 , can be mapped to several different Urdu characters, say U 1 , U 2 and U 3 . We assign an equal cost of 0.3 to all three mappings H 1 to U 1 , H 1 to U 2 and H 1 to U 3 as shown in the last three rows of Table 3 . Table 3 : Hindi-Urdu handcrafted equivalence rules", "cite_spans": [], "ref_spans": [ { "start": 247, "end": 254, "text": "Table 3", "ref_id": null }, { "start": 257, "end": 264, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Rule-based Extraction of Transliteration Pairs", "sec_num": "3.2" }, { "text": "The edit distance metric allows insert, delete and replace operations. The handcrafted rules define the cost of replace operations as shown in Table 3. Each insert and delete operation costs 0.6, except for the deletion of Hindi diacritics where the cost is 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-based Extraction of Transliteration Pairs", "sec_num": "3.2" }, { "text": "If two identical characters occur next to each other in an Urdu word then either only one character is written with a shadda sign after it or both characters are written next to each other. The shadda sign is treated as a diacritic by most Urdu writers and is thus frequently omitted in Urdu text. We deleted all shadda characters in a preprocessing step in order to obtain a consistent representation. Hindi, on the other hand, uses a special joining symbol between two characters to write conjuncts. If the joining symbol is used between two identical characters then it will be transliterated with a shadda in Urdu. 
Assume the joining symbol is \"z\" and L is a character in Hindi.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-based Extraction of Transliteration Pairs", "sec_num": "3.2" }, { "text": "The occurrence L\"z\"L in Hindi will be transliterated as L in Urdu. In the handcrafted rules, we add separate entries mapping Hindi L\"z\"L to Urdu L.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-based Extraction of Transliteration Pairs", "sec_num": "3.2" }, { "text": "Urdu and Hindi differ in their word definition for some particular categories. For example, in Hindi the case marker is always attached to the pronoun, whereas in Urdu, the case marker can be written either as a separate token after the pronoun or can be attached to the pronoun. The edit distance metric was modified to avoid penalizing spaces in Urdu text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-based Extraction of Transliteration Pairs", "sec_num": "3.2" }, { "text": "The raw list of word pairs contains translations (that are not transliterations), transliterations and alignment errors. We apply the edit distance metric to the list of word pairs and extract the list of transliteration pairs. We optimized the costs on a held-out set. We filter out word pairs with a cost of more than 0.6 thus allowing only one deletion/insertion or at most three ambiguous replacements in the Hindi-Urdu pairs (Table 3 ). If we decrease the filtering threshold or increase the replacement cost, the number of types extracted reduces significantly. We obtained 1695 types in the list of transliteration pairs. Due to data sparsity, there were about 5 Hindi characters which were not covered in the list of transliteration pairs. We added transliteration units for the missing Hindi characters to the list of transliteration pairs.", "cite_spans": [], "ref_spans": [ { "start": 430, "end": 438, "text": "(Table 3", "ref_id": null } ], "eq_spans": [], "section": "Rule-based Extraction of Transliteration Pairs", "sec_num": "3.2" }, { "text": "We align the list of word pairs at the character level using the same handcrafted equivalence rules and the edit distance algorithm. We get three kinds of alignments of Hindi characters to Urdu characters i.e. \u2205 \u2192 1, 1 \u2192 \u2205 and 1 \u2192 N . The character alignments produced using the edit distance metric differ from those produced using the character aligner (Section 3.1). The character aligner allows only one character on the source and the target side. The edit distance metric allows a Hindi character to align to more than one Urdu character. We postprocess the alignment \u2205 \u2192 1 as described in Section 3.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-based Extraction of Transliteration Pairs", "sec_num": "3.2" }, { "text": "The character-based translation probability p char (H, U ) is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration Model", "sec_num": "4" }, { "text": "p char (H, U ) = a n 1 \u2208align(H,U ) p(a n 1 ) (1) = a n 1 \u2208align(H,U ) n i=1 p(a i |a i\u22121 i\u2212k ) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration Model", "sec_num": "4" }, { "text": "where a i is an aligned pair consisting of the i-th Hindi character h i and a sequence of 0 or more Urdu characters. Usually a Hindi character is aligned with one Urdu character, but some Hindi characters map to zero or two Urdu characters. 
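Restated in standard notation (our rendering of equations (1) and (2), with a_1^n = a_1 ... a_n denoting one character alignment of the pair (H, U)):

```latex
p_{\mathit{char}}(H,U)
  = \sum_{a_1^n \in \mathrm{align}(H,U)} p(a_1^n)
  = \sum_{a_1^n \in \mathrm{align}(H,U)} \prod_{i=1}^{n} p\bigl(a_i \mid a_{i-k}^{i-1}\bigr)
```

Each aligned unit is thus predicted from its k preceding units, i.e. a (k+1)-gram model over character pairs.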
The short vowels except the short vowel 'a' are always written in Hindi while in Urdu short vowels are usually not written. Hence, Hindi short vowels should be aligned to zero Urdu characters. align(H, U ) is the set of all possible alignments between the characters of U and H.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration Model", "sec_num": "4" }, { "text": "During transliteration we need to maximize P (H, U ) over all possible sequences U but we can not efficiently compute the sum over all possible different alignment pairs in equation 1. Therefore we resort to the Viterbi approximation and extract the most probable alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration Model", "sec_num": "4" }, { "text": "The parameter k in equation 2 indicates the amount of context used (e.g. if k = 2, we use a trigram model on character pairs). A good value of k for our transliteration system is 4. Table 5 (Section 5) shows the variation of results on different values of k.", "cite_spans": [], "ref_spans": [ { "start": 182, "end": 189, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Transliteration Model", "sec_num": "4" }, { "text": "The SRILM-Toolkit (Stolcke, 2002) was applied in the implementation. Add-one smoothing was used for unigrams and Kneser-Ney smoothing was used for order > 1.", "cite_spans": [ { "start": 18, "end": 33, "text": "(Stolcke, 2002)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Transliteration Model", "sec_num": "4" }, { "text": "We use a Hindi-Urdu parallel corpus taken from the EMILLE corpus 3 . In both Urdu and Hindi, there are cases where one character can be represented either as one Unicode character or as a combination of two Unicode characters. These characters are normalized to have only one representation. In Urdu, short vowels are represented with diacritics which are usually missing in written text. In order to keep the corpus consistent, all diacritics were removed from the Urdu corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Test Data", "sec_num": "5.1" }, { "text": "A Hindi news corpus of 5000 tokens (1330 types) was randomly selected from BBC News. The tokens that can be transliterated into Urdu were manually extracted and a test corpus of 819 transliteration pairs was obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Test Data", "sec_num": "5.1" }, { "text": "We automatically generate two word alignments using GIZA++ (Och and Ney, 2003) , and refine them using the grow-diag-final-and heuristic (Koehn et al., 2003) . We extracted a total of 107323 alignment pairs from the sentence aligned parallel corpus of 7007 sentences. The M-N and N-1 alignment pairs were ignored as 3 http://www.emille.lancs.ac.uk/ they are unlikely to be transliterations. Most of the 1-N alignment pairs are cases where the Urdu part of the alignment actually consist of two (or three) words which are sometimes written without a space because of lack of standard writing convention in Urdu. For example (can go ;", "cite_spans": [ { "start": 59, "end": 78, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF14" }, { "start": 137, "end": 157, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "5.2" }, { "text": "d ZA s@kt de ) is alternatively written as (can go ; d ZAs@kt de ) , i.e., without a space before the \"s\" sound. 
These are always written as a single token in Hindi. We drop 1-N alignments with gaps, but keep alignments with contiguous words. We refer to the word-aligned corpus generated from 1-1 and 1-N alignments as \"list of word pairs\" later on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "5.2" }, { "text": "Our first baseline is a phrase-based machine translation system (PSMT) for transliteration built using the Moses toolkit. We use the default settings but the distortion limit is set to zero (no reordering). Minimum error rate training (MERT) is used to optimize the parameters. The list of transliteration pairs is divided into 90% training and 10% development data (used for MERT).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based MT", "sec_num": "5.3.1" }, { "text": "We also compare our systems with three Hindi-Urdu transliteration systems, HUMT 4 , CRULP 5 and Malerkotla 6 (MAL), available on the internet. HUMT is based on finite state transducers. It implements a phoneme-based mapping scheme between Hindi and Urdu. The HUMT system is described in Section 2 (Malik et al., 2008) . CRULP is a rule-based transliterator which uses a direct orthographic mapping between Hindi and Urdu. Little information is available on the method of the Malerkotla transliterator. If there are two legal transliterations of a Hindi word, it transliterates it to the most frequent Urdu word. We suspect that Malerkotla may use a bilingual word list to override the basic transliteration scheme.", "cite_spans": [ { "start": 297, "end": 317, "text": "(Malik et al., 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "External Transliterators", "sec_num": "5.3.2" }, { "text": "Phrase-based MT: We first build a PSMT system on the list of word pairs. Due to the amount of noise in the training data, it shows 45.9% accuracy. The low score of the PSMT system supports our The PSMT is then trained on the transliteration pairs extracted using the automatic method and the rule-based method. The purpose of this experiment is to compare the quality of the extracted lists by building an identical model on them. The PSMT shows best accuracy on the transliteration pairs extracted using the rule-based method (Table 4). The rule-based extraction method is based on high precision and thus extracted fewer transliteration pairs than the the automatic method. The list extracted using the automatic method contains close transliterations as well, which are word pairs which only differ by one or two characters from correct transliterations. The close transliteration pairs help to learn transliteration information but also add noise to the system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5.4" }, { "text": "Our systems: We build two versions of our system, using the list of transliteration pairs extracted in Section 3.1 (AUTO) and using the list of transliteration pairs extracted in Section 3.2 (RULE). We use a context size of k = 4 (see eq.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5.4" }, { "text": "2) for our systems. The results of our transliteration system RULE with different context sizes are shown in Table 5 . The accuracy of the transliteration system is stable at context sizes greater than three. 
1 2 3 4 5 64.5% 76.3% 80.7% 81.6% 81.6% Table 5 : Accuracies of RULE for different context sizes AUTO shows an accuracy of 76% on the test data of 819 types as shown in Table 6 . It could not learn certain language specific phenomena due to data sparsity. The system had problems to learn the mapping of a Hindi character to an Urdu conjunct. The system could not learn the shadda cases (see Section 3.2). There are 18 types (2% of the test data) with shadda phenomena. AUTO correctly transliterates only 28% of these types. This might be due to the character aligner which can not capture the information where a Hindi character can be aligned to more than one Urdu character and vice versa. The other factor is the preprocess-AUTO RULE MAL CRULP HUMT 76% 81.6% 73.4% 69.8% 69.5% Table 6 : Accuracies of the joint model built on lists from AUTO and RULE, compared with the three baseline transliterators ing step where we delete diacritics and the character joiner from the Hindi word aligned corpus. The rule-based system (RULE) shows the best results of 81.6%. It obtains 100% accuracy in transliterating the shadda cases. Due to the inclusion of transliteration units in the training data (Section 3.2), it contains at least one entry of every transliteration unit in its training corpus.", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 116, "text": "Table 5", "ref_id": null }, { "start": 249, "end": 256, "text": "Table 5", "ref_id": null }, { "start": 378, "end": 385, "text": "Table 6", "ref_id": null }, { "start": 990, "end": 997, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "5.4" }, { "text": "Results of other transliteration systems: We test three other transliterators (HUMT, CRULP and MAL) on the test corpus of 819 types. The results are shown in Table 6 . The HUMT system performs worst with an accuracy of 69.5%. The HUMT system does not handle ambiguous characters as mentioned in Section 2. It maps each ambiguous Hindi character to the most frequent matching Urdu character without taking into account the transliteration context. CRULP has difficulty in disambiguating Hindi characters which map to several different Urdu characters. Table 11 shows some examples of such transliteration units. The ambiguous Hindi characters (Table 1) can not be predicted correctly on the basis of the neighboring characters but these Hindi characters (Table 11 ) can be predicted correctly by looking at the context. MAL mostly performs well on ambiguous Hindi characters. The results of MAL are discussed in detail in the next section.", "cite_spans": [], "ref_spans": [ { "start": 158, "end": 165, "text": "Table 6", "ref_id": null }, { "start": 551, "end": 560, "text": "Table 11", "ref_id": "TABREF5" }, { "start": 643, "end": 652, "text": "(Table 1)", "ref_id": null }, { "start": 754, "end": 763, "text": "(Table 11", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiments", "sec_num": "5.4" }, { "text": "In this section, we discuss the errors made by the transliteration systems by dividing the test data into different subclasses. The transliteration between Hindi and Urdu is strongly motivated by the language of origin and script of the word to be transliterated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion & Error Analysis", "sec_num": "6" }, { "text": "Proper nouns: The test corpus contains a large number of words borrowed from other languages which are differently transliterated to Hindi and to Urdu. 
Words borrowed from Arabic contain ambiguous characters which make the transliteration task more challenging. Proper nouns form 19% of the test corpus. In a second set of experiments, we evaluated only on the proper nouns from the test corpus. All five transliterators perform poorly in transliterating proper nouns as shown in Table 7 . Table 7 : Accuracies of AUTO, RULE and three baseline transliterators on proper nouns", "cite_spans": [], "ref_spans": [ { "start": 480, "end": 487, "text": "Table 7", "ref_id": null }, { "start": 490, "end": 497, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Discussion & Error Analysis", "sec_num": "6" }, { "text": "Most of the proper nouns were names borrowed from English and other languages. We observed that there is sometimes a difference between the pronunciation of borrowed words in Hindi and Urdu. Consider the English name \"Donald\": the character \"a\" in \"Donald\" is transliterated using a long vowel into Hindi as (don-Ald) and using a short vowel into Urdu as (don@ld). There are some foreign words which are directly transliterated in Hindi and borrowed from another language in Urdu. Consider the word \"America\" which is transliterated as (@\"mErIk@) in Hindi but borrowed from Arabic as (A@mrikA) in Urdu. Table 7 shows the results of our transliterators in comparison with other transliterators.", "cite_spans": [], "ref_spans": [ { "start": 603, "end": 610, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Discussion & Error Analysis", "sec_num": "6" }, { "text": "Ambiguous characters: The ambiguous characters frequently occur in Hindi text and are found in 52% of the types in the test corpus. There are four ambiguous characters as shown in Table 1 . For each such character, we extract the tokens containing this character from the test corpus. There were 15%, 19%, 13% and 3.8% occurrences of words with (h), (s), (t d) and (z) respectively. Table 8 shows the results of the three baseline transliterators on these four cases.", "cite_spans": [], "ref_spans": [ { "start": 180, "end": 187, "text": "Table 1", "ref_id": null }, { "start": 383, "end": 390, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Discussion & Error Analysis", "sec_num": "6" }, { "text": "MAL CRULP HUMT (h) 74.4% 60.8% 60% (s) 69.8% 62.9% 66% (t d) 76.4% 66% 66% (z) 32.3% 41.9%", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion & Error Analysis", "sec_num": "6" }, { "text": "3.2% Table 8 : Results of the baseline transliteration systems on words containing ambiguous characters Malerkotla shows poor results on words containing (z). These words form only 3.8% types of the test corpus and thus do not substantially affect the overall accuracy achieved by Malerkotla. Table 9 shows the results of Malerkotla and our transliteration systems. RULE performs best on all cases of ambiguous characters.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 8", "ref_id": null }, { "start": 293, "end": 300, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Discussion & Error Analysis", "sec_num": "6" }, { "text": "Sometimes, the use of several ambiguous characters in a string leads to two legal Urdu words as shown in Table 9 : Results of Malerkotla and our transliteration systems on words containing ambiguous characters Table 11 shows some examples. In the first column, the Hindi characters may map to any of the three Urdu characters in the same row. 
Sometimes, there is no phonological difference between the Urdu characters but conventionally they are written in one way or the other. Pronunciation differences between Hindi and Urdu speakers: Different pronunciations of Hindi and Urdu speakers also cause confusion for the transliteration systems. For example, the English word \"bazaar\" is written in Hindi as (bAd ZAr) and in Urdu as (bAzAr). The transliteration system has to disambiguate by mapping the character representing \"d Z\" in Hindi to either the \"d Z\" sound or the \"z\" sound in Urdu. Table 12 shows some of these examples.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 112, "text": "Table 9", "ref_id": null }, { "start": 210, "end": 219, "text": "Table 11", "ref_id": "TABREF5" }, { "start": 893, "end": 901, "text": "Table 12", "ref_id": null } ], "eq_spans": [], "section": "Discussion & Error Analysis", "sec_num": "6" }, { "text": "N-best analysis of RULE and AUTO: The transliterators show poor performance on words containing ambiguous characters. In the 20-best output, we find the correct solution for many words with ambiguous characters as shown in Table 13. However, if a word contains two ambigu- Table 12 : Pronunciation differences between Hindi and Urdu ous characters, it was difficult for the transliterator to transliterate it correctly. We hope that the tokens with ambiguous characters can be correctly transliterated using context by a statistical machine translation system. The unknown transliterations in the 20-best output will get lower scores from the language model as compared to known words. If two words in the 20-best output are known, the language model helps to choose the right output based on the word context.", "cite_spans": [], "ref_spans": [ { "start": 273, "end": 281, "text": "Table 12", "ref_id": null } ], "eq_spans": [], "section": "Discussion & Error Analysis", "sec_num": "6" }, { "text": "The 10-best and 20-best results of AUTO are competitive with RULE 7 . The automatically extracted list obtains high recall and thus contains close transliterations which RULE's list does not contain. Close transliterations are word pairs which only differ by one or two characters from correct transliterations. The close transliteration pairs are useful for the transliteration system as they provide information about transliteration units and help avoid the problem of data sparseness. However, the transliteration system also learns noise from them and might not produce correct 1-best output. Table 14 shows two examples which are correctly transliterated by AUTO but are wrongly transliterated by RULE in the 10-best output. These examples are difficult to transliterate as most of the characters are ambiguous and have more than one possible transliteration. The system built on AUTO is able to transliterate them correctly as it contains more instances of infrequent ambiguous characters. For the incorporation of a transliteration model in a machine translation system, AUTO would be a better option as it is language independent and has better 10-best and 20-best scores.", "cite_spans": [], "ref_spans": [ { "start": 598, "end": 606, "text": "Table 14", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Discussion & Error Analysis", "sec_num": "6" }, { "text": "We have implemented a joint source channel model to transliterate Hindi words into Urdu 7 We also aligned AUTO with the edit-distance based aligner to verify that alignments differences were not important. 
The results dropped a little less than 1-point for 1-best, 10-best and 20-best, which is still better than RULE for 10best and 20-best, so the differences in alignment did not unduly influence the results. AUTO RULE 1-Best 76% 81.6% 10-Best 93.8% 91.5% 20-Best 95.1% 92.3% Table 13 : Comparison of 1-Best, 10-Best and 20-Best outputs of our transliteration systems Table 14 : In the 10-best output, these examples are correctly transliterated by AUTO but are wrongly transliterated by RULE words. We have used two approaches to extract transliteration pairs from a parallel corpus of Hindi/Urdu -an unsupervised transliteration mining method and a method based on handcrafted rules. We then built models on the automatically aligned orthographic transliteration units of the extracted Hindi/Urdu transliteration pairs. Our best transliteration system achieved an accuracy of 81.6% which is 8% better than the best of three other systems. The 10-best and 20-best results of our transliteration system built on the automatically extracted transliteration pairs showed that it is suitable for integration with machine translation which will allow the use of translation context to choose the best transliteration (Hermjakob et al., 2008; Durrani et al., 2010) .", "cite_spans": [ { "start": 88, "end": 89, "text": "7", "ref_id": null }, { "start": 1416, "end": 1440, "text": "(Hermjakob et al., 2008;", "ref_id": "BIBREF5" }, { "start": 1441, "end": 1462, "text": "Durrani et al., 2010)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 479, "end": 487, "text": "Table 13", "ref_id": null }, { "start": 571, "end": 579, "text": "Table 14", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "SAMPA and XSAMPA are used to represent the IPA symbols using 7-bit printable ASCII characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We were unable to get character alignments from g2p. We use a separate character aligner to align the list of transliteration pairs at the character level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.puran.info/HUMT/HUMT.aspx 5 http://www.crulp.org/software/langproc/h2utransliterator.html 6 http://www.malerkotla.org/Transh2u.aspx", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors wish to thank the anonymous reviewers for their comments. Hassan Sajjad and Nadir Durrani were funded by the Higher Education Commission (HEC) of Pakistan. Helmut Schmid was supported by Deutsche Forschungsgemeinschaft grant SFB 732. Alexander Fraser was funded by Deutsche Forschungsgemeinschaft grant Models of Morphosyntax for Statistical Machine Translation. This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. 
This publication only reflects the authors' views.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Machine transliteration of names in Arabic text", "authors": [ { "first": "Yaser", "middle": [], "last": "Al-Onaizan", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2002, "venue": "ACL Workshop on Computational Approaches to Semitic Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaser Al-Onaizan and Kevin Knight. 2002. Machine transliteration of names in Arabic text. In ACL Workshop on Computational Approaches to Semitic Languages, Morristown, NJ, USA.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Jointsequence models for grapheme-to-phoneme conversion", "authors": [ { "first": "Maximilian", "middle": [], "last": "Bisani", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2008, "venue": "Speech Communication", "volume": "", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximilian Bisani and Hermann Ney. 2008. Joint- sequence models for grapheme-to-phoneme conver- sion. Speech Communication, 50(5).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Hindi-to-Urdu machine translation through transliteration", "authors": [ { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nadir Durrani, Hassan Sajjad, Alexander Fraser, and Helmut Schmid. 2010. Hindi-to-Urdu machine translation through transliteration. In Proceedings of the 48th Annual Conference of the Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A modified joint source-channel model for transliteration", "authors": [ { "first": "Asif", "middle": [], "last": "Ekbal", "suffix": "" }, { "first": "Sudip", "middle": [], "last": "Kumar Naskar", "suffix": "" }, { "first": "Sivaji", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the COLING/ACL poster sessions", "volume": "", "issue": "", "pages": "191--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asif Ekbal, Sudip Kumar Naskar, and Sivaji Bandy- opadhyay. 2006. A modified joint source-channel model for transliteration. In Proceedings of the COLING/ACL poster sessions, pages 191-198, Syd- ney, Australia. Association for Computational Lin- guistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Aligning Hindi and Urdu bilingual corpora for robust projection. Masters project dissertation", "authors": [ { "first": "Swati", "middle": [], "last": "Gupta", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swati Gupta. 2004. Aligning Hindi and Urdu bilin- gual corpora for robust projection. 
Masters project dissertation, Department of Computer Science, Uni- versity of Sheffield.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Name translation in statistical machine translation -learning when to transliterate", "authors": [ { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "389--397", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ulf Hermjakob, Kevin Knight, and Hal Daum\u00e9 III. 2008. Name translation in statistical machine trans- lation -learning when to transliterate. In Proceed- ings of ACL-08: HLT, pages 389-397, Columbus, Ohio. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Computer-coding the IPA: a proposed extension of SAMPA", "authors": [ { "first": "J", "middle": [ "C" ], "last": "Wells", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J C. Wells. 1995. Computer-coding the IPA: a pro- posed extension of SAMPA. University College, London.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Hindi to Urdu conversion: beyond simple transliteration", "authors": [ { "first": "Bushra", "middle": [], "last": "Jawaid", "suffix": "" }, { "first": "Tafseer", "middle": [], "last": "Ahmed", "suffix": "" } ], "year": 2009, "venue": "Conference on Language and Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bushra Jawaid and Tafseer Ahmed. 2009. Hindi to Urdu conversion: beyond simple transliteration. In Conference on Language and Technology 2009, La- hore, Pakistan.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatic transliteration of proper nouns from Arabic to English", "authors": [ { "first": "M", "middle": [], "last": "Mehdi", "suffix": "" }, { "first": "Fred", "middle": [], "last": "Kashani", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Popowich", "suffix": "" }, { "first": "", "middle": [], "last": "Sarkar", "suffix": "" } ], "year": 2007, "venue": "Second Workshop on Computational Approaches to Arabic Script-based Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mehdi M. Kashani, Fred Popowich, and Anoop Sarkar. 2007. Automatic transliteration of proper nouns from Arabic to English. In Second Workshop on Computational Approaches to Arabic Script-based Languages, Stanford University, USA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Machine transliteration", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "4", "pages": "599--612", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight and Jonathan Graehl. 1998. Ma- chine transliteration. 
Computational Linguistics, 24(4):599-612.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Statistical phrase-based translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference", "volume": "", "issue": "", "pages": "127--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Pro- ceedings of the Human Language Technology and North American Association for Computational Lin- guistics Conference, pages 127-133, Edmonton, Canada.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A joint source-channel model for machine transliteration", "authors": [ { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhang", "middle": [], "last": "Min", "suffix": "" }, { "first": "Su", "middle": [], "last": "Jian", "suffix": "" } ], "year": 2004, "venue": "ACL '04: Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "159--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haizhou Li, Zhang Min, and Su Jian. 2004. A joint source-channel model for machine transliteration. In ACL '04: Proceedings of the 42nd Annual Meet- ing on Association for Computational Linguistics, pages 159-166, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Hindi Urdu machine transliteration using finite-state transducers", "authors": [ { "first": "M G Abbas", "middle": [], "last": "Malik", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Boitet", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M G Abbas Malik, Christian Boitet, and Pushpak Bhat- tacharyya. 2008. Hindi Urdu machine translitera- tion using finite-state transducers. In Proceedings of the 22nd International Conference on Computa- tional Linguistics, Manchester, UK.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A hybrid model for Urdu Hindi transliteration", "authors": [ { "first": "M G Abbas", "middle": [], "last": "Malik", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Boitet", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Named Entities Workshop, ACL-IJCNLP, Suntec", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M G Abbas Malik, Laurent Besacier, Christian Boitet, and Pushpak Bhattacharyya. 2009. A hybrid model for Urdu Hindi transliteration. 
In Proceedings of the 2009 Named Entities Workshop, ACL-IJCNLP, Sun- tec, Singapore.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "J", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Transliteration alignment", "authors": [ { "first": "Vladimir", "middle": [], "last": "Pervouchine", "suffix": "" }, { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th IJCNLP of the AFNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir Pervouchine, Haizhou Li, and Bo Lin. 2009. Transliteration alignment. In Proceedings of the 47th Annual Meeting of the Association for Com- putational Linguistics and the 4th IJCNLP of the AFNLP, Suntec, Singapore.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An algorithm for unsupervised transliteration mining with an application to word alignment", "authors": [ { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hassan Sajjad, Alexander Fraser, and Helmut Schmid. 2011. An algorithm for unsupervised translitera- tion mining with an application to word alignment. In Proceedings of the 49th Annual Conference of the Association for Computational Linguistics, Port- land, USA.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Translating names and technical terms in Arabic text", "authors": [ { "first": "Bonnie", "middle": [ "G" ], "last": "Stalls", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the COLING/ACL Workshop on Computational Approches to Semitic Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonnie G. Stalls and Kevin Knight. 1998. Translating names and technical terms in Arabic text. In Pro- ceedings of the COLING/ACL Workshop on Compu- tational Approches to Semitic Languages.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "SRILM -an extensible language modeling toolkit", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Intl. Conf. Spoken Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke. 2002. SRILM -an extensible lan- guage modeling toolkit. In Intl. Conf. 
Spoken Language Processing, Denver, Colorado.", "links": null } }, "ref_entries": { "TABREF1": { "type_str": "table", "html": null, "content": "