{ "paper_id": "N15-1043", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:33:40.755304Z" }, "title": "Latent Domain Word Alignment for Heterogeneous Corpora", "authors": [ { "first": "Hoang", "middle": [], "last": "Cuong", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Amsterdam Science", "location": { "addrLine": "Park 107", "postCode": "1098 XG", "settlement": "Amsterdam", "country": "The Netherlands" } }, "email": "" }, { "first": "Khalil", "middle": [], "last": "Sima'an", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Amsterdam Science", "location": { "addrLine": "Park 107", "postCode": "1098 XG", "settlement": "Amsterdam", "country": "The Netherlands" } }, "email": "k.simaan@uva.nl" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This work focuses on the insensitivity of existing word alignment models to domain differences, which often yields suboptimal results on large heterogeneous data. A novel latent domain word alignment model is proposed, which induces domain-conditioned lexical and alignment statistics. We propose to train the model on a heterogeneous corpus under partial supervision, using a small number of seed samples from different domains. The seed samples allow estimating sharper, domain-conditioned word alignment statistics for sentence pairs. Our experiments show that the derived domain-conditioned statistics, once combined together, produce notable improvements both in word alignment accuracy and in translation accuracy of their resulting SMT systems.", "pdf_parse": { "paper_id": "N15-1043", "_pdf_hash": "", "abstract": [ { "text": "This work focuses on the insensitivity of existing word alignment models to domain differences, which often yields suboptimal results on large heterogeneous data. A novel latent domain word alignment model is proposed, which induces domain-conditioned lexical and alignment statistics. We propose to train the model on a heterogeneous corpus under partial supervision, using a small number of seed samples from different domains. The seed samples allow estimating sharper, domain-conditioned word alignment statistics for sentence pairs. Our experiments show that the derived domain-conditioned statistics, once combined together, produce notable improvements both in word alignment accuracy and in translation accuracy of their resulting SMT systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word alignment currently constitutes the basis for phrase extraction and reordering in phrase-based systems, and its statistics provide lexical parameters used for smoothing the phrase pair estimates. 
For over two decades since IBM models (Brown et al., 1993) and the HMM alignment model (Vogel et al., 1996) , word alignment remains an active research line, e.g., see recent work (Simion et al., 2013; Tamura et al., 2014; Chang et al., 2014) .", "cite_spans": [ { "start": 239, "end": 259, "text": "(Brown et al., 1993)", "ref_id": "BIBREF2" }, { "start": 288, "end": 308, "text": "(Vogel et al., 1996)", "ref_id": "BIBREF38" }, { "start": 381, "end": 402, "text": "(Simion et al., 2013;", "ref_id": "BIBREF34" }, { "start": 403, "end": 423, "text": "Tamura et al., 2014;", "ref_id": "BIBREF37" }, { "start": 424, "end": 443, "text": "Chang et al., 2014)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "During the past years we witnessed an increasing need to collect and use large heterogeneous parallel corpora from different domains and sources, e.g., News, Wikipedia, Parliament Proceedings. It is tacitly assumed that assembling a larger corpus should improve a phrase-based system coverage and performance. Recent work (Sennrich et al., 2013; Carpuat et al., 2014; Cuong and Sima'an, 2014b; Kirchhoff and Bilmes, 2014; Cuong and Sima'an, 2014a) shows that this is not necessarily true as phrase translations as well as (bi-and monolingual) word co-occurrence statistics could differ across domains. This suggests that the word alignment quality obtained from IBM and HMM alignment models might also be affected in heterogeneous corpora.", "cite_spans": [ { "start": 322, "end": 345, "text": "(Sennrich et al., 2013;", "ref_id": "BIBREF33" }, { "start": 346, "end": 367, "text": "Carpuat et al., 2014;", "ref_id": "BIBREF3" }, { "start": 368, "end": 393, "text": "Cuong and Sima'an, 2014b;", "ref_id": "BIBREF8" }, { "start": 394, "end": 421, "text": "Kirchhoff and Bilmes, 2014;", "ref_id": "BIBREF24" }, { "start": 422, "end": 447, "text": "Cuong and Sima'an, 2014a)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Intuitively, in heterogeneous data certain words are present across many domains, whereas others are more specific to few domains. This suggests that the translation probabilities for words will be as fractioned as the diversity of its translations across the domains. Furthermore, because the IBM and HMM alignment models use context-insensitive conditional probabilities, in heterogeneous corpora the estimates of these probabilities will be aggregated over different domains. Both issues could lead to suboptimal word alignment quality. Surprisingly, the insensitivity of the existing IBM and HMM alignment models to domain differences has not received much attention thus far (see the study of Bach et al. (2008) and Gao et al. (2011) for reference in the literature). We conjecture that this is because it is not fully clear how to define what constitutes a (sub)-domain. In this paper we propose to exploit the contrast between the alignment statistics in a handful of seed samples from different domains in order to induce domain-conditioned probabilities for each sentence pair in the heterogeneous corpus. Crucially, some sentence pairs will be more similar to a seed domain than others, whereas some sentence pairs might be dissimilar to all seed domains. The number and choice of seed domains depends largely on the available resources but intuitively these seed domains are chosen to be relevant to parts of the heterogeneous corpus. 
A small number of such seeds can be expected to notably improve word alignment accuracy. In fact, a single seed sample already allows us to exploit the contrast between two parts in the corpus: similar or dissimilar to the seed data.", "cite_spans": [ { "start": 698, "end": 716, "text": "Bach et al. (2008)", "ref_id": "BIBREF0" }, { "start": 721, "end": 738, "text": "Gao et al. (2011)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Considering the small seed samples as partial supervision, in this paper we explore the question: how to obtain better word alignment in a heterogeneous, mix-of-domains corpus? We present a novel latent domain HMM alignment model, which aims to tighten the probability estimates of the generative alignment process of a sentence pair, and of the probability estimates of the sentence pair itself for a specific domain. We also present an accompanying training regime guided by partial supervision using the seed samples, exploiting the contrast between the domain-conditioned alignment statistics in these samples. This way we aim for an alignment model that is more domain-sensitive than the original HMM alignment model. Once the domainconditioned statistics are induced, we discuss how to combine them together to express the probability of a sentence pair as a mixture over specific domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Finally, we report experimental results over heterogeneous corpora of 1M, 2M and 4M sentence pairs, where we are provided domain information for different samples of 10%, 5% and 2.5% of the heterogeneous data respectively. A large number of experiments are reported, showing that the latent domain HMM model produces notable improvements in word alignment accuracy over the original HMM alignment model. Furthermore, the translation accuracy of the resulting SMT systems is significantly improved across four different translation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we briefly review the HMM alignment model (Vogel et al., 1996) . The generative story of the model is shown in Figure 1 . The latent states take values from the target language words and generate source language words.", "cite_spans": [ { "start": 59, "end": 79, "text": "(Vogel et al., 1996)", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 128, "end": 136, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "HMM Alignment Model", "sec_num": "2" }, { "text": "Formally, we use e = (e 1 , . . . , e I ) to denote the target sentence with length I and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM Alignment Model", "sec_num": "2" }, { "text": "f = (f 1 , . . . 
, f J )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM Alignment Model", "sec_num": "2" }, { "text": "Figure 1: HMM alignment model, with an observed layer (source words f_{j-1}, f_j, f_{j+1}) and a latent alignment layer (alignments a_{j-1}, a_j, a_{j+1} over target words).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM Alignment Model", "sec_num": "2" }, { "text": "to denote the source sentence with length J. For an alignment a = (a_1, . . . , a_J) of a sentence pair (e, f), the model factors P(f, a | e) into the word translation and transition probabilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM Alignment Model", "sec_num": "2" }, { "text": "P(f, a | e) = \\prod_{j=1}^{J} P(f_j | e_{a_j}) P(a_j | a_{j-1}). (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM Alignment Model", "sec_num": "2" }, { "text": "Here, P(f_j | e_{a_j}) represents the word translation probabilities and P(a_j | a_{j-1}) 1 represents the transition probabilities between positions. Note that P(a_j | a_{j-1}) depends only on the distance (a_j \u2212 a_{j-1}). Note also that the first-order dependency model extends the uniform and zero-order dependency models of IBM models 1 and 2, respectively. In this work, we explicitly model distances in the range \u00b15. Note that null links are also explicitly added in our implementation, following Och and Ney (2003) and Graca et al. (2010) .", "cite_spans": [ { "start": 529, "end": 547, "text": "Och and Ney (2003)", "ref_id": "BIBREF29" }, { "start": 552, "end": 571, "text": "Graca et al. (2010)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "HMM Alignment Model", "sec_num": "2" }, { "text": "Once the HMM alignment model is trained, the most probable alignment \u00e2 for each sentence pair can be computed as \u00e2 = argmax_a P(f, a | e). Here, the search problem can be solved by the Viterbi algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM Alignment Model", "sec_num": "2" }, { "text": "Because the heterogeneous data contains a mix of diverse domains, the induced statistics derived from word alignment models reflect translation preferences aggregated over these domains. In this sense, they can be considered domain-confused statistics (Cuong and Sima'an, 2014a) . This work thus focuses on more representative statistics: the domain-conditioned word alignment statistics, i.e., the statistics with respect to each of the diverse domains.", "cite_spans": [ { "start": 252, "end": 278, "text": "(Cuong and Sima'an, 2014a)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Domain HMM Alignment Model", "sec_num": "3" }, { "text": "By introducing a latent variable D representing the domains of the heterogeneous data, we aim to learn the D-conditioned word alignment model P(f, a | e, D). 
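To make these quantities concrete before turning to the domain-conditioned model, the following is a minimal, illustrative sketch (not the paper's implementation) of how the likelihood of Eq. (1) and the Viterbi alignment can be computed by dynamic programming. The translation table `t_prob`, the distance-based transition table `d_prob`, the uniform treatment of the first position and the smoothing constant are assumptions made for this sketch, and null links are ignored.

```python
import numpy as np

def hmm_align(f_sent, e_sent, t_prob, d_prob, eps=1e-10):
    """Forward (sum) and Viterbi (max) recursions for the HMM alignment model
    of Eq. (1): P(f, a | e) = prod_j P(f_j | e_{a_j}) * P(a_j | a_{j-1}).

    t_prob[(f, e)] -- hypothetical word translation table P(f | e)
    d_prob[d]      -- hypothetical transition table for the jump d = a_j - a_{j-1}
    """
    I, J = len(e_sent), len(f_sent)
    # Initialization: assume a uniform start over the I target positions.
    alpha = np.array([t_prob.get((f_sent[0], e_sent[i]), eps) / I for i in range(I)])
    delta = alpha.copy()                       # Viterbi scores
    back = np.zeros((J, I), dtype=int)         # back-pointers
    for j in range(1, J):
        new_alpha, new_delta = np.zeros(I), np.zeros(I)
        for i in range(I):
            emit = t_prob.get((f_sent[j], e_sent[i]), eps)
            jumps = np.array([d_prob.get(i - ip, eps) for ip in range(I)])
            new_alpha[i] = emit * np.dot(alpha, jumps)     # sum over previous states
            scores = delta * jumps
            back[j, i] = int(np.argmax(scores))            # best previous state
            new_delta[i] = emit * scores[back[j, i]]
        alpha, delta = new_alpha, new_delta
    # Recover the Viterbi alignment by following the back-pointers.
    a_hat = [int(np.argmax(delta))]
    for j in range(J - 1, 0, -1):
        a_hat.append(int(back[j, a_hat[-1]]))
    return float(alpha.sum()), list(reversed(a_hat))       # sum_a P(f, a | e), argmax_a
```

The only difference between the two recursions is sum versus max: the sum gives the marginal over all alignments that training needs, while the max with back-pointers gives the Viterbi alignment used at test time.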
2 Relying on the HMM alignment model, our latent domain HMM alignment model factors P (f, a| e, D) into the domain-conditioned word translation and transition probabilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Domain HMM Alignment Model", "sec_num": "3" }, { "text": "f j\u22121 f j f j+1 a j\u22121 a j a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Domain HMM Alignment Model", "sec_num": "3" }, { "text": "P (f, a|e, D) = J j=1 P (f j |e a j , D)P (a j |a j\u22121 , D).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Domain HMM Alignment Model", "sec_num": "3" }, { "text": "(2) The generative story of the model is shown in ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Domain HMM Alignment Model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (f j |e a j , D) = P (f j |e a j )P (D|f j , e a j ) f P (f j |e a j )P (D|f j , e a j ) ,", "eq_num": "(3)" } ], "section": "Latent Domain HMM Alignment Model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (a j |a j\u22121 , D) = P (a j |a j\u22121 )P (D|a j , a j\u22121 ) a j P (a j |a j\u22121 )P (D|a j , a j\u22121 ) .", "eq_num": "(4)" } ], "section": "Latent Domain HMM Alignment Model", "sec_num": "3" }, { "text": "With an additional latent domain layer, it becomes crucial to train the model in an efficient way. As suggested by Eq. 3 and 4, we could simplify training by breaking up the estimation process into two steps. That is, we train alignment parameters, P (\u2022| \u2022) or domain parameters, P (D| \u2022, \u2022) first, hold them fixed before training the other kind of the parameters. 3 Instead, in this work we design an algorithm that trains both of them simultaneously via training domain-conditioned parameters P (", "cite_spans": [ { "start": 365, "end": 366, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Latent Domain HMM Alignment Model", "sec_num": "3" }, { "text": "\u2022| , \u2022, D) directly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Domain HMM Alignment Model", "sec_num": "3" }, { "text": "2 Note that P (f, a| e, D) contains their former P (f, a| e) as special case, i.e., P (f, a| e, D) = P (f, a| e)P (D| f, a, e) f a P (f, a| e)P (D| f, a, e) . 3 This training scheme is in fact applied in the work of Cuong and Sima'an (2014a), however, for a different purpose.", "cite_spans": [ { "start": 159, "end": 160, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Latent Domain HMM Alignment Model", "sec_num": "3" }, { "text": "Basically, our model can be viewed as having a set,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "\u0398 of N subsets of domain-conditioned pa- rameters, \u0398 D for N different domains, i.e., \u0398 = {\u0398 D 1 , . . . , \u0398 D N }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "In this work, to simplify the learning problem we assume that the domains are very different from each other. If this assumption does not hold, the learning problem would shift from single-label learning to multiple-label learning. 
We leave this extension for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "Our training procedure seeks the parameters \u0398 that maximize the log-likelihood L of the data:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "L = \\sum_{f, e} \\log \\sum_{D} \\sum_{a} P_{\u0398_D}(f, e, D, a).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "There is, however, no closed-form solution for maximizing L, and EM comes as an alternative solution to fit the model. EM maximizes L via block-coordinate ascent on a \"free energy\" lower bound F(q, \u0398) (Neal and Hinton, 1999), using an auxiliary distribution q over both latent variables:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "F(q, \u0398) = \\sum_{f, e} \\sum_{D} \\sum_{a} q(a, D) \\log \\frac{P_{\u0398_D}(a, D, f, e)}{q(a, D)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "In the E-step of the EM algorithm, we fix \u0398 and aim to find the distribution q* that maximizes F(q, \u0398) over the heterogeneous data. Simple algebra leads to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "F(q, \u0398) = \\sum_{f, e} \\log P_{\u0398}(f, e) \u2212 KL[q || P_{\u0398_D}(a, D | f, e)],", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "where KL[\u2022 || \u2022] is the Kullback-Leibler divergence between two distributions. The distribution q* can thus be derived as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "q* = argmax_q F(q, \u0398) = argmin_q KL[q || P_{\u0398_D}(a, D | f, e)] = \\frac{P_{\u0398_D}(f, a | e, D)}{\\sum_{a} P_{\u0398_D}(f, a | e, D)} P_{\u0398_D}(D | f, e).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "Here, P_{\u0398_D}(D | f, e) aims to exploit the contrast between the domain-sensitive alignment statistics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "Assigning higher probability to one domain forces lower probability assignment to the other domains. Note that P_{\u0398_D}(f, a | e, D) is given in Eq. 2 and \\sum_{a} P_{\u0398_D}(f, a | e, D) can be computed efficiently using dynamic programming. 4 Meanwhile, P_{\u0398_D}(D | f, e) can be derived by Bayes' rule, i.e.,", "cite_spans": [ { "start": 224, "end": 225, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "P_{\u0398_D}(D | f, e) \\propto P_{\u0398_D}(f, e | D) P_{\u0398_D}(D).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "Here, the estimation of the domain prior parameters is easy: P_{\u0398_D}(D) \\propto \\sum_{f, e} P_{\u0398_D}(D | f, e). The estimation of P_{\u0398_D}(f, e | D) raises the task of defining a generative process for every sentence pair in the heterogeneous data with respect to a specific domain. Following (Cuong and Sima'an, 2014b), we factor it into two kinds of models in a symmetrized strategy:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "P_{\u0398_D}(f, e | D) \\propto P_{\u0398_D}(e | D) P_{\u0398_D}(f | e, D) + P_{\u0398_D}(f | D) P_{\u0398_D}(e | f, D).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "Basically, P_{\u0398_D}(\u2022 | \u2022, D) can be thought of as the domain-conditioned translation models, aiming to model how well a target/source sentence is generated from a source/target sentence with respect to a domain. 5 Meanwhile, P_{\u0398_D}(\u2022 | D) can be thought of as the domain-conditioned language models (LMs), aiming to model how fluent a source/target sentence is with respect to a domain. For simplicity, once the domain-conditioned LMs are trained, they stay fixed during training, i.e., the LM probabilities are not parameters in our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "In the M-step of the EM algorithm, we fix the derived q* and aim to find the parameter set \u0398* that maximizes F(q, \u0398) over the data. This can easily be done by using q* to softly fill in the values of a and D when estimating the model parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "In summary, the model has three kinds of parameters: word translation, word transition, and domain prior parameters. We now summarize the training by presenting the pseudocode. Figure 3 sketches one EM iteration: in the E-step, for every domain D \u2208 {D_1, . . . , D_N} we collect the expected counts c(D; f, e), c(f | e; f, e, D) and c(i | i'; f, e, D) under the current estimates P^{(c)}; in the M-step, these counts are renormalized into the re-estimates P^{(+)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pseudocode", "sec_num": null }, { "text": "First, we present the expected count notations with respect to domains for the parameters. We use c(f | e; f, e, D) to denote the expected count that word e aligns to word f . We use c(i | i'; f, e, D) to denote the expected count that two consecutive source words j and j-1 align to target words i and i' respectively, i.e., j aligns to i and j-1 aligns to i'. Finally, we also use c(D; f, e) to denote the expected count of the domain priors. Note that all the expected counts are collected in the (f | e) translation direction. 
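As a rough illustration of how the expected counts just introduced are accumulated and renormalized, the sketch below shows one simplified E/M pass for the latent domain model. It is only a sketch under several assumptions: the posteriors `post_domain` (standing in for P(D | f, e)) and `post_link` (standing in for the per-link posteriors under a domain) are assumed to be computed elsewhere, e.g., by forward-backward under the current parameters; transition counts, null links, and the fixed domain priors of the seed sentence pairs are omitted; all names are hypothetical.

```python
from collections import defaultdict

def em_pass(corpus, domains, post_domain, post_link):
    """One simplified accumulation/re-estimation pass for the latent domain
    model (word translation and domain prior parameters only).

    post_domain(f_sent, e_sent, D)     -> P(D | f, e)
    post_link(f_sent, e_sent, D, j, i) -> P(a_j = i | f, e, D)
    Both are E-step quantities under the current parameters P^(c).
    """
    c_trans = defaultdict(float)   # c(f | e; f, e, D): expected translation counts
    c_prior = defaultdict(float)   # c(D; f, e) summed over the corpus

    for f_sent, e_sent in corpus:
        for D in domains:
            w = post_domain(f_sent, e_sent, D)   # every count is weighted by P(D | f, e)
            c_prior[D] += w
            for j, f in enumerate(f_sent):
                for i, e in enumerate(e_sent):
                    c_trans[(f, e, D)] += w * post_link(f_sent, e_sent, D, j, i)
            # Transition counts c(i | i'; f, e, D) would be collected analogously
            # from the pairwise posteriors P(a_j = i, a_{j-1} = i' | f, e, D).

    # M-step: renormalize the expected counts into the re-estimates P^(+).
    denom = defaultdict(float)
    for (f, e, D), c in c_trans.items():
        denom[(e, D)] += c
    t_prob = {(f, e, D): c / denom[(e, D)] for (f, e, D), c in c_trans.items()}
    total = sum(c_prior.values())
    d_prior = {D: c / total for D, c in c_prior.items()}
    return t_prob, d_prior
```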
Figure 3 represents the pseudocode.", "cite_spans": [], "ref_spans": [ { "start": 518, "end": 526, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Pseudocode", "sec_num": null }, { "text": "We now discuss remaining issues on how to guide the learning with partial supervision, i.e., how to use the given domain information of seed samples to guide the learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning with Partial Supervision", "sec_num": "4" }, { "text": "The values of D \u2208 [1..(N + 1)] depends on the N available seed samples plus the so-called \"out-domain,\" i.e., the part of the heterogeneous data that is dissimilar to all of the N sample domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Number of Domains", "sec_num": null }, { "text": "Parameter Initialization We first discuss how to initialize the domain prior parameters. If a sentence pair f, e belongs to a sample with a pre-specified domain D i , we initialize P (D i | f, e) close to 1, and, P (D i | f, e) close to 0 for other domains i , i = i. Furthermore, we uniformly create the domain prior parameters for the rest of sentence pairs. Uniform initialization for the domain-conditioned alignment parameters is also a reasonable option. Nevertheless, a more effective way is to make use of the domain-specific seed samples and the pool of the rest sentence pairs in the heterogeneous data. 6 That is, we train the model on each of the samples, assign-ing the derived probabilities as the initialization for their corresponding domain-conditioned alignment parameters. In our implementation, one EM iteration is usually dedicated for this. It should be noted that we ignore the domain prior parameters in the model during the period.", "cite_spans": [ { "start": 614, "end": 615, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Number of Domains", "sec_num": null }, { "text": "Parameter Constraints During training, it would be also necessary to keep the domain prior parameters fixed for all sentence pairs that belong to seed samples. This can be thought of as the constraints derived from the partial knowledge, guiding the learning to a desirable parameter space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Number of Domains", "sec_num": null }, { "text": "We now discuss how to train the domain-conditioned LMs with partial supervision. It would be reasonable to use the domain-specific seed samples to train their exemplifying domain-conditioned LMs, and the pool of the rest sentence pairs to train the out-domain LMs. Nevertheless, the out-domain LMs trained on such a big corpus could dominate the other domain-conditioned LMs. Following Cuong and Sima'an (2014b), we rather create a \"pseudo\" outdomain sample to train the out-domain LMs, i.e., the creation is via an inspired burn-in period. In brief, an EM iteration is dedicated just to compute P (D OU T | f, e) for all sentences, ranking them and select a small subset with highest score as the (on the fly) pseudo out-domain sample.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain-conditioned LMs training", "sec_num": null }, { "text": "Note that our partial learning framework is very simple. There are various advanced learning framework that are also applicable with the partial supervision, e.g., Posterior Regularization . 
This leaves much space for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain-conditioned LMs training", "sec_num": null }, { "text": "At test time, assigning each sentence pair to a single most likely domain (hard decision) is likely to result in sub-optimal performance. 7 Instead we average over domains (soft decision) while predicting the translation. Formally for each sentence pair, e, f , we can find their best Viterbi alignment,\u00e2 as 7 Later experiments on word alignment will confirm this.", "cite_spans": [ { "start": 308, "end": 309, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Domain-conditioned Decoding", "sec_num": "5" }, { "text": "follows: Here, we derive the last equation by applying Bayes' rule to P (D| e), i.e., P (D| e) \u221d P (e| D)P (D). Interestingly, our Viterbi decoding now relies on a mix of domain-conditioned statistics for each sentence pair. The computing of term D (a) for all possible alignments, a, however, is intractable, making the search problem difficult. Inspired by Liang et al. (2006) , we opt instead for a heuristic objective function as follows 8 : D) . 5Here, note that p is a lower bound for p, when 0 \u2264 p \u2264 1, according to Jensen's inequality. With Eq. 5, it is straightforward to design a dynamic programming algorithm to decoding, e.g., the Viterbi algorithm. In practice, we observe that the approximation yields good results. Later experiments on word alignment will present this in detail.", "cite_spans": [ { "start": 359, "end": 378, "text": "Liang et al. (2006)", "ref_id": "BIBREF27" }, { "start": 446, "end": 448, "text": "D)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Domain-conditioned Decoding", "sec_num": "5" }, { "text": "a = argmax", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain-conditioned Decoding", "sec_num": "5" }, { "text": "a = argmax a D P (f, a| e, D) P (e| D)P (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain-conditioned Decoding", "sec_num": "5" }, { "text": "In the following experiments, we use three heterogeneous English-Spanish corpora consisting of 1M , 2M and 4M sentence pairs respectively. These corpora combine two parts. The first part respectively 0.7M , 1.7M and 3.7M is collected from multiple domains and resources including EuroParl (Koehn, 2005) , Common Crawl, United Nation, News Commentary. The second part consists of three domainexemplifying samples consisting of roughly 100K sentence pairs for each one (total 300K). Each of these three samples (manually collected by a commercial partner) exemplifies a specific domain related to Legal, Hardware and Pharmacy.", "cite_spans": [ { "start": 289, "end": 302, "text": "(Koehn, 2005)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6" }, { "text": "Outlook In Section 7 we examine the word alignment yielded by the HMM alignment model and our latent domain HMM alignment model. In Section 8 we proceed further to examine the translation produced by derived SMT systems. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6" }, { "text": "For alignment accuracy evaluation, we use a data set of 100 sentence pairs with their \"golden\" alignment from Graca et al. (2008) . Here, the golden alignment consists of sure links (S) and possible links (P ) for each sentence pair. 
Counting the set of generated alignment links (A), we report the word alignment accuracy by precision ( |A \u2229 P| / |A| ), recall ( |A \u2229 S| / |S| ), and alignment error rate (AER) ( 1 \u2212 (|A \u2229 P| + |A \u2229 S|) / (|A| + |S|) ) (Och and Ney, 2003) . 9 For all experiments, we use the same training configuration for both the baseline and the latent domain alignment model: 5 iterations of IBM model 1 (respectively, its latent domain variant) and 3 iterations of the HMM alignment model (respectively, its latent domain variant). For evaluation, we first align the sentence pairs in both directions and then symmetrize them using the grow-diag-final heuristic (Koehn et al., 2003) .", "cite_spans": [ { "start": 110, "end": 129, "text": "Graca et al. (2008)", "ref_id": "BIBREF20" }, { "start": 431, "end": 450, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF29" }, { "start": 453, "end": 454, "text": "9", "ref_id": null }, { "start": 817, "end": 837, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Word Alignment Experiment", "sec_num": "7" }, { "text": "For reference we also report the performance of a considerably more expressive Model 4, capable of capturing more structure, but at the expense of intractable inference. Using MGIZA++ (Gao and Vogel, 2008) , we run 5 iterations for training Model 1 and 3 iterations each for training the HMM alignment model, Model 3 and Model 4.", "cite_spans": [ { "start": 184, "end": 206, "text": "(Gao and Vogel, 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Word Alignment Experiment", "sec_num": "7" }, { "text": "We first examine the binary case, where we are given domain information in advance for one kind of sample at a time, e.g., Legal, Pharmacy, or Hardware. For the different sizes of the heterogeneous data (1M, 2M and 4M) the seed sample size is thus 10%, 5% and 2.5% respectively. Note that in such cases, training the latent domain alignment model induces two kinds of domain-conditioned statistics: in-domain vs. out-domain (D_1 and D_2 respectively). Once the model is trained, we combine the induced domain-conditioned statistics together (Eq. 5) and examine the produced word alignment output. Table 1 presents the results. Most importantly, it shows that as long as domain information is provided for a reasonably large sample, learning the latent domain alignment model notably improves the word alignment accuracy. For instance, given the domain information in advance for a sample of 10% or 5% of the heterogeneous corpora, our model consistently improves the word alignment accuracy in all cases. Meanwhile, given the domain information in advance for a relatively small sample of 2.5% of the heterogeneous data, the results are mixed: we obtain good, slightly better, and worse performance for Hardware, Legal, and Pharmacy respectively.", "cite_spans": [], "ref_spans": [ { "start": 590, "end": 597, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Learning with Single Domain", "sec_num": "7.1" }, { "text": "To get an idea of what the induced statistics look like, we investigate their conditional entropy. Here, we present the conditional entropy of the domain-confused and the domain-conditioned word translation statistics induced from the HMM alignment model and its latent domain model, respectively. Note that similar results are observed for the transition tables. 
Formally, for a translation table, F, E , its conditional entropy, H(F | E) can be estimated from its possible word pairs, e, f : H(F | E) = \u2212 e P (e) f P (f | e) log P (f | e). Table 2 reveals that the induced D 1 -conditioned statistics need much less bits to represent than the induced domain-confused statistics, e.g., 1124.43, 1104.58, 1115.52 vs. 1348.53 . This implies the induced D 1conditioned statistics are much more predictable compared to the domain-confused statistics. Meanwhile, the induced D 2 -conditioned statistics are similar to the domain-confused statistics in terms of the conditional entropy, e.g., 1354.58, 1385.35, 1342.54 vs. 1348.53. ", "cite_spans": [ { "start": 650, "end": 687, "text": "1124.43, 1104.58, 1115.52 vs. 1348.53", "ref_id": null }, { "start": 951, "end": 989, "text": "1354.58, 1385.35, 1342.54 vs. 1348.53.", "ref_id": null } ], "ref_spans": [ { "start": 505, "end": 512, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "What do domain-conditioned statistics look like?", "sec_num": null }, { "text": "It would be more interesting to learn the latent domain alignment model for multiple domains, rather than learning with each of them separately. In detail, using all the seed samples from different domains, we aim to learn four different domain-conditioned statistics simultaneously. Under this setting, we obtain good results, as described in Table 1 . For the two cases with the training corpora of 2M and 4M sentence pairs respectively, learning with the combining domain prior knowledge produces the best word alignment accuracy compared to the rest. In the last case with the training corpus of 1M sentence pairs, learning with the combining domain prior knowledge produces compatible with the case of Hardware, i.e., the best binary domain case. Table 1 also reveals that the performance of our model approaches Model 4, even though Model 4 is much more complex and computationally expensive.", "cite_spans": [], "ref_spans": [ { "start": 344, "end": 351, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 752, "end": 759, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Learning with Multiple Domains", "sec_num": "7.2" }, { "text": "We also investigate the relation between the number of domain-conditioned statistics \"involved\" in the Viterbi decoding (Eq. 5) and the word alignment accuracy. Table 3 presents the results in case of using only the induced D 1 -/, D 2 -/, D 3 -/, D 4conditioned statistics separately, and also using their different combinations. Interestingly, we observe that using more domain-conditioned statistics for decoding incrementally improves the word alignment accuracy over the heterogeneous data. While the domain-conditioned statistics are very different in their characteristics from each other, the results reveal how they are complementary to the others, conveying a mix of domains for each sentence pair. Finally, it is also tempting to make a comparison between the hard vs. soft domain assignment in Viterbi decoding. Here, for hard domain decision we simply do decoding with the following objec-tive function:\u00e2 = argmax a P (f, a| e,D), wher\u00ea D = argmax D P (D| e). Table 3 presents the results. It reveals that a soft domain assignment on the domain of sentence pairs results in a better alignment accuracy than a hard domain assignment. 
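To spell out the contrast between the soft and hard domain decisions compared above, the following minimal sketch scores a candidate alignment under both rules. The functions `p_align` (a stand-in for the domain-conditioned alignment probability) and `p_domain` (a stand-in for P(D | e)) are hypothetical placeholders, and only the scoring step is shown, not the full Viterbi search.

```python
def score_soft(f_sent, e_sent, align, domains, p_align, p_domain):
    """Soft decision: mix the domain-conditioned scores of the alignment,
    weighting each domain D by P(D | e), in the spirit of Eq. 5."""
    return sum(p_align(f_sent, align, e_sent, D) * p_domain(D, e_sent)
               for D in domains)

def score_hard(f_sent, e_sent, align, domains, p_align, p_domain):
    """Hard decision: commit to the single most likely domain for the
    sentence pair, then score the alignment under that domain only."""
    best_d = max(domains, key=lambda D: p_domain(D, e_sent))
    return p_align(f_sent, align, e_sent, best_d)
```

Plugging either score into the alignment search gives the two decoding variants compared in Table 3; the soft rule keeps each sentence pair as a mixture over domains instead of forcing a single label.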
10", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 168, "text": "Table 3", "ref_id": "TABREF6" }, { "start": 973, "end": 980, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Domain-conditioned statistics combination", "sec_num": null }, { "text": "In this section, we investigate the contribution of our model in terms of the translation accuracy. Here, we run experiments on the heterogeneous corpora of 1M, 2M, and 4M sentence pairs, testing the translation accuracy over four different domain-specific test sets related to News, Pharmacy, Legal, and Hardware.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Experiment", "sec_num": "8" }, { "text": "We use a standard state-of-the-art phrase-based system as the baseline. Our dense features include MOSES (Koehn et al., 2007) baseline features, plus hierarchical lexicalized reordering model features (Galley and Manning, 2008) , and the word-level feature derived from IBM model 1 score, c.f., (Och et al., 2004) . 11 The interpolated 5-grams LMs with Kneser-Ney are trained on a very large monolingual corpus of 2B words. We tune the systems using kbest batch MIRA (Cherry and Foster, 2012) . Finally, we use MOSES (Koehn et al., 2007) as decoder.", "cite_spans": [ { "start": 105, "end": 125, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF26" }, { "start": 201, "end": 227, "text": "(Galley and Manning, 2008)", "ref_id": "BIBREF14" }, { "start": 295, "end": 313, "text": "(Och et al., 2004)", "ref_id": "BIBREF30" }, { "start": 316, "end": 318, "text": "11", "ref_id": null }, { "start": 467, "end": 492, "text": "(Cherry and Foster, 2012)", "ref_id": "BIBREF5" }, { "start": 517, "end": 537, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Experiment", "sec_num": "8" }, { "text": "Our system has exactly the same setting with the baseline, except: (1) To learn the translation, we use the alignment result derived from our latent domain HMM alignment model, rather than the HMM alignment model; and (2) We replace the word-level feature with our four domain-conditioned word-level features derived from the latent domain IBM model 1. Here, note that our latent model is learned with the supervision from the combining domain knowledge of all three domain-specific seed samples. 10 Note that similar results are also observed for training, in which a soft domain assignment using soft EM produces better alignment accuracy than a hard domain assignment using hard EM. (See (Gao et al., 2011) for reference to hard domain assignment to training data.) This is perhaps due to the characteristics of the data we use. For instance, News sentence pairs are useful for translating Legal, Financial or EuroParl to varying degrees.", "cite_spans": [ { "start": 497, "end": 499, "text": "10", "ref_id": null }, { "start": 691, "end": 709, "text": "(Gao et al., 2011)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Experiment", "sec_num": "8" }, { "text": "11 For every phrase pair f ,\u1ebd with their length of mf and l\u1ebd respectively, the lexical feature estimates a probability in Model 1 style between their word pairs fj, ei (i.e. P (f |\u1ebd) = l\u1ebd mf j=1 l\u1ebd i=1 P (fj|ei)). Note that adding word-level features from both translation sides does not help much, as observed by (Och et al., 2004) . We thus add only an one from a translation side. 
Table 4 : Metric scores for the systems, averaged over multiple runs. Bold results indicate that the improvement over the baseline is statistically significant.", "cite_spans": [ { "start": 314, "end": 332, "text": "(Och et al., 2004)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 384, "end": 391, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Translation Experiment", "sec_num": "8" }, { "text": "For the News translation task, we tune systems on the News-test 2008 set of 2,051 sentence pairs and test them on the News-test 2013 set of 3,000 sentence pairs from the WMT 2013 shared task (Bojar et al., 2013) . For the Pharmacy, Legal, and Hardware translation tasks, we tune systems on three domain-specific dev sets of 1,000 sentence pairs and test them on three domain-specific test sets of 1,016, 1,326 and 1,721 sentence pairs. We report three metrics - BLEU (Papineni et al., 2002) , METEOR (Denkowski and Lavie, 2011) and TER (Snover et al., 2006) - with statistical significance at the 95% confidence interval under paired bootstrap re-sampling. 12 For every system reported, we run the optimizer three times, before running MultEval (Clark et al., 2011) for resampling and significance testing.", "cite_spans": [ { "start": 185, "end": 205, "text": "(Bojar et al., 2013)", "ref_id": "BIBREF1" }, { "start": 464, "end": 487, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF31" }, { "start": 497, "end": 524, "text": "(Denkowski and Lavie, 2011)", "ref_id": "BIBREF10" }, { "start": 533, "end": 554, "text": "(Snover et al., 2006)", "ref_id": "BIBREF35" }, { "start": 738, "end": 758, "text": "(Clark et al., 2011)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Experiment", "sec_num": "8" }, { "text": "Table 5 : Averaged improvements across the tasks. Data | BLEU\u2191 | METEOR\u2191 | TER\u2193 ; 1M | +1.0 | +0.4 | -0.9 ; 2M | +1.4 | +0.6 | -1.3 ; 4M | +0.7 | +0.3 | -0.5.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Translation Experiment", "sec_num": "8" }, { "text": "Results are in Table 4 , showing significant improvements across four different test sets over different heterogeneous corpora sizes. Table 5 gives a summary of the improvements. On average, over heterogeneous corpora of 1M, 2M and 4M sentence pairs, our system outperforms the baseline by 1.0 BLEU, 1.4 BLEU and 0.7 BLEU, respectively. Finally, we observe that our system produces comparably good performance to the MGIZA++-based system. When the 1M data is considered, on three of the four tasks our system produces at least comparable translation accuracy to the corresponding MGIZA++-based system.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 4", "ref_id": null }, { "start": 134, "end": 141, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Translation Experiment", "sec_num": "8" }, { "text": "Further analysis reveals that the improvement is due not only to the reduction in alignment error rate, but also to the use of the domain-sensitive lexical features. Moreover, the domain-sensitive lexical features are particularly useful when the domain of the test data matches the domain of the seed samples. 
This is also widely observed in the literature, e.g., see (Eidelman et al., 2012; Hasler et al., 2014; Hu et al., 2014) .", "cite_spans": [ { "start": 367, "end": 390, "text": "(Eidelman et al., 2012;", "ref_id": "BIBREF11" }, { "start": 391, "end": 411, "text": "Hasler et al., 2014;", "ref_id": "BIBREF22" }, { "start": 412, "end": 428, "text": "Hu et al., 2014)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Experiment", "sec_num": "8" }, { "text": "In terms of domain-conditioned statistics for word alignment, a distantly related research line (Tam et al., 2007; Zhao and Xing, 2008) focuses on using document topics to improve the word alignment. In terms of learning word alignment with partial supervision, another distantly related research line focuses on semi-supervised training with partial manual alignments (Fraser and Marcu, 2006; . Finally, recent work also focuses on data selection (Kirchhoff and Bilmes, 2014; Cuong and Sima'an, 2014b) , mixture models (Carpuat et al., 2014) , instance weighting (Foster et al., 2010) and latent variable models (Cuong and Sima'an, 2014a) over heterogeneous corpora.", "cite_spans": [ { "start": 96, "end": 114, "text": "(Tam et al., 2007;", "ref_id": "BIBREF36" }, { "start": 115, "end": 135, "text": "Zhao and Xing, 2008)", "ref_id": "BIBREF39" }, { "start": 369, "end": 393, "text": "(Fraser and Marcu, 2006;", "ref_id": "BIBREF13" }, { "start": 448, "end": 476, "text": "(Kirchhoff and Bilmes, 2014;", "ref_id": "BIBREF24" }, { "start": 477, "end": 502, "text": "Cuong and Sima'an, 2014b)", "ref_id": "BIBREF8" }, { "start": 520, "end": 542, "text": "(Carpuat et al., 2014)", "ref_id": "BIBREF3" }, { "start": 564, "end": 585, "text": "(Foster et al., 2010)", "ref_id": "BIBREF12" }, { "start": 613, "end": 639, "text": "(Cuong and Sima'an, 2014a)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Conclusion", "sec_num": "9" }, { "text": "One main contribution of this work is the novelty of exploring the quality of word alignment in heterogeneous corpora. This, surprisingly, has not received much attention thus far (see the study of Bach et al. (2008) and Gao et al. (2011) for reference in the literature). Another major contribution of this work is a learning framework for latent domain word alignment with partial supervision using seed domains. We present its benefits for improving not only the word alignment accuracy, but also the translation accuracy resulting SMT systems produce. We hope this study sparks a new research direction for using domain samples, which is cheap to gather, but has not been exploited before.", "cite_spans": [ { "start": 198, "end": 216, "text": "Bach et al. (2008)", "ref_id": "BIBREF0" }, { "start": 221, "end": 238, "text": "Gao et al. (2011)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Conclusion", "sec_num": "9" }, { "text": "One obvious direction for future work might be to integrate the model into fertility-based alignment models (Brown et al., 1993) , as well as other recently advanced alignment frameworks, e.g., (Simion et al., 2013; Tamura et al., 2014; Chang et al., 2014) . Another interesting direction might be to integrate our model into advanced mixing multiple translation models, improving SMT systems trained on the heterogeneous data (Razmara et al., 2012; Sennrich et al., 2013; Carpuat et al., 2014) . 
Finally, an open question is whether it is possible to learn the latent domain alignment model in a fully unsupervised style. This challenge deserves more attention in future work.", "cite_spans": [ { "start": 108, "end": 128, "text": "(Brown et al., 1993)", "ref_id": "BIBREF2" }, { "start": 194, "end": 215, "text": "(Simion et al., 2013;", "ref_id": "BIBREF34" }, { "start": 216, "end": 236, "text": "Tamura et al., 2014;", "ref_id": "BIBREF37" }, { "start": 237, "end": 256, "text": "Chang et al., 2014)", "ref_id": "BIBREF4" }, { "start": 427, "end": 449, "text": "(Razmara et al., 2012;", "ref_id": "BIBREF32" }, { "start": 450, "end": 472, "text": "Sennrich et al., 2013;", "ref_id": "BIBREF33" }, { "start": 473, "end": 494, "text": "Carpuat et al., 2014)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Conclusion", "sec_num": "9" }, { "text": "The \"full\" formula for transition probabilities would be P (aj| aj\u22121, I). For convenience, we ignore I in our presentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Its time complexity is O(J \u00d7 I 2 ) for each sentence pair f, e with their length J and I respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that P\u0398 D (\u2022| \u2022, D) = a P\u0398 D (\u2022, a| \u2022, D)and it can be thus computed efficiently using dynamic programming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "During the initialization, we assume that the pool of the rest sentence pairs in the heterogeneous data is the exemplifying sample of the out-domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Alternative solutions could be Lagrangian relaxation-based decoder(DeNero and Macherey, 2011;Chang et al., 2014).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that better results correspond to larger Precision, Recall and to smaller AER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that better results correspond to larger BLEU, ME-TEOR and to smaller TER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are indebted to Ivan Titov and three anonymous reviewers for their constructive comments on earlier versions. The first author is supported by the EX-PERT (EXPloiting Empirical appRoaches to Translation) Initial Training Network (ITN) of the European Union's Seventh Framework Programme. The second author is supported by VICI grant nr. 277-89-002 from the Netherlands Organization for Scientific Research (NWO).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Improving word alignment with language model based confidence scores", "authors": [ { "first": "Nguyen", "middle": [], "last": "Bach", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Third Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nguyen Bach, Qin Gao, and Stephan Vogel. 2008. 
Im- proving word alignment with language model based confidence scores. In Proceedings of the Third Work- shop on Statistical Machine Translation.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Findings of the 2013 Workshop on Statistical Machine Translation", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Buck", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Eighth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The mathematics of statistical machine translation: parameter estimation", "authors": [ { "first": "F", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Brown", "suffix": "" }, { "first": "J", "middle": [ "Della" ], "last": "Vincent", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Comput. Linguist", "volume": "19", "issue": "", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathemat- ics of statistical machine translation: parameter esti- mation. Comput. Linguist., 19:263-311, June.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Linear mixture models for robust machine translation", "authors": [ { "first": "Marine", "middle": [], "last": "Carpuat", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Goutte", "suffix": "" }, { "first": "George", "middle": [], "last": "Foster", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marine Carpuat, Cyril Goutte, and George Foster. 2014. Linear mixture models for robust machine translation. 
In Proceedings of the Ninth Workshop on Statistical Machine Translation.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A constrained viterbi relaxation for bidirectional word alignment", "authors": [ { "first": "Yin-Wen", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "John", "middle": [], "last": "Denero", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yin-Wen Chang, Alexander M. Rush, John DeNero, and Michael Collins. 2014. A constrained viterbi relax- ation for bidirectional word alignment. In Proceedings of ACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Batch tuning strategies for statistical machine translation", "authors": [ { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "George", "middle": [], "last": "Foster", "suffix": "" } ], "year": 2012, "venue": "Proceedings of NAACL HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Cherry and George Foster. 2012. Batch tuning strategies for statistical machine translation. In Pro- ceedings of NAACL HLT.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Better hypothesis testing for statistical machine translation: Controlling for optimizer instability", "authors": [ { "first": "Jonathan", "middle": [ "H" ], "last": "Clark", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2011, "venue": "Proceedings of HLT: Short Papers", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer insta- bility. In Proceedings of HLT: Short Papers.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Latent domain phrase-based models for adaptation", "authors": [ { "first": "Hoang", "middle": [], "last": "Cuong", "suffix": "" }, { "first": "Khalil", "middle": [], "last": "Sima'an", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hoang Cuong and Khalil Sima'an. 2014a. Latent do- main phrase-based models for adaptation. In Proceed- ings of EMNLP.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Latent domain translation models in mix-of-domains haystack", "authors": [ { "first": "Hoang", "middle": [], "last": "Cuong", "suffix": "" }, { "first": "Khalil", "middle": [], "last": "Sima'an", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hoang Cuong and Khalil Sima'an. 2014b. Latent do- main translation models in mix-of-domains haystack. 
In Proceedings of COLING.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Model-based aligner combination using dual decomposition", "authors": [ { "first": "John", "middle": [], "last": "Denero", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Macherey", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John DeNero and Klaus Macherey. 2011. Model-based aligner combination using dual decomposition. In Proceedings of ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems", "authors": [ { "first": "Michael", "middle": [], "last": "Denkowski", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation, WMT '11", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evalu- ation of machine translation systems. In Proceedings of the Sixth Workshop on Statistical Machine Transla- tion, WMT '11.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Topic models for dynamic translation model adaptation", "authors": [ { "first": "Vladimir", "middle": [], "last": "Eidelman", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2012, "venue": "Proceedings of ACL: Short Papers", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir Eidelman, Jordan Boyd-Graber, and Philip Resnik. 2012. Topic models for dynamic translation model adaptation. In Proceedings of ACL: Short Pa- pers.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Discriminative instance weighting for domain adaptation in statistical machine translation", "authors": [ { "first": "George", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Goutte", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Kuhn", "suffix": "" } ], "year": 2010, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Foster, Cyril Goutte, and Roland Kuhn. 2010. Discriminative instance weighting for domain adapta- tion in statistical machine translation. In Proceedings of EMNLP.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Semisupervised training for statistical word alignment", "authors": [ { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Fraser and Daniel Marcu. 2006. Semi- supervised training for statistical word alignment. 
In Proceedings of ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A simple and effective hierarchical phrase reordering model", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley and Christopher D. Manning. 2008. A simple and effective hierarchical phrase reordering model. In Proceedings of EMNLP.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Posterior regularization for structured latent variable models", "authors": [ { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Jo\u00e3o", "middle": [], "last": "Gra\u00e7a", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Gillenwater", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2010, "venue": "J. Mach. Learn. Res", "volume": "11", "issue": "", "pages": "2001--2049", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuzman Ganchev, Jo\u00e3o Gra\u00e7a, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for struc- tured latent variable models. J. Mach. Learn. Res., 11:2001-2049, August.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Parallel implementations of word alignment tool", "authors": [ { "first": "Qin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2008, "venue": "Software Engineering, Testing, and Quality Assurance for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qin Gao and Stephan Vogel. 2008. Parallel implemen- tations of word alignment tool. In Software Engineer- ing, Testing, and Quality Assurance for Natural Lan- guage Processing, SETQA-NLP '08.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Consensus versus expertise: A case study of word alignment with mechanical turk", "authors": [ { "first": "Qin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, CSLDAMT '10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qin Gao and Stephan Vogel. 2010. Consensus versus expertise: A case study of word alignment with me- chanical turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, CSLDAMT '10.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A semi-supervised word alignment algorithm with partial manual alignments", "authors": [ { "first": "Qin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Nguyen", "middle": [], "last": "Bach", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, WMT '10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qin Gao, Nguyen Bach, and Stephan Vogel. 2010. A semi-supervised word alignment algorithm with par- tial manual alignments. 
In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, WMT '10.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Incremental training and intentional over-fitting of word alignment", "authors": [ { "first": "Qin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Will", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Mei-Yuh", "middle": [], "last": "Hwang", "suffix": "" } ], "year": 2011, "venue": "Proceedings of MT Summit XIII", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qin Gao, Will Lewis, Chris Quirk, and Mei-Yuh Hwang. 2011. Incremental training and intentional over-fitting of word alignment. In Proceedings of MT Summit XIII.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Building a golden collection of parallel multi-language word alignment", "authors": [ { "first": "Joao", "middle": [], "last": "Graca", "suffix": "" }, { "first": "Joana", "middle": [ "Paulo" ], "last": "Pardal", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Coheur", "suffix": "" }, { "first": "Diamantino", "middle": [], "last": "Caseiro", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joao Graca, Joana Paulo Pardal, Luisa Coheur, and Dia- mantino Caseiro. 2008. Building a golden collection of parallel multi-language word alignment. In Pro- ceedings of the Sixth International Conference on Lan- guage Resources and Evaluation (LREC'08).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Learning tractable word alignment models with complex constraints", "authors": [ { "first": "Joao", "middle": [], "last": "Graca", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2010, "venue": "Comput. Linguist", "volume": "36", "issue": "3", "pages": "481--504", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joao Graca, Kuzman Ganchev, and Ben Taskar. 2010. Learning tractable word alignment models with com- plex constraints. Comput. Linguist., 36(3):481-504.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Dynamic topic adaptation for phrasebased mt", "authors": [ { "first": "Eva", "middle": [], "last": "Hasler", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eva Hasler, Phil Blunsom, Philipp Koehn, and Barry Haddow. 2014. Dynamic topic adaptation for phrase- based mt. 
In Proceedings of EACL.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Polylingual tree-based topic models for translation domain adaptation", "authors": [ { "first": "Yuening", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Eidelman", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuening Hu, Ke Zhai, Vladimir Eidelman, and Jordan Boyd-Graber. 2014. Polylingual tree-based topic models for translation domain adaptation. In Proceed- ings of ACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Submodularity for data selection in machine translation", "authors": [ { "first": "Katrin", "middle": [], "last": "Kirchhoff", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Bilmes", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katrin Kirchhoff and Jeff Bilmes. 2014. Submodularity for data selection in machine translation. In Proceed- ings of EMNLP.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Statistical phrase-based translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceed- ings of NAACL.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Europarl: A Parallel Corpus for Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07. Philipp Koehn", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Con- stantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceed- ings of the 45th Annual Meeting of the ACL on Inter- active Poster and Demonstration Sessions, ACL '07. Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. 
In Conference Proceedings: the tenth Machine Translation Sum- mit, pages 79-86, Phuket, Thailand. AAMT, MMichi- gan0605 AAMT.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Alignment by agreement", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proceedings of HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Ben Taskar, and Dan Klein. 2006. Align- ment by agreement. In Proceedings of HLT-NAACL.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Learning in graphical models. chapter A View of the EM Algorithm That Justifies Incremental, Sparse, and Other Variants", "authors": [ { "first": "M", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Neal", "suffix": "" }, { "first": "", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "355--368", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radford M. Neal and Geoffrey E. Hinton. 1999. Learn- ing in graphical models. chapter A View of the EM Al- gorithm That Justifies Incremental, Sparse, and Other Variants, pages 355-368. MIT Press.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Comput. Linguist", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A system- atic comparison of various statistical alignment mod- els. Comput. Linguist., 29(1):19-51, March.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A smorgasbord of features for statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Libin", "middle": [], "last": "Shen", "suffix": "" }, { "first": "David", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Eng", "suffix": "" }, { "first": "Viren", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Zhen", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2004, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och, Daniel Gildea, Sanjeev Khudanpur, Anoop Sarkar, Kenji Yamada, Alex Fraser, Shankar Kumar, Libin Shen, David Smith, Katherine Eng, Viren Jain, Zhen Jin, and Dragomir Radev. 2004. A smorgasbord of features for statistical machine trans- lation. 
In HLT-NAACL.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Bleu: A method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: A method for automatic evalu- ation of machine translation. In Proceedings of ACL.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Mixing multiple translation models in statistical machine translation", "authors": [ { "first": "Majid", "middle": [], "last": "Razmara", "suffix": "" }, { "first": "George", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Baskaran Sankaran", "suffix": "" }, { "first": "", "middle": [], "last": "Sarkar", "suffix": "" } ], "year": 2012, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Majid Razmara, George Foster, Baskaran Sankaran, and Anoop Sarkar. 2012. Mixing multiple translation models in statistical machine translation. In Proceed- ings of ACL.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A multi-domain translation model framework for statistical machine translation", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Walid", "middle": [], "last": "Aransa", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Holger Schwenk, and Walid Aransa. 2013. A multi-domain translation model framework for statistical machine translation. In Proceedings of ACL.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A convex alternative to ibm model 2", "authors": [ { "first": "Andrei", "middle": [], "last": "Simion", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Cliff", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrei Simion, Michael Collins, and Cliff Stein. 2013. A convex alternative to ibm model 2. Proceedings of EMNLP.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A study of translation edit rate with targeted human annotation", "authors": [ { "first": "Matthew", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "L", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "J", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "Proceedings of AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Snover, Bonnie Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A study of translation edit rate with targeted human annotation. 
In Proceedings of AMTA.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Bilingual lsa-based adaptation for statistical machine translation", "authors": [ { "first": "Yik-Cheung", "middle": [], "last": "Tam", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Lane", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Schultz", "suffix": "" } ], "year": 2007, "venue": "Machine Translation", "volume": "21", "issue": "4", "pages": "187--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yik-Cheung Tam, Ian Lane, and Tanja Schultz. 2007. Bilingual lsa-based adaptation for statistical machine translation. Machine Translation, 21(4):187-207.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Recurrent neural networks for word alignment model", "authors": [ { "first": "Akihiro", "middle": [], "last": "Tamura", "suffix": "" }, { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akihiro Tamura, Taro Watanabe, and Eiichiro Sumita. 2014. Recurrent neural networks for word alignment model. In Proceedings of ACL.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Hmm-based word alignment in statistical translation", "authors": [ { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1996, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. Hmm-based word alignment in statistical trans- lation. In Proceedings of COLING.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Hm-bitam: Bilingual topic exploration, word alignment, and translation", "authors": [ { "first": "Bing", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "" } ], "year": 2008, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bing Zhao and Eric P. Xing. 2008. Hm-bitam: Bilingual topic exploration, word alignment, and translation. In Proceedings of NIPS.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Latent domain HMM alignment model. An additional latent layer representing domains has been conditioned on by both the rest two layers.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "Figure 2. Note how domain-conditioned alignment statistics, P (\u2022| \u2022, D) contain their former domainconfused alignment statistics, P (\u2022| \u2022) as special case", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "e c(f |e; f, e, D) f f,e c(f |e; f, e, D) P (+) (i|i , D) = f,e c(i|i ; f, e, D) i f,e c(i|i ; f, e, D) P (+) (D) = f,e c(D; f, e) D f,e c(D; f, e) Pseudocode for the training algorithm for the latent domain HMM alignment model. Note that notation P", "uris": null, "num": null, "type_str": "figure" }, "FIGREF3": { "text": "a|e, D)P (e|D)P (D).", "uris": null, "num": null, "type_str": "figure" }, "TABREF1": { "num": null, "html": null, "content": "
Model | Domain Prior | Prec.\u2191 | \u2206 | Rec.\u2191 | \u2206 | AER\u2193 | \u2206
1 Million
Model 4 (ref.) | - | 71.56 | - | 64.59 | - | 32.10 | -
Baseline | - | 66.95 | - | 61.29 | - | 36.00 | -
Latent | Pharmacy | 67.85 | +0.90 | 61.72 | +0.43 | 35.36 | -0.64
Latent | Legal | 67.57 | +0.62 | 62.29 | +1.00 | 35.17 | -0.83
Latent | Hardware | 69.41 | +2.46 | 63.58 | +2.29 | 33.63 | -2.37
Latent | Legal + Hardware + Software | 69.64 | +2.69 | 63.30 | +2.01 | 33.68 | -2.32
2 Million
Model 4 (ref.) | - | 74.13 | - | 65.30 | - | 30.56 | -
Baseline | - | 68.34 | - | 61.58 | - | 35.22 | -
Latent | Pharmacy | 68.85 | +0.51 | 62.58 | +1.00 | 34.43 | -0.79
Latent | Legal | 69.98 | +1.64 | 64.01 | +2.43 | 33.13 | -2.09
Latent | Hardware | 69.45 | +1.11 | 63.23 | +1.65 | 33.81 | -1.41
Latent | Legal + Hardware + Software | 71.51 | +3.17 | 63.87 | +2.29 | 32.53 | -2.69
4 Million
Model 4 (ref.) | - | 75.53 | - | 65.95 | - | 29.58 | -
Baseline | - | 69.37 | - | 64.30 | - | 33.26 | -
Latent | Pharmacy | 69.69 | +0.32 | 62.80 | -1.50 | 33.94 | +0.68
Latent | Legal | 70.51 | +1.14 | 63.94 | -0.36 | 32.93 | -0.33
Latent | Hardware | 71.75 | +2.38 | 64.44 | +0.14 | 32.10 | -1.16
Latent | Legal + Hardware + Software | 72.16 | +2.79 | 64.30 | \u00b10.0 | 31.99 | -1.27
", "type_str": "table", "text": "" }, "TABREF2": { "num": null, "html": null, "content": "", "type_str": "table", "text": "Alignment accuracy over heterogeneous corpora." }, "TABREF4": { "num": null, "html": null, "content": "
", "type_str": "table", "text": "Conditional entropy of the statistics." }, "TABREF6": { "num": null, "html": null, "content": "
", "type_str": "table", "text": "Domain-conditioned statistics combination for Viterbi decoding. The reported results are for the heterogeneous corpus of 1M sentence pairs. Similar results are observed for other training data." } } } }