{ "paper_id": "O08-5005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:02:33.289742Z" }, "title": "Improved Minimum Phone Error based Discriminative Training of Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition", "authors": [ { "first": "Shih-Hung", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": {} }, "email": "" }, { "first": "Fang-Hui", "middle": [], "last": "Chu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": {} }, "email": "" }, { "first": "Yueng-Tien", "middle": [], "last": "Lo", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": {} }, "email": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": {} }, "email": "berlin@csie.ntnu.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper considers minimum phone error (MPE) based discriminative training of acoustic models for Mandarin broadcast news recognition. We present a new phone accuracy function based on the frame-level accuracy of hypothesized phone arcs instead of using the raw phone accuracy function of MPE training. Moreover, a novel data selection approach based on the frame-level normalized entropy of Gaussian posterior probabilities obtained from the word lattice of the training utterance is explored. It has the merit of making the training algorithm focus much more on the training statistics of those frame samples that center nearly around the decision boundary for better discrimination. The underlying characteristics of the presented approaches are extensively investigated, and their performance is verified by comparison with the standard MPE training approach as well as the other related work. Experiments conducted on broadcast news collected in Taiwan demonstrate that the integration of the frame-level phone accuracy calculation and data selection yields slight but consistent improvements over the baseline system.", "pdf_parse": { "paper_id": "O08-5005", "_pdf_hash": "", "abstract": [ { "text": "This paper considers minimum phone error (MPE) based discriminative training of acoustic models for Mandarin broadcast news recognition. We present a new phone accuracy function based on the frame-level accuracy of hypothesized phone arcs instead of using the raw phone accuracy function of MPE training. Moreover, a novel data selection approach based on the frame-level normalized entropy of Gaussian posterior probabilities obtained from the word lattice of the training utterance is explored. It has the merit of making the training algorithm focus much more on the training statistics of those frame samples that center nearly around the decision boundary for better discrimination. The underlying characteristics of the presented approaches are extensively investigated, and their performance is verified by comparison with the standard MPE training approach as well as the other related work. 
Experiments conducted on broadcast news collected in Taiwan demonstrate that the integration of the frame-level phone accuracy calculation and data selection yields slight but consistent improvements over the baseline system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Speech is the primary and the most convenient means of communication between individuals. Due to the successful development of much smaller electronic devices and the popularity of wireless communication and networking, it is widely believed that speech will possibly serve When considering the development of an ASR system, acoustic modeling is always an indispensable and crucial ingredient we have to carefully manipulate. The purpose of acoustic modeling is to provide a method for calculating the likelihood of a speech utterance occurring given a word sequence. In principle, the word sequence can be decomposed into a sequence of phone-like (subword, e.g. INITIAL or FINAL in Mandarin Chinese) units or acoustic models, each of which is normally represented by a continuous density hidden Markov model (HMM), and the corresponding model parameters can be estimated from a corpus of orthographically transcribed training utterances using maximum likelihood (ML) training [Rabiner 1989 ]. The acoustic models can be alternatively trained with discriminative training algorithms, such as maximum mutual information (MMI) training [Bahl et al. 1986] and minimum phone error (MPE) training [Povey 2004; Kuo et al. 2006] . These algorithms were developed in an attempt to correctly discriminate the recognition hypotheses for the best recognition results rather than just to fit the model distributions as done by ML training; therefore, they have continuously been a focus of considerable active research in a wide variety of large vocabulary continuous speech recognition (LVCSR) tasks over the past few years. Moreover, in contrast to ML training, discriminative training considers not only the reference (or correct) transcript of a training utterance, but also the competing (or incorrect) hypotheses that are often obtained by performing LVCSR on the utterance.", "cite_spans": [ { "start": 977, "end": 990, "text": "[Rabiner 1989", "ref_id": "BIBREF23" }, { "start": 1134, "end": 1152, "text": "[Bahl et al. 1986]", "ref_id": "BIBREF1" }, { "start": 1192, "end": 1204, "text": "[Povey 2004;", "ref_id": "BIBREF20" }, { "start": 1205, "end": 1221, "text": "Kuo et al. 2006]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, we consider minimum phone error (MPE) based discriminative training of acoustic models for Mandarin broadcast news recognition. In order to remedy the defect in the phone accuracy function of the MPE training algorithm, we present a new phone accuracy function based on the frame-level accuracy of hypothesized phone arcs. Moreover, a novel data selection approach based on the frame-level normalized entropy of Gaussian posterior probabilities obtained from the word lattice of the training utterance is explored, which has the merit of making the MPE training algorithm focus much more on the training statistics of those frame samples that center nearly around the decision boundary for better discrimination. 
The underlying characteristics of the presented approaches are extensively investigated and their performance is verified by comparison with the original MPE training approach as well as other related work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The remainder of this paper is organized as follows. In Section 2, the general background of MPE based acoustic model training is briefly reviewed. Section 3 elucidates our proposed new accuracy function for MPE training, and Section 4 presents two novel training data selection approaches based on frame-level normalized entropy information. The experimental setup is detailed in Section 5, and a series of speech recognition experiments is described in Section 6. Finally, we present the conclusions drawn from the research in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Given a training set of K acoustic vector sequences", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "{ } 1 ,.., ,.., k K O O O O =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": ", the MPE criterion for acoustic model training aims to minimize the expected phone errors of these acoustic vector sequences using the following objective function [Povey and Woodland 2002] ", "cite_spans": [ { "start": 165, "end": 190, "text": "[Povey and Woodland 2002]", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ": 1 ( ) ( ) ( | ), lat k k K MPE k k k k W F R a w A c c WP WO \u03bb \u03bb = \u2208 = \u2211 \u2211 W", "eq_num": "(1)" } ], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "where \u03bb denotes a set of phone-like acoustic models; lat k W is the corresponding word lattice [Ortmanns et al. 1997] of k O obtained using LVCSR, as graphically illustrated in Figure 1 ; k W is one of the hypothesized word sequences in", "cite_spans": [ { "start": 95, "end": 117, "text": "[Ortmanns et al. 1997]", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 177, "end": 185, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "lat k W ; ( | ) k k P W O is the posterior probability of hypothesis k W given k O ; ( ) k RawAcc W", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "is the \"raw phone accuracy\" of k W in comparison to the corresponding reference transcript, which is typically computed as the sum of the phone accuracy measures of all phone hypotheses in k W . Then, the objective function in Equation (1) can be maximized by applying the Extended Baum-Welch algorithm [Gopalakrishnan et al. 1989] ", "cite_spans": [ { "start": 303, "end": 331, "text": "[Gopalakrishnan et al. 1989]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." 
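To make the criterion in Eq. (1) concrete before turning to the update formulas, the following minimal Python sketch evaluates the expected raw phone accuracy over an N-best approximation of the word lattice. The hypothesis scores, the scaling factor kappa, and the accuracy values are illustrative assumptions; a real implementation accumulates this sum with a forward-backward pass over the lattice arcs rather than by enumerating hypotheses.

```python
import math

def mpe_objective(utterances, kappa=1.0):
    """Toy N-best approximation of the MPE criterion in Eq. (1).

    Each utterance is a list of hypotheses (log_score, raw_accuracy), where
    log_score stands in for the scaled joint acoustic + language model
    log-likelihood and raw_accuracy for RawAcc(W) of Eq. (9).
    """
    total = 0.0
    for hyps in utterances:
        # Posterior P(W | O): softmax over the scaled hypothesis scores.
        m = max(s for s, _ in hyps)
        exps = [math.exp(kappa * (s - m)) for s, _ in hyps]
        z = sum(exps)
        total += sum(e / z * acc for e, (_, acc) in zip(exps, hyps))
    return total

# Two toy utterances, each with two competing hypotheses.
utts = [
    [(-120.0, 3.0), (-121.5, 2.0)],   # more accurate hypothesis scores higher
    [(-200.0, 1.0), (-199.0, 4.0)],
]
print(mpe_objective(utts))  # expected phone accuracy over the training set
```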
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "O O D D \u03b8 \u03b8 \u03c3 \u03bc \u03c3 \u03bc \u03b3 \u03b3 \u2212 + + = \u2212 \u2212 + (3) 1 , ( ) max(0, ), q M PE lat q k e K num k k hm qm q k ts q q h t \u03b3 \u03b3 \u03b3 = = \u2208 = = \u2211 \u2211 \u2211 W (4) 1 , ( ) max(0, ), q M PE lat q k e K den k k hm qm q k ts q q h t \u03b3 \u03b3 \u03b3 = = \u2208 = = \u2212 \u2211 \u2211 \u2211 W (5) ( ) ( ) 1 , ( ) max(0, )", "eq_num": ", q" } ], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "= = \u2208 = = \u2211 \u2211 \u2211 W (6) ( ) ( ) 2 2 1 , ( ) max(0, )", "eq_num": ", q" } ], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "= = \u2208 = = \u2211 \u2211 \u2211 W (7) ( ) , M PE k k k k q q q a v g c c \u03b3 \u03b3 = \u2212", "eq_num": "(8)" } ], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "where ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "lat k q q h \u2208 = W", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "denotes that a phone q arc belongs to the word lattice is actually based on the phone accuracies of phone arcs in the word lattice. For example, the raw phone accuracy for each word sequence k W in the lattice can be calculated in terms of the sum of the accuracy of each phone contained in k W [Povey and Woodland 2002] :", "cite_spans": [ { "start": 295, "end": 320, "text": "[Povey and Woodland 2002]", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) ( ), k k q W R aw A cc W P honeA cc q \u2208 = \u2211", "eq_num": "(9)" } ], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "where ( ) PhoneAcc q is the raw phone accuracy for a phone arc q in k W , which can be defined as follows: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "1 2 ( , ) / ( ), ( ) max , 1 ( , )/ ( ), j k j j j z Z j j j e z q l z z q PhoneAcc q e z q l z z q \u2208 \u2212 + = \u23a7 \u23ab \u23aa \u23aa = \u23a8 \u23ac \u2212 + \u2260 \u23aa \u23aa \u23a9 \u23ad (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "where k Z is the set of phone labels in the corresponding reference transcript, and ( , ) j e z q is the overlap length in frames (or in time) for a phone label j z in k Z and a hypothesized phone arc q in k W , ) ( j z l is the length in frames for j z . 
We can observe from Equations (4)-(8), for MPE training, those hypotheses having raw phone accuracies higher than the average can provide positive contributions, and vice-versa for those hypotheses with accuracies lower than the average. Interested readers can refer to [Povey 2004; Kuo et al. 2006] for more derivation details of MPE training.", "cite_spans": [ { "start": 526, "end": 538, "text": "[Povey 2004;", "ref_id": "BIBREF20" }, { "start": 539, "end": 555, "text": "Kuo et al. 2006]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Review of Minimum Phone Error (MPE) Training", "sec_num": "2." }, { "text": "It is known that the standard MPE training approach has some drawbacks [Zheng and Stolcke 2005] . One of them is that MPE training does not sufficiently penalize deletion errors. In general, the original MPE objective function discourages insertion errors more than deletion and substitution errors. Inspired by the work of word lattice rescoring (or decoding) using frame-level accuracy information [Wessel et al. 2001] , in this paper we present an alternative phone accuracy function that can look into the frame-level phone accuracies of all hypothesized word sequences to replace the original raw phone accuracy function for MPE training [Liu et al. 2007a] . The frame-level phone accuracy function (FA) is defined as:", "cite_spans": [ { "start": 71, "end": 95, "text": "[Zheng and Stolcke 2005]", "ref_id": "BIBREF29" }, { "start": 400, "end": 420, "text": "[Wessel et al. 2001]", "ref_id": "BIBREF28" }, { "start": 643, "end": 661, "text": "[Liu et al. 2007a]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "New Accuracy Functions", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) ( , ) ( ) , 1 q q e k t s q q q Z t F ram eA cc q e s \u03b4 = = \u2212 + \u2211 (11) and ( ) ( ) ( ) 1 , ( , ) , ,", "eq_num": ", 0 1" } ], "section": "New Accuracy Functions", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k k k if q Z t q Z t if q Z t \u03b4 \u03c1 \u03c1 = \u23a7 \u23ab \u23aa \u23aa = \u23a8 \u23ac \u2212 \u2260 < < \u23aa \u23aa \u23a9 \u23ad", "eq_num": "(12)" } ], "section": "New Accuracy Functions", "sec_num": "3." }, { "text": "where ( ) k Z t is the phone label of the reference transcript k Z at frame t ; \u03c1 is a tunable positive parameter used to control the penalty if the phone arc q is incorrect in its label; and the value of ( ) FrameAcc q will range from \u03c1 \u2212 to 1. For each frame t , we thus can easily evaluate whether the phone arc of each hypothesized word sequence in the word lattice is identical to that of the reference transcript or not. Actually, the presented frame-level phone accuracy function emphasizes the deletion penalty on the incompletely correct phone arc; whereas the insertion and substitution errors of the hypothesized word sequences, as well as the errors caused by inaccurate time boundaries of the phone arcs, are also taken into consideration evenly. As illustrated in Figure 2 , given the reference phone transcript \"a-b-c\", the first hypothesized phone sequence \"a-b-c\" will be regarded as partially correct (with a score of two) using the original MPE raw phone accuracy function, as shown in Eq. 
10; while the presented frame-level phone accuracy function, as shown in Eq. 11, will give it a score of 2.56 (with \u03c1 set to 0.1) by similarly taking into account the incorrect time boundaries of the associated phone arcs. On the other hand, for the second hypothesized phone sequence \"a-c\", it is obvious that there exists a deletion error of the phone arc \"b.\" Nevertheless, the original MPE raw phone accuracy function gives the second hypothesized phone sequence a score of two, which is equivalent to that of the first hypothesized phone sequence, and the phone arcs (\"a\" and \"c\") of it will be treated as completely correct. While using our proposed frame-level phone accuracy function, both of the two phone arcs in the second hypothesized phone sequence will instead be treated as partially correct by considering the frame-level substitution errors. Thus, the frame-level phone accuracy function will only assign a total score of 1.27 (with \u03c1 set to 0.1) to the second hypothesized phone sequence.", "cite_spans": [], "ref_spans": [ { "start": 778, "end": 786, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "New Accuracy Functions", "sec_num": "3." }, { "text": "Another frame-level phone accuracy function that uses the Sigmoid function to normalize the phone accuracy value in a range between -1 and 1 is also investigated in this paper (SFA):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Accuracy Functions", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "2 ( ) 1, 1 exp( ) SigFrameAcc q net \u03b1 = \u2212 + \u2212 \u22c5 (13) and ( ) ( , ), q q e k t s n et q z t \u03b4 = = \u2211", "eq_num": "(14)" } ], "section": "New Accuracy Functions", "sec_num": "3." }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Accuracy Functions", "sec_num": "3." }, { "text": "( ) ( , ) k q Z t \u03b4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Accuracy Functions", "sec_num": "3." }, { "text": "was previously defined in Eq. 12, \u03b1 is a positive parameter that controls the slope of the Sigmoid function (the larger the value of \u03b1 , the steeper the slope of the function). Notice that, the purpose of the above two new phone accuracy functions is not to approximate the standard Levenshtein distance measure, but instead to sufficiently penalize the frame-level substitution errors of each hypothesized phone arc that may be neglected by the original raw phone accuracy function. From now on, the proposed improved MPE training algorithms, by adopting either one of the two frame-level phone accuracy functions defined in Eqs. 11and 13, are referred to as the maximum frame accuracy training (denoted as MFA) and the maximum Sigmoid-based frame accuracy training (denoted as MSFA), respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Accuracy Functions", "sec_num": "3." }, { "text": "In recent years, there also has been considerable independent research on the design of new phone accuracy functions for improving MPE training [Zheng and Stolcke 2005; Gibson et al. 2006; Du et al. 2006; Povey et al. 2007] . 
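To summarize the two proposed functions, the sketch below implements Eqs. (11)-(14) under the assumption that the reference transcript is available as a per-frame phone label sequence and that arc boundaries are inclusive frame indices; the toy arc, labels, and parameter values are illustrative.

```python
import math

def frame_acc(q, ref_frame_labels, rho=0.1):
    """FrameAcc(q) of Eqs. (11)-(12): per-frame match scores, normalized by
    the arc duration.  q = (label, start_frame, end_frame), inclusive."""
    label, s, e = q
    deltas = [1.0 if ref_frame_labels[t] == label else -rho
              for t in range(s, e + 1)]
    return sum(deltas) / (e - s + 1)

def sig_frame_acc(q, ref_frame_labels, rho=0.1, alpha=0.5):
    """SigFrameAcc(q) of Eqs. (13)-(14): the same per-frame scores, squashed
    by a sigmoid into (-1, 1) instead of being duration-normalized."""
    label, s, e = q
    net = sum(1.0 if ref_frame_labels[t] == label else -rho
              for t in range(s, e + 1))
    return 2.0 / (1.0 + math.exp(-alpha * net)) - 1.0

# Reference frame labels: "a" (frames 0-9), "b" (10-19), "c" (20-29).
ref = ["a"] * 10 + ["b"] * 10 + ["c"] * 10
hyp_arc = ("b", 8, 21)   # hypothesized arc with loose time boundaries
print(frame_acc(hyp_arc, ref), sig_frame_acc(hyp_arc, ref))
```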
As one example, the minimum phone frame error (MPFE) criterion [Zheng and Stolcke 2005 ] simply counts the number of frames of the recognition hypothesis having correct phone labels in comparison to the reference transcript, which is quite similar to our proposed frame-level accuracy functions. The major differences are that MPFE gives a score of zero (but not a negative value as done by MFA and MSFA) to the frames with incorrect phone labels, and the corresponding phone accuracy value is not normalized by the phone duration or the Sigmoid function. As another example, the state-level minimum Bayes risk (sMBR) criterion [Gibson et al. 2006; Povey et al. 2007 ] uses the HMM state-level information to fulfill label matching. As still another example, the minimum divergence (MD) criterion [Jun Du et al. 2006] defines phone accuracy on the basis of the Kullback-Leibler divergence between the corresponding acoustic models of the reference and hypothesized phone labels. More detailed elucidation and comparison of these alternative phone accuracy functions can be found in [Povey et al. 2007] .", "cite_spans": [ { "start": 144, "end": 168, "text": "[Zheng and Stolcke 2005;", "ref_id": "BIBREF29" }, { "start": 169, "end": 188, "text": "Gibson et al. 2006;", "ref_id": "BIBREF5" }, { "start": 189, "end": 204, "text": "Du et al. 2006;", "ref_id": "BIBREF4" }, { "start": 205, "end": 223, "text": "Povey et al. 2007]", "ref_id": "BIBREF21" }, { "start": 289, "end": 312, "text": "[Zheng and Stolcke 2005", "ref_id": "BIBREF29" }, { "start": 854, "end": 874, "text": "[Gibson et al. 2006;", "ref_id": "BIBREF5" }, { "start": 875, "end": 892, "text": "Povey et al. 2007", "ref_id": "BIBREF21" }, { "start": 1023, "end": 1043, "text": "[Jun Du et al. 2006]", "ref_id": null }, { "start": 1308, "end": 1327, "text": "[Povey et al. 2007]", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "New Accuracy Functions", "sec_num": "3." }, { "text": "Although a discriminative training approach using the finite state transducer, retaining the corresponding recognition hypotheses of the training acoustic vector sequence, for calculating the exact Levenshtein distance based word error rate was also proposed recently [Heigold et al. 2005] , no improved results but only degraded results were demonstrated by the approach.", "cite_spans": [ { "start": 268, "end": 289, "text": "[Heigold et al. 2005]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "New Accuracy Functions", "sec_num": "3." }, { "text": "In this section, we elucidate the theoretical roots of frame-level training data selection using the entropy information, as well as two variant implementations to achieve this goal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Frame-Level Training Data Selection", "sec_num": "4." }, { "text": "We propose the use of the entropy information to select the frame-level training statistics for the MPE training. The normalized entropy of a training frame sample i can be defined as [Liu et al. 2007b] :", "cite_spans": [ { "start": 184, "end": 202, "text": "[Liu et al. 
2007b]", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Normalized Frame-Level Entropy", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 1 ( ) ( ) log , log ( ) lat k k k q m k m q q t qm E t t N t \u03b3 \u03b3 \u2208 \u2208 = \u22c5 \u2211 \u2211 W", "eq_num": "(15)" } ], "section": "Normalized Frame-Level Entropy", "sec_num": "4.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalized Frame-Level Entropy", "sec_num": "4.1" }, { "text": ") (t k qm \u03b3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalized Frame-Level Entropy", "sec_num": "4.1" }, { "text": "is the posterior probability for mixture component m of phone arc q at frame t , which is calculated from the word lattice; t N is the number of Gaussian mixtures which have nonzero posterior probabilities at frame t ( 0 ) ( > t k qm \u03b3 ); and the value of ( ) k E t will range from zero to one [Misra and Bourlard 2005] . Here, we use a hypothetical example of binary classification to illustrate the relationship between the decision boundary and the normalized entropy. As shown in Figure 3 , the decision boundary constructed based on the posterior probability of the class 1 C can discriminate most of the samples belonging to 1 C (depicted as squares) from those belonging to 2 C (depicted as circles). In general, the decision boundary is at the value of 0.5 for the posterior probability of 1 C and the class posterior probabilities can be used to calculate the normalized entropies of the samples. Thus, the samples (solid circles or squares) located near the decision boundary will have normalized entropies close to one, while those (hollow circles or squares) located far away from the decision boundary will have normalized entropies close to zero. For the speech recognition task, two extreme cases are considered as follows. First, if the normalized entropy measure of a frame sample i is close to zero, it means that the corresponding frame-level posterior probabilities will be dominated by one specific mixture component. From the viewpoint of frame sample classification using posterior probabilities, the difference of probabilities between the true (correct) mixture component and the competing (incorrect) ones is larger. That is, the frame sample i is actually located far from the decision boundary. On the other hand, if the normalized entropy measure is close to one, it means that the posterior probabilities of mixture components tend to be uniformly distributed. Then, the frame sample i is instead located near the decision boundary. In a word, the normalized entropy measure to some extent can define a kind of margin for the selection of useful training frame samples. Therefore, we may take advantage of the normalized entropy measure to make the MPE training focus much more on the training statistics of those frame samples that center near the decision boundary for better sample discrimination and model generalization [Jiang et al. 2006; Li et al. 2006] .", "cite_spans": [ { "start": 294, "end": 319, "text": "[Misra and Bourlard 2005]", "ref_id": "BIBREF18" }, { "start": 2355, "end": 2374, "text": "[Jiang et al. 2006;", "ref_id": "BIBREF10" }, { "start": 2375, "end": 2390, "text": "Li et al. 
2006]", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 484, "end": 492, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Normalized Frame-Level Entropy", "sec_num": "4.1" }, { "text": "A straightforward implementation of frame-level training data selection is to define a threshold of the normalized entropy measure then completely discard the training statistics of those frame samples whose normalized entropy values fall below it. This can be viewed as a \"hard version\" of data selection. Figure 4 shows a histogram describing the relationship between the normalized entropy and the number of training speech frame samples used in this study. For example, the leftmost vertical bar denotes the number of training speech frame samples whose normalized entropy values are in the range of 0 to 0.05. The large number of frame samples belonging to the leftmost vertical bar also reveals that most of the training frame samples in fact are located far from the decision boundary; thus, they can be discarded if the threshold is appropriately set.", "cite_spans": [], "ref_spans": [ { "start": 307, "end": 315, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Hard Version of Frame Sample Selection (HS)", "sec_num": "4.2" }, { "text": "We also attempt an alternative implementation (or a \"soft version\") of frame-level training data selection to emphasize the training statistics of those frame samples that are located near the decision boundary according to their normalized entropy values using the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Soft Version of Frame Sample Selection (SS)", "sec_num": "4.3" }, { "text": "(1 ( )),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Soft Version of Frame Sample Selection (SS)", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "MPE MPE k k q q k E t \u03b3 \u03b3 \u03c9 \u2032 = \u22c5 + \u22c5", "eq_num": "(16)" } ], "section": "Soft Version of Frame Sample Selection (SS)", "sec_num": "4.3" }, { "text": "where \u03c9 is tunable positive parameter whose value ranges from 0 to 1. As indicated by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Soft Version of Frame Sample Selection (SS)", "sec_num": "4.3" }, { "text": "Equation 16, if the normalized entropy value ( ) k E t of a training frame sample i is higher, then its corresponding training statistics will be emphasized. On the contrary, for a frame sample with a lower entropy value, its training statistics will be deemphasized when compared to those of the frame samples with higher normalized entropy values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Soft Version of Frame Sample Selection (SS)", "sec_num": "4.3" }, { "text": "In this section, we describe the speech and text data, as well as the large vocabulary continuous speech recognition system, employed in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "5." }, { "text": "The speech corpus consisted of approximately 198 hours of MATBN (Mandarin Across Taiwan Broadcast News) Mandarin television news content [Wang et al. 2005] , which was collected by Academia Sinica and the Public Television Service Foundation of Taiwan between November 2001 and April 2003. 
All the speech materials were manually segmented into separate stories, each of which was spoken by one news anchor, several field reporters, and interviewees. Some stories contained background noise, speech, and music. All 198 hours of speech data were accompanied by corresponding orthographic transcripts, of which about 25 hours of gender-balanced speech data of the field reporters collected from November 2001 to December 2002 was used to bootstrap the acoustic training. The training set consisted of more than five hundred thousand characters, and the average length of a word was 1.65 characters. Another set of about 1.5 hours of speech data of the filed reporters (more than twenty-six thousand characters) collected during 2003 was reserved for testing. The training and test data overlapped in speakers; roughly 30% of the test data was spoken by the field reporters whose previous recordings were also included in the 25-hour training data.", "cite_spans": [ { "start": 137, "end": 155, "text": "[Wang et al. 2005]", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Corpus and Acoustic Model Training", "sec_num": "5.1" }, { "text": "The acoustic models chosen for speech recognition were a silence model, 112 right-context-dependent INITIAL models, and 38 context-independent FINAL models. Each INITIAL model was represented by an HMM with 3 states, while each FINAL model had 4 states. Note that gender-independent models were used. The Gaussian mixture number per state ranged from 2 to 128, depending on the amount of training data. The acoustic models were first trained using the ML criterion and the Baum-Welch update formulas. The MPE-based acoustic model training was further applied to acoustic models pre-trained by the ML criterion. Both silence and short-pause labels were involved in the calculation of the raw ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Corpus and Acoustic Model Training", "sec_num": "5.1" }, { "text": "Initially, the recognition lexicon consisted of 67K words. A set of about 5K compound words was automatically derived using forward and backward bigram statistics [Saon and Padmanabhan 2001] and added to the lexicon to form a new lexicon of 72K words. The background language models used in this experiment were trigram and bigram models, which were estimated according to the ML criterion using a text corpus consisting of 170 million Chinese characters collected from the Central News Agency (CNA) in 2001 and 2002 (the Chinese Gigaword Corpus released by LDC). In implementation, the n-gram language models were trained with the SRI Language Modeling Toolkit [Stolcke 2000] .", "cite_spans": [ { "start": 163, "end": 190, "text": "[Saon and Padmanabhan 2001]", "ref_id": "BIBREF25" }, { "start": 662, "end": 676, "text": "[Stolcke 2000]", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Lexicon and N-gram Language Modeling", "sec_num": "5.2" }, { "text": "The front-end processing for speech recognition was performed with the HLDA-based (Heteroscedastic Linear Discriminant Analysis) data-driven Mel-frequency feature extraction approach [Kumar 1997 ] then processed by MLLT (Maximum Likelihood Linear Transformation) transformation [Saon et al. 2000] for feature de-correlation. 
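For readers unfamiliar with this kind of front-end, a rough sketch of how such features can be produced from static Mel-cepstra is shown below; the splicing window, the dimensionalities, and the (randomly generated) HLDA/MLLT matrices are purely illustrative, since the actual transforms are estimated from the training data as in [Kumar 1997; Saon et al. 2000].

```python
import numpy as np

def apply_front_end(static_feats, hlda, mllt, context=4):
    """Splice +/- context static frames, project with an HLDA matrix,
    then de-correlate with a square MLLT transform.

    static_feats : (T, d) Mel-cepstral features
    hlda         : (p, d * (2*context + 1)) projection (estimated offline)
    mllt         : (p, p) square transform (estimated offline)
    """
    T, d = static_feats.shape
    padded = np.pad(static_feats, ((context, context), (0, 0)), mode="edge")
    spliced = np.stack([padded[t:t + 2 * context + 1].reshape(-1)
                        for t in range(T)])               # (T, d*(2c+1))
    projected = spliced @ hlda.T                           # (T, p)
    return projected @ mllt.T                              # (T, p)

# Toy dimensions: 13-dim static features, 9-frame splice, 39-dim output.
rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 13))
out = apply_front_end(feats, rng.standard_normal((39, 13 * 9)), np.eye(39))
print(out.shape)  # (100, 39)
```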
In addition, utterance-based feature mean subtraction and variance normalization were applied to all the training and test speech.", "cite_spans": [ { "start": 183, "end": 194, "text": "[Kumar 1997", "ref_id": "BIBREF11" }, { "start": 278, "end": 296, "text": "[Saon et al. 2000]", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Recognition System", "sec_num": "5.3" }, { "text": "The speech recognizer was implemented with a left-to-right frame-synchronous Viterbi tree-copy search and a lexical prefix tree of the lexicon [Aubert 2002] . For each speech frame, a beam pruning technique, which considered the decoding scores of path hypotheses together with their corresponding unigram language model look-ahead scores and syllable-level acoustic look-ahead scores [Chen et al. 2005] , was used to select the most promising path hypotheses. Moreover, if the word hypotheses ending at each speech frame had higher scores than a predefined threshold, their associated decoding information, such as the word start and end frames, the identities of current and predecessor words, and the acoustic score, were kept to build a word lattice for further language model rescoring. We used the word bigram language model in the tree search procedure and the trigram language model in the word lattice rescoring procedure [Ortmanns et al. 1997] .", "cite_spans": [ { "start": 143, "end": 156, "text": "[Aubert 2002]", "ref_id": "BIBREF0" }, { "start": 385, "end": 403, "text": "[Chen et al. 2005]", "ref_id": "BIBREF2" }, { "start": 931, "end": 953, "text": "[Ortmanns et al. 1997]", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Recognition System", "sec_num": "5.3" }, { "text": "As it is known that there are no explicit marks, such as spaces or blanks, separating words in the Chinese language, the Chinese language often suffers from word tokenization problems. The performance evaluation metric used in Mandarin speech recognition usually is the character error rate (CER) rather than the word error rate (WER).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Results", "sec_num": "6." }, { "text": "The acoustic models were trained with about 25 hours of speech utterances. The MPE training started with the acoustic models trained by 10 iterations of the ML training, and used the information contained in the associated word lattices of training utterances to accumulate the necessary statistics for model training. The ML-trained acoustic models yields a CER (Chinese Character Error Rate) of 23.64%, while the standard MPE training (denoted as MPE) indeed can provide a great boost to the acoustic models initially trained by ML consistently at all training iterations, as the curve \"MPE\" depicted in Figure 4 or the results shown in the leftmost column of Table 1.", "cite_spans": [], "ref_spans": [ { "start": 606, "end": 614, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Baseline System", "sec_num": "6.1" }, { "text": "In the following experiments, for fair comparison between our proposed methods and the baseline MPE training, the smoothing constant (i.e., the \u03c4 value of I-smoothing) [Povey and Woodland 2002; Povey 2004; Kuo et al. 2006 ] is set to be the same as that used in the baseline MPE training. It is known that this smoothing constant can be regarded as a kind of prior information which forces the HMM parameters estimated by the MPE training to center around that estimated by the ML training [Povey et al. 
2007] .", "cite_spans": [ { "start": 168, "end": 193, "text": "[Povey and Woodland 2002;", "ref_id": "BIBREF22" }, { "start": 194, "end": 205, "text": "Povey 2004;", "ref_id": "BIBREF20" }, { "start": 206, "end": 221, "text": "Kuo et al. 2006", "ref_id": "BIBREF12" }, { "start": 490, "end": 509, "text": "[Povey et al. 2007]", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline System", "sec_num": "6.1" }, { "text": "We first evaluate the performance of our proposed two frame-level phone accuracy functions, FA (corresponding to the MFA training) and SFA (corresponding to the MSFA training), as previously described in Section 3. As can be seen from Figure 5 , both MFA and MSFA training using two variant phone accuracy functions (MFA and MSFA) . outperform the standard MPE at higher training iterations, and MSFA is slightly better than MFA, though the difference between them is negligible at lower training iterations. On the other hand, we have observed from a series of experiments that, using the two variants of frame-level phone accuracy functions with different settings of the value of their parameter \u03c1 will give different penalties for insertions and deletions. For example, if the value of \u03c1 is set to be larger, insertion errors will be discouraged; while, if the value of \u03c1 is set to be smaller, the number of deletion errors will be decreased. More concretely, we can trade off insertion and deletion errors by appropriately adjusting the penalty parameter \u03c1 . Table 1 shows the results obtained for different parameter settings of the two variant phone accuracy functions, where the optimum setting for MFA is \u03c1 =0.5, while for MSFA is \u03c1 =0.1 and \u03b1 = 0.5. MFA ( \u03c1 =0.5) trained with 10 iterations (20.46%) leads to an absolute CER reduction of 0.31% over MPE trained with the same iterations (20.77%), which is equivalent to a condition where about 81 of the character recognition errors have been corrected. A significance test based on the standard NIST MAPSSWE [Gillick and Cox 1989 ] also indicates the statistical significance of such an improvement (p-value <0.001).", "cite_spans": [ { "start": 1568, "end": 1589, "text": "[Gillick and Cox 1989", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 235, "end": 243, "text": "Figure 5", "ref_id": "FIGREF6" }, { "start": 264, "end": 330, "text": "training using two variant phone accuracy functions (MFA and MSFA)", "ref_id": null }, { "start": 1064, "end": 1071, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experiments on Proposed Frame-level Phone Accuracy Functions", "sec_num": "6.2" }, { "text": "Iterations MPE MFA \u03c1 =0.1 MFA \u03c1 =0.3 MFA \u03c1 =0.5 MFA \u03c1 =0.8 MSFA \u03c1 =0.1 = \u03b1 0.5 MSFA \u03c1 =0.5 = \u03b1 0.5 MSFA \u03c1 =0.1 = \u03b1 1 MSFA \u03c1 =0.5 = \u03b1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1. CER results (%) obtained for different parameter settings of the MPE", "sec_num": null }, { "text": "We then compare our proposed new frame-level phone accuracy function (SFA) with the other alternative modifications (i.e., MPFE, sMBR and MD mentioned in Section 3) to the phone accuracy function for the MPE-based discriminative training. The corresponding recognition results are shown in Figure 5 . 
As mentioned earlier, for MPE training, the smoothing constant (i.e., the \u03c4 value of I-smoothing) is a very important factor and should be properly scaled on the basis of the ML training statistics [Povey 2004] . Owing to the different dynamic ranges of the phone accuracy values of the other three modified phone accuracy functions, the smoothing constant is suggested to be scaled accordingly when different training criteria (or phone accuracy functions) are being used. For example, the dynamic range of the phone accuracy values of MPFE training is apparently far larger than that of the standard MPE training, so the smoothing constant for the MPFE training should be empirically set to be larger than that of the standard MPE training.", "cite_spans": [ { "start": 499, "end": 511, "text": "[Povey 2004]", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 290, "end": 298, "text": "Figure 5", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Comparison of Proposed and Other Phone Accuracy Functions", "sec_num": "6.3" }, { "text": "As evidenced by Figure 6 , the recognition results of MD training are slightly worse than the standard MPE training for most of the training iterations. One possible reason for this is that the MD objective function is not well optimized, since the statistics for computing the KL divergence between any two HMM state-level probability distributions are fixed during the training process. Similar observations were also made in [Povey et al. 2007] . Furthermore, the corresponding results of the MPFE and sMBR training are also worse than those of the standard MPE training, which could be analyzed as follows. The statistics phone arc over-weighted even though its corresponding posterior probability is low. In contrast, the performance of our proposed method (MSFA) outperforms standard MPE training, as well as the other three modifications. This is because MSFA has a similar dynamic range of phone accuracy values to that of the standard MPE training, and all types of recognition errors (insertion, substitution, and deletion) are properly considered during the training process, unlike in standard MPE training. Actually, if the penalty \u03c1 is set to zero, MFA and MSFA are quite analogous to MPFE. However, the phone accuracy values of MFA and MSFA are further normalized by the frame number of a phone arc and the Sigmoid function, respectively.", "cite_spans": [ { "start": 428, "end": 447, "text": "[Povey et al. 2007]", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 16, "end": 24, "text": "Figure 6", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Comparison of Proposed and Other Phone Accuracy Functions", "sec_num": "6.3" }, { "text": "Moreover, we evaluated the effectiveness of our proposed frame-level normalized entropy-based training data selection approaches for MPE training. The best recognition results for the two variants, i.e., the hard (HS) and soft (SS) versions of frame-level data selection, are shown in Table 2 (MPE+HS and MPE+SS, respectively). The corresponding threshold value Thr for MPE+HS was empirically set to 0.05, while the weighting parameter \u03c9 for MPE+SS was empirically set to 1. It is worth mentioning that when threshold value Thr for MPE+HS is set to 0.05, the corresponding number of training frame samples used is about 4 million, which is 45.88% of the total training frame samples. 
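As a reference point for the role of the smoothing constant discussed above, the sketch below shows the generic EBW re-estimation of a single (scalar) Gaussian with I-smoothing, following the general form in [Povey and Woodland 2002; Povey 2004]; the accumulator values, D, and tau are illustrative only, and the real update is applied per dimension of every Gaussian in the model.

```python
def ebw_update(num, den, ml, mu, var, D=200.0, tau=50.0):
    """One Gaussian mean/variance update in the generic EBW + I-smoothing
    form used for MPE-style training (cf. Eqs. (2)-(8) and the tau discussion).

    num, den, ml : dicts with occupancy 'g', first-order 'x' and second-order
                   'x2' statistics for the numerator, denominator and ML
                   accumulators of this Gaussian.
    mu, var      : current mean and variance (scalars, for simplicity).
    """
    # I-smoothing: back off toward the ML estimate by adding tau "points"
    # of ML statistics to the numerator accumulators.
    g_num = num['g'] + tau
    x_num = num['x'] + tau * (ml['x'] / ml['g'])
    x2_num = num['x2'] + tau * (ml['x2'] / ml['g'])

    denom = g_num - den['g'] + D
    new_mu = (x_num - den['x'] + D * mu) / denom
    new_var = (x2_num - den['x2'] + D * (var + mu * mu)) / denom - new_mu ** 2
    return new_mu, new_var

num = {'g': 120.0, 'x': 150.0, 'x2': 260.0}
den = {'g': 90.0, 'x': 100.0, 'x2': 180.0}
ml = {'g': 300.0, 'x': 360.0, 'x2': 600.0}
print(ebw_update(num, den, ml, mu=1.2, var=0.9))
```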
Moreover, for MPE+HS, the frame samples being selected for the MPE training might be different from iteration to iteration, since the acoustic models will be updated after each training iteration, which will make the entropy value calculated for a given frame sample different from that calculated in the previous iteration. As evidenced by Table 2 , data selection (either MPE+HS or MPE+SS) will improve the performance of MPE when the acoustic models are trained at the lower iterations, and achieve comparable results to that of MPE trained at higher iterations. This means that data selection can help reduce the time consumed in training but retain the same performance. However, when the acoustic models of the frame-level data selection method are trained at higher iterations (e.g., 9 and 10 iterations), the corresponding performance, especially for MPE+HS, will become slightly worse than the standard MPE training. One possible reason for this is that the normalized entropy value and the amount of data selected by the hard-version data selection method (MPE+HS) would decrease through the training iterations, which has the side effect of making the training to some extent suffer from the data sparseness problem that makes the acoustic models over-trained. Therefore, one of our future research directions is to study the analysis of such an effect in more detail and try to dynamically adjust the selection threshold value through the iterations.", "cite_spans": [], "ref_spans": [ { "start": 285, "end": 292, "text": "Table 2", "ref_id": "TABREF5" }, { "start": 1025, "end": 1032, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiments on Data Selection Approaches", "sec_num": "6.4" }, { "text": "On the other hand, we also apply random frame-level training sample selection to the MPE training, which randomly selects about 45% of the frame-level training samples for the MPE training at each training iteration, and the corresponding results are depicted in Table 2 (MPE+Random). The selecting capacity of our proposed frame-level data selection method can be verified again by comparison with random selection. The above results indeed justify our postulation that, with proper integration of data selection into the acoustic model training process, we can make the discriminative training algorithms focus much more on the useful training samples to achieve a better discrimination capability on the new test set.", "cite_spans": [], "ref_spans": [ { "start": 263, "end": 270, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiments on Data Selection Approaches", "sec_num": "6.4" }, { "text": "Finally, we attempt to combine our proposed frame-level accuracy function and frame-level data selection. The two frame-level training data selection approaches, i.e., HS and SS, respectively, are integrated with the MSFA training. The corresponding results are shown in Table 3 . Actually, the data selection approaches are simply based on the entropy information of the Gaussian posterior probabilities of phone arcs, without taking any phone accuracy information into consideration. Thus, such a combination can be viewed as a loosely coupled approach, which to some extent would make the effect of the combination less pronounced. As can be seen from Table 3 , HS can considerably boost the performance of the MSFA training at lower training iterations, while SS only demonstrates marginal improvement. 
We also investigate the combination of HS and SS for the MSFA training, which is achieved using SS to emphasize or deemphasize the training samples selected by HS. Such a combination also can provide additional performance gains (at lower training iterations) over that obtained by using either HS or SS alone. ", "cite_spans": [], "ref_spans": [ { "start": 271, "end": 278, "text": "Table 3", "ref_id": "TABREF7" }, { "start": 655, "end": 662, "text": "Table 3", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Experiments on Combination of Frame-level Accuracy Function and Data Selection", "sec_num": "6.5" }, { "text": "In this paper, we have the explored the use of frame-level information for improved MPE training of acoustic models for Mandarin broadcast news recognition. A new phone accuracy function directly based on the frame-level accuracy has been presented. Moreover, a novel data selection approach using the normalized frame-level entropy of Gaussian posterior probabilities has been proposed as well. Promising and encouraging results on the recognition of Mandarin broadcast news speech were demonstrated. More in-depth investigation of the proposed training data selection, as well as its integration with other discriminative acoustic model training algorithms, is also currently being undertaken. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7." } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An Overview of Decoding Techniques for Large Vocabulary Continuous Speech Recognition", "authors": [ { "first": "X", "middle": [ "L" ], "last": "Aubert", "suffix": "" } ], "year": 2002, "venue": "Computer Speech and Language", "volume": "16", "issue": "", "pages": "89--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aubert, X. L., \"An Overview of Decoding Techniques for Large Vocabulary Continuous Speech Recognition,\" Computer Speech and Language, Vol.16, 2002, pp. 89-114.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Maximum Mutual Information Estimation of Hidden Markov Model Parameters for Speech Recognition", "authors": [ { "first": "L", "middle": [ "R" ], "last": "Bahl", "suffix": "" }, { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "P", "middle": [ "V" ], "last": "De Souza", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1986, "venue": "Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing", "volume": "", "issue": "", "pages": "49--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bahl, L. R., P. F. Brown, P. V. de Souza, and R. L. Mercer, \"Maximum Mutual Information Estimation of Hidden Markov Model Parameters for Speech Recognition,\" in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, 1986, Tokyo, Japan, pp. 49-52.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Lightly Supervised and Data-Driven Approaches to Mandarin Broadcast News Transcription", "authors": [ { "first": "B", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [ "W" ], "last": "Kuo", "suffix": "" }, { "first": "W", "middle": [ "H" ], "last": "Tsai", "suffix": "" } ], "year": 2005, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "10", "issue": "1", "pages": "1--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, B., J.W. Kuo, and W.H. 
Tsai, \"Lightly Supervised and Data-Driven Approaches to Mandarin Broadcast News Transcription,\" International Journal of Computational Linguistics and Chinese Language Processing, 10(1), 2005, pp. 1-18.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Minimum Divergence Based Discriminative Training", "authors": [ { "first": "J", "middle": [], "last": "Du", "suffix": "" }, { "first": "P", "middle": [], "last": "Liu", "suffix": "" }, { "first": "F", "middle": [ "K" ], "last": "Soong", "suffix": "" }, { "first": "J", "middle": [ "L" ], "last": "Zhou", "suffix": "" }, { "first": "R", "middle": [ "H" ], "last": "Wang", "suffix": "" } ], "year": 2006, "venue": "Proc. Int. Conf. Spoken Language Processing", "volume": "", "issue": "", "pages": "2410--2413", "other_ids": {}, "num": null, "urls": [], "raw_text": "Du, J., P. Liu, F. K. Soong, J. L. Zhou, and R. H. Wang, \"Minimum Divergence Based Discriminative Training\", in Proc. Int. Conf. Spoken Language Processing, 2006, Pittsburgh, USA, pp. 2410-2413.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Hypothesis Spaces for Minimum Bayes Risk Training in Large Vocabulary Speech Recognition", "authors": [ { "first": "M", "middle": [], "last": "Gibson", "suffix": "" }, { "first": "T", "middle": [], "last": "Hain", "suffix": "" } ], "year": 2006, "venue": "Proc. Int. Conf. Spoken Language Processing", "volume": "", "issue": "", "pages": "2406--2409", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gibson, M., and T. Hain, \"Hypothesis Spaces for Minimum Bayes Risk Training in Large Vocabulary Speech Recognition\", in Proc. Int. Conf. Spoken Language Processing, 2006, Pittsburgh, USA , pp. 2406-2409.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Some Statistical Issues in the Comparison of Speech Recognition Algorithms", "authors": [ { "first": "L", "middle": [], "last": "Gillick", "suffix": "" }, { "first": "S", "middle": [], "last": "Cox", "suffix": "" } ], "year": 1989, "venue": "Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing", "volume": "", "issue": "", "pages": "532--535", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gillick, L., and S. Cox, \"Some Statistical Issues in the Comparison of Speech Recognition Algorithms,\" in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, 1989, Glasgow, UK, pp. 532-535.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An Efficient Image Similarity Measure based on Approximations of KL-Divergence between Two Gaussian Mixtures", "authors": [ { "first": "J", "middle": [], "last": "Goldberger", "suffix": "" } ], "year": 2003, "venue": "Proc. International Conference on Computer Vision", "volume": "", "issue": "", "pages": "370--377", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goldberger, J., \"An Efficient Image Similarity Measure based on Approximations of KL-Divergence between Two Gaussian Mixtures\", in Proc. International Conference on Computer Vision, 2003 , Nice, France, pp. 370-377.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Generalization of the Baum Algorithm to Rational Objective Functions", "authors": [ { "first": "P", "middle": [ "S" ], "last": "Gopalakrishnan", "suffix": "" }, { "first": "D", "middle": [], "last": "Kanevsky", "suffix": "" }, { "first": "A", "middle": [], "last": "Nadas", "suffix": "" }, { "first": "D", "middle": [], "last": "Nahamoo", "suffix": "" } ], "year": 1989, "venue": "Proc. IEEE Int. Conf. 
Acoustics, Speech, Signal Processing", "volume": "", "issue": "", "pages": "631--634", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gopalakrishnan, P.S., D. Kanevsky, A. Nadas, and D. Nahamoo, \"A Generalization of the Baum Algorithm to Rational Objective Functions,\" in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, 1989, Glasgow, UK, pp. 631-634.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Minimum Exact Word Error Training", "authors": [ { "first": "G", "middle": [], "last": "Heigold", "suffix": "" }, { "first": "W", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "R", "middle": [], "last": "Schluter", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2005, "venue": "Proc. IEEE workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "186--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heigold, G., W. Macherey, R. Schluter, and H. Ney, \"Minimum Exact Word Error Training,\" in Proc. IEEE workshop on Automatic Speech Recognition and Understanding, 2005, Cancun, Mexico, pp. 186-190.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Large Margin Hidden Markov Models for Speech Recognition", "authors": [ { "first": "H", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "X", "middle": [], "last": "Li", "suffix": "" }, { "first": "C", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2006, "venue": "IEEE Transactions on Audio, Speech, and Language Processing", "volume": "14", "issue": "5", "pages": "1584--1595", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang, H., X. Li, and C. Liu, \"Large Margin Hidden Markov Models for Speech Recognition,\" IEEE Transactions on Audio, Speech, and Language Processing, 14(5), 2006, pp. 1584-1595.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Investigation of Silicon-Auditory Models and Generalization of Linear Discriminant Analysis for Improved Speech Recognition", "authors": [ { "first": "N", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kumar, N., \"Investigation of Silicon-Auditory Models and Generalization of Linear Discriminant Analysis for Improved Speech Recognition,\" Ph.D. Thesis, John Hopkins University, 1997.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An Empirical Study of Word Error Minimization Approaches for Mandarin Large Vocabulary Speech Recognition", "authors": [ { "first": "J", "middle": [ "W" ], "last": "Kuo", "suffix": "" }, { "first": "S", "middle": [ "H" ], "last": "Liu", "suffix": "" }, { "first": "H", "middle": [ "M" ], "last": "Wang", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2006, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "11", "issue": "3", "pages": "201--222", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuo, J.W., S. H. Liu, H.M. Wang, and B. Chen, \"An Empirical Study of Word Error Minimization Approaches for Mandarin Large Vocabulary Speech Recognition,\" International Journal of Computational Linguistics and Chinese Language Processing, 11(3), 2006, pp. 
201-222.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Soft Margin Estimation of Hidden Markov Model Parameters", "authors": [ { "first": "J", "middle": [], "last": "Li", "suffix": "" }, { "first": "M", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "C", "middle": [ "H" ], "last": "Lee", "suffix": "" } ], "year": 2006, "venue": "Proc. Int. Conf. Spoken Language Processing", "volume": "", "issue": "", "pages": "2422--2425", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, J., M. Yuan, and C. H. Lee, \"Soft Margin Estimation of Hidden Markov Model Parameters,\" in Proc. Int. Conf. Spoken Language Processing, 2006, Pittsburgh, USA, pp. 2422-2425.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improved MPE Based Discriminative Training of Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition", "authors": [ { "first": "S", "middle": [ "H" ], "last": "Liu", "suffix": "" }, { "first": "F", "middle": [ "H" ], "last": "Chu", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2007, "venue": "Proc. ROCLING XIX: Conference on Computational Linguistics and Speech Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, S.H., F.H. Chu, and B. Chen, \"Improved MPE Based Discriminative Training of Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition,\" in Proc. ROCLING XIX: Conference on Computational Linguistics and Speech Processing, 2007a.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Investigating Data Selection for Minimum Phone Error Training of Acoustic Models", "authors": [ { "first": "S", "middle": [ "H" ], "last": "Liu", "suffix": "" }, { "first": "F", "middle": [ "H" ], "last": "Chu", "suffix": "" }, { "first": "S", "middle": [ "H" ], "last": "Lin", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2007, "venue": "Proc. IEEE International Conference on Multimedia & Expo", "volume": "", "issue": "", "pages": "348--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, S.H., F.H. Chu, S.H. Lin, and B. Chen, \"Investigating Data Selection for Minimum Phone Error Training of Acoustic Models,\" in Proc. IEEE International Conference on Multimedia & Expo, 2007b, Beijing, China, pp. 348-351.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improved Minimum Phone Error based Discriminative Training of 361", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Improved Minimum Phone Error based Discriminative Training of 361", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Spectral Entropy Feature in Full-Combination Multi-Stream for Robust ASR", "authors": [ { "first": "H", "middle": [], "last": "Misra", "suffix": "" }, { "first": "H", "middle": [], "last": "Bourlard", "suffix": "" } ], "year": 2005, "venue": "Proc. European Conf. Speech Communication and Technology", "volume": "", "issue": "", "pages": "2633--2636", "other_ids": {}, "num": null, "urls": [], "raw_text": "Misra, H., and H. 
Bourlard, \"Spectral Entropy Feature in Full-Combination Multi-Stream for Robust ASR,\" in Proc. European Conf. Speech Communication and Technology, 2005, Lisbon, Portugal, pp. 2633-2636.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A Word Graph Algorithm for Large Vocabulary Continuous Speech Recognition", "authors": [ { "first": "S", "middle": [], "last": "Ortmanns", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "X", "middle": [], "last": "Aubert", "suffix": "" } ], "year": 1997, "venue": "Computer Speech and Language", "volume": "11", "issue": "", "pages": "43--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ortmanns, S., H. Ney, and X. Aubert, \"A Word Graph Algorithm for Large Vocabulary Continuous Speech Recognition,\" Computer Speech and Language, 11, 1997, pp. 43-72.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Discriminative Training for Large Vocabulary Speech Recognition", "authors": [ { "first": "D", "middle": [], "last": "Povey", "suffix": "" } ], "year": 2004, "venue": "Ph.D Dissertation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Povey, D, \"Discriminative Training for Large Vocabulary Speech Recognition,\" Ph.D Dissertation, Peterhouse, University of Cambridge, July 2004.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Evaluation of Proposed Modifications to MPE for Large Scale Discriminative Training", "authors": [ { "first": "D", "middle": [], "last": "Povey", "suffix": "" }, { "first": "B", "middle": [], "last": "Kingsbury", "suffix": "" } ], "year": 2007, "venue": "Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing", "volume": "", "issue": "", "pages": "321--324", "other_ids": {}, "num": null, "urls": [], "raw_text": "Povey, D., and B. Kingsbury, \"Evaluation of Proposed Modifications to MPE for Large Scale Discriminative Training,\" in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, 2007, Hawaii, USA, pp. 321-324.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Minimum Phone Error and I-smoothing for Improved Discriminative Training", "authors": [ { "first": "D", "middle": [], "last": "Povey", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Woodland", "suffix": "" } ], "year": 2002, "venue": "Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing", "volume": "", "issue": "", "pages": "105--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Povey, D., and P. C. Woodland, \"Minimum Phone Error and I-smoothing for Improved Discriminative Training,\" in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, 2002, Florida, USA, pp. 105-108.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", "authors": [ { "first": "L", "middle": [], "last": "Rabiner", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the IEEE", "volume": "77", "issue": "2", "pages": "257--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rabiner, L., \"A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition,\" Proceedings of the IEEE, 77(2), 1989, pp. 
257-286.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Maximum likelihood discriminant feature spaces", "authors": [ { "first": "G", "middle": [], "last": "Saon", "suffix": "" }, { "first": "M", "middle": [], "last": "Padmanabhan", "suffix": "" }, { "first": "R", "middle": [], "last": "Gopinath", "suffix": "" }, { "first": "S", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2000, "venue": "Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing", "volume": "", "issue": "", "pages": "1129--1132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saon, G., M. Padmanabhan, R. Gopinath, and S. Chen, \"Maximum likelihood discriminant feature spaces,\" in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, 2000, Istanbul, Turkey, pp. 1129-1132.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Data-Driven Approach to Designing Compound Words for Continuous Speech Recognition", "authors": [ { "first": "G", "middle": [], "last": "Saon", "suffix": "" }, { "first": "M", "middle": [], "last": "Padmanabhan", "suffix": "" } ], "year": 2001, "venue": "IEEE Trans. on Speech And Audio Processing", "volume": "9", "issue": "4", "pages": "327--332", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saon, G., and M. Padmanabhan, \"Data-Driven Approach to Designing Compound Words for Continuous Speech Recognition,\" IEEE Trans. on Speech And Audio Processing, 9(4), 2001, pp. 327-332.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "SRI language Modeling Toolkit", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, A., SRI language Modeling Toolkit, version 1.3.3, http://www.speech.sri.com/projects/srilm/, 2000.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "MATBN: A Mandarin Chinese Broadcast News Corpus", "authors": [ { "first": "H", "middle": [ "M" ], "last": "Wang", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [ "W" ], "last": "Kuo", "suffix": "" }, { "first": "S", "middle": [ "S" ], "last": "Cheng", "suffix": "" } ], "year": 2005, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "10", "issue": "2", "pages": "219--236", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, H.M., B. Chen, J.W. Kuo, and S.S. Cheng, \"MATBN: A Mandarin Chinese Broadcast News Corpus,\" International Journal of Computational Linguistics and Chinese Language Processing, 10(2), 2005, pp. 219-236.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Explicit Word Error Minimization Using Word Hypothesis Posterior Probabilities", "authors": [ { "first": "F", "middle": [], "last": "Wessel", "suffix": "" }, { "first": "R", "middle": [], "last": "Schluter", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2001, "venue": "Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing", "volume": "", "issue": "", "pages": "33--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wessel, F., R. Schluter, and H. Ney, \"Explicit Word Error Minimization Using Word Hypothesis Posterior Probabilities,\" in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, 2001, Salt Lake City, USA, pp. 
33-36.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Improved Discriminative Training using Phone Lattices", "authors": [ { "first": "J", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2005, "venue": "Proc. European Conf. Speech Communication and Technology", "volume": "", "issue": "", "pages": "2125--2128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zheng, J., and A. Stolcke, \"Improved Discriminative Training using Phone Lattices,\" in Proc. European Conf. Speech Communication and Technology, 2005, Lisbon, Portugal, pp. 2125-2128.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Minimum Phone Error based Discriminative Training of 345 Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition" }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "average phone accuracy over all hypothesized word sequences in the word lattice; k q c is the expected phone accuracy over all hypothesized word sequences containing a phone arc q ; ( ) t o d is the observation vector component at frame t ; q s and q e are the start and end times of phone arc q ; k q \u03b3 the posterior probability for phone arc q of utterance k ; ) (t k qm \u03b3 is the posterior probability for mixture component m of phone arc q of utterance k at frame t ; training statistics for mixture component m of phone arc q , the mean and variance estimated in the previous iteration; and D is a constant used to ensure positive variance values. On the other hand, the calculation of" }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "An illustration of the frame-level accuracy. The shaded box indicates where the frame-level errors occur. Improved Minimum Phone Error based Discriminative Training of 349 Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition" }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "A hypothetical example of binary classification illustrating the relationship between the decision boundary and the normalized entropy.Improved Minimum Phone Error based Discriminative Training of 351 Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition" }, "FIGREF4": { "type_str": "figure", "num": null, "uris": null, "text": "A plot of the relationship between the normalized entropy and the number of training speech frame samples." }, "FIGREF5": { "type_str": "figure", "num": null, "uris": null, "text": "Minimum Phone Error based Discriminative Training of 353 Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition phone accuracy of the word sequence hypotheses for the MPE training." }, "FIGREF6": { "type_str": "figure", "num": null, "uris": null, "text": "CER results (%) of two new phone accuracy functions in comparison with the standard MPE training." }, "FIGREF7": { "type_str": "figure", "num": null, "uris": null, "text": "on two parts (cf. Eq. (8)). One is the posterior probability k q \u03b3 of a phone arc q , while the other is the difference between the expected phone accuracy k q c over all hypothesized phone sequences containing q and the average phone accuracy k avg c over all hypothesized word sequences in the word lattice (i.e., k k q avg c c \u2212). 
However, due to the larger dynamic range of phone accuracy values for the MPFE and the sMBR training, the resulting value of this difference (either positive or negative) would make the frame-level statistics of a" }, "FIGREF8": { "type_str": "figure", "num": null, "uris": null, "text": "CER results (%) of the MPE training and various modifications using different phone accuracy functions." }, "TABREF0": { "html": null, "text": "to update the mean $\mu_{hmd}$", "type_str": "table", "num": null, "content": "
and variance $\sigma_{hmd}^{2}$ for each dimension $d$ of a Gaussian mixture component $m$ of a multi-state (or single-state) HMM $h$, using the following equations:
$$\hat{\mu}_{hmd}=\frac{\theta_{hmd}^{num}(O)-\theta_{hmd}^{den}(O)+D\,\mu_{hmd}}{\gamma_{hm}^{num}-\gamma_{hm}^{den}+D},\quad(2)$$
$$\hat{\sigma}_{hmd}^{2}=\frac{\theta_{hmd}^{num}(O^{2})-\theta_{hmd}^{den}(O^{2})+D\,(\sigma_{hmd}^{2}+\mu_{hmd}^{2})}{\gamma_{hm}^{num}-\gamma_{hm}^{den}+D}-\hat{\mu}_{hmd}^{2},\quad(3)$$
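As a concrete illustration of Eqs. (2)-(3), the following Python sketch accumulates the numerator/denominator statistics from arc-level MPE quantities and then applies the mean/variance update for a single Gaussian component. It is a minimal sketch under assumed data structures (the arc attributes, dictionary layout, and function names are illustrative), not the implementation actually used in the reported experiments.

```python
import numpy as np
from collections import defaultdict

def accumulate_mpe_stats(arcs, dim):
    """Accumulate per-Gaussian numerator/denominator statistics from phone arcs.

    Each arc is assumed to expose: gamma_q (arc posterior), c_q (expected
    accuracy of paths through the arc), c_avg (lattice-average accuracy),
    frames (time indices it spans), observations[t] (feature vector o_t), and
    gaussian_posteriors[t] (dict mapping mixture id m -> gamma_qm(t)).
    """
    zero = lambda: {'o': np.zeros(dim), 'o2': np.zeros(dim), 'occ': 0.0}
    num, den = defaultdict(zero), defaultdict(zero)
    for arc in arcs:
        # Arc-level MPE weight: arcs better than the lattice average feed the
        # numerator statistics, the remaining arcs the denominator statistics.
        w = arc.gamma_q * (arc.c_q - arc.c_avg)
        target = num if w > 0 else den
        for t in arc.frames:
            o_t = np.asarray(arc.observations[t])
            for m, g in arc.gaussian_posteriors[t].items():
                target[m]['o'] += abs(w) * g * o_t        # theta_m(O)
                target[m]['o2'] += abs(w) * g * o_t ** 2  # theta_m(O^2)
                target[m]['occ'] += abs(w) * g            # occupancy gamma_m
    return num, den

def ebw_update(mu, var, num_m, den_m, D):
    """Mean/variance re-estimation of Eqs. (2)-(3) for one Gaussian component."""
    denom = num_m['occ'] - den_m['occ'] + D
    new_mu = (num_m['o'] - den_m['o'] + D * mu) / denom
    new_var = (num_m['o2'] - den_m['o2'] + D * (var + mu ** 2)) / denom - new_mu ** 2
    return new_mu, new_var
```

Arcs whose expected accuracy exceeds the lattice average contribute to the numerator statistics and the remaining arcs to the denominator statistics, following the usual MPE bookkeeping behind Eqs. (2)-(3).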
" }, "TABREF3": { "html": null, "text": "", "type_str": "table", "num": null, "content": "
Figure 1. An example word lattice for a Mandarin utterance, with competing word arcs such as 影響 (influence), 隱藏 (hide), 疫情 (epidemic situation), 飛蛾 (flying moth), 肺炎 (pneumonia), 一群 (a flock of), 生命 (life), 生活 (living), and SIL (silence), each spanning a time interval delimited by the boundaries T0 and t1-t12.
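The arc time boundaries in such a lattice are what a frame-level notion of phone accuracy operates on: each frame spanned by a hypothesized phone arc is scored against the reference phone aligned to that frame. A minimal Python sketch of this idea is given below; the function and variable names are hypothetical, and it is not claimed to be the exact accuracy function used in this work.

```python
def frame_level_accuracy(arc_phone, arc_start, arc_end, ref_phone_at_frame):
    """Fraction of an arc's frames whose phone label matches the reference
    phone aligned to the same frame (hypothetical frame-level accuracy)."""
    n_frames = arc_end - arc_start + 1
    matches = sum(1 for t in range(arc_start, arc_end + 1)
                  if ref_phone_at_frame[t] == arc_phone)
    return matches / n_frames
```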
" }, "TABREF5": { "html": null, "text": "", "type_str": "table", "num": null, "content": "
Iterations | MPE   | MPE+HS | MPE+SS | MPE+Random   (CER, %)
1          | 22.82 | 22.63  | 22.84  | 23.02
2          | 22.44 | 22.05  | 22.40  | 22.62
3          | 22.28 | 21.60  | 22.21  | 22.22
4          | 21.79 | 21.40  | 21.65  | 22.16
5          | 21.48 | 21.19  | 21.34  | 21.76
6          | 21.24 | 20.92  | 21.33  | 21.66
7          | 21.10 | 20.90  | 21.29  | 21.74
8          | 21.06 | 20.79  | 21.00  | 21.62
9          | 20.97 | 20.97  | 21.02  | 21.78
10         | 20.77 | 20.80  | 20.94  | 21.84
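The MPE+HS, MPE+SS, and MPE+Random columns above appear to correspond to different frame-selection strategies applied on top of MPE training, with Random denoting randomly selected frames. For the normalized-entropy-based selection of frames lying near the decision boundary, a minimal Python sketch is given below; the function names, the normalization by log M, and the thresholding rule are illustrative assumptions rather than the exact recipe used here.

```python
import numpy as np

def normalized_entropy(posteriors):
    """Entropy of a frame's Gaussian posterior distribution, normalized by
    log(M) so that the value lies in [0, 1] regardless of the number of
    Gaussians M considered at that frame."""
    p = np.asarray(posteriors, dtype=float)
    M = p.size
    if M <= 1:
        return 0.0
    nz = p[p > 0]                      # skip zero entries to avoid log(0)
    return float(-(nz * np.log(nz)).sum() / np.log(M))

def select_frames(frame_posteriors, threshold=0.5):
    """Keep frames whose normalized entropy exceeds the threshold, i.e.,
    frames whose posteriors are spread out and thus lie near the decision
    boundary (hypothetical selection rule)."""
    return [t for t, post in enumerate(frame_posteriors)
            if normalized_entropy(post) > threshold]
```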
" }, "TABREF6": { "html": null, "text": "Improved Minimum Phone Error based Discriminative Training of 359 Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition", "type_str": "table", "num": null, "content": "" }, "TABREF7": { "html": null, "text": "", "type_str": "table", "num": null, "content": "
Iterations | MPE   | MSFA  | MSFA+HS | MSFA+SS | MSFA+HS+SS   (CER, %)
1          | 22.82 | 22.88 | 22.46   | 22.75   | 22.53
2          | 22.44 | 22.37 | 21.87   | 22.25   | 21.72
3          | 22.28 | 22.06 | 21.40   | 21.83   | 21.45
4          | 21.79 | 21.52 | 21.38   | 21.45   | 21.38
5          | 21.48 | 21.23 | 21.08   | 21.27   | 21.03
6          | 21.24 | 21.05 | 21.03   | 20.94   | 20.90
7          | 21.10 | 20.89 | 21.02   | 20.65   | 21.14
8          | 21.06 | 20.50 | 21.15   | 20.78   | 21.14
9          | 20.97 | 20.58 | 20.86   | 20.56   | 21.07
10         | 20.77 | 20.46 | 21.43   | 20.86   | 21.37
" } } } }