{
"paper_id": "O08-2002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:02:35.475670Z"
},
"title": "One-Sample Speech Recognition of Mandarin Monosyllables using Unsupervised Learning",
"authors": [
{
"first": "Tze",
"middle": [
"Fen"
],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ming Dao University",
"location": {
"addrLine": "Chang-Hua",
"country": "Taiwan, ROC"
}
},
"email": "tfli@mdu.edu.tw"
},
{
"first": "Shui-Ching",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Tze",
"middle": [],
"last": "Fen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ming Dao University",
"location": {
"addrLine": "369 Wen-Hua Road, Pee-Tow, Chang-Hua (52345)",
"country": "Taiwan, ROC"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In the speech recognition, a mandarin syllable wave is compressed into a matrix of linear predict coding cepstra (LPCC), i.e., a matrix of LPCC represents a mandarin syllable. We use the Bayes decision rule on the matrix to identify a mandarin syllable. Suppose that there are K different mandarin syllables, i.e., K classes. In the pattern classification problem, it is known that the Bayes decision rule, which separates K classes, gives a minimum probability of misclassification. In this study, a set of unknown syllables is used to learn all unknown parameters (means and variances) for each class. At the same time, in each class, we need one known sample (syllable) to identify its own means and variances among K classes. Finally, the Bayes decision rule classifies the set of unknown syllables and input unknown syllables. It is an one-sample speech recognition. This classifier can adapt itself to a better decision rule by making use of new unknown input syllables while the recognition system is put in use. In the speech experiment using unsupervised learning to find the unknown parameters, the digit recognition rate is improved by 22%.",
"pdf_parse": {
"paper_id": "O08-2002",
"_pdf_hash": "",
"abstract": [
{
"text": "In the speech recognition, a mandarin syllable wave is compressed into a matrix of linear predict coding cepstra (LPCC), i.e., a matrix of LPCC represents a mandarin syllable. We use the Bayes decision rule on the matrix to identify a mandarin syllable. Suppose that there are K different mandarin syllables, i.e., K classes. In the pattern classification problem, it is known that the Bayes decision rule, which separates K classes, gives a minimum probability of misclassification. In this study, a set of unknown syllables is used to learn all unknown parameters (means and variances) for each class. At the same time, in each class, we need one known sample (syllable) to identify its own means and variances among K classes. Finally, the Bayes decision rule classifies the set of unknown syllables and input unknown syllables. It is an one-sample speech recognition. This classifier can adapt itself to a better decision rule by making use of new unknown input syllables while the recognition system is put in use. In the speech experiment using unsupervised learning to find the unknown parameters, the digit recognition rate is improved by 22%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A speech recognition system in general consists of feature extractor and classification of an utterance [1] [2] [3] [4] [5] . The function of feature extractor is to extract the important features from the speech waveform of an input speech syllable. Let x denote the measurement of the significant, characterizing features. This x will be called a feature value. The function performed by a classifier is to assign each input syllable to one of several possible syllable classes. The decision is made on the basis of feature measurements supplied by the feature extractor in a recognition system. Since the measurement x of a pattern may have a variation or noise, a classifier may classify an input syllable to a wrong class. The classification criterion is usually the minimum probability of misclassification [1] .",
"cite_spans": [
{
"start": 104,
"end": 107,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 108,
"end": 111,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 112,
"end": 115,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 116,
"end": 119,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 120,
"end": 123,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 813,
"end": 816,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this study, a statistical classifier, called an empirical Bayes (EB) decision rule, is applied to solving K-class pattern problems: all parameters of the conditional density function f (x | \u03c9) are unknown, where \u03c9 denotes one of K classes, and the prior probability of each class is unknown. A set of n unidentified input mandarin monosyllables is used to establish the decision rule, which is used to separate K classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "After learning the unknown parameters, the EB decision rule will make the probability of misclassification arbitrarily close to that of the Bayes rule when the number of unidentified patterns increases. The problem of learning from unidentified samples (called unsupervised learning or learning without a teacher) presents both theoretical and practical problems [6] [7] [8] . In fact, without any prior assumption, successful unsupervised learning is indeed unlikely.",
"cite_spans": [
{
"start": 363,
"end": 366,
"text": "[6]",
"ref_id": "BIBREF5"
},
{
"start": 367,
"end": 370,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 371,
"end": 374,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In our speech recognition using unsupervised learning, a syllable is denoted by a matrix of features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Since the matrix has 8x12 feature values, we use a dynamic processing algorithm to estimate the 96 feature parameters (means and variances). Our EB classifier, after unsupervised learning of the unknown parameters, can adapt itself to a better and more accurate decision rule by making use of the unidentified input syllables after the speech system is put in use. The results of a digit speech experiment are given to show the recognition rates provided by the decision rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Let X be the present observation which belongs to one of K classes c i , i = 1, 2, \u2022 \u2022 \u2022 , K. Consider the decision problem consisting of determining whether X belongs to c i . Let f (x | \u03c9) be the conditional density function of X given \u03c9, where \u03c9 denotes one of K classes and let \u03b8 i , i = 1, 2, \u2022 \u2022 \u2022 , K, be the prior probability of c i with K i=1 \u03b8 i = 1. In this study, both the parameters of f (x | \u03c9) and the \u03b8 i are unknown. Let d be a decision rule. A simple loss model is used such that the loss is 1 when d makes a wrong decision and the loss is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "0 when d makes a correct decision. Let \u03b8 = {(\u03b8 1 , \u03b8 2 , \u2022 \u2022 \u2022 , \u03b8 K ); \u03b8 i > 0, K i=1 \u03b8 i = 1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "be the prior probabilities. Let R(\u03b8, d) denote the risk function (the probability of misclassification) of d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "Let \u0393 i , i = 1, 2, \u2022 \u2022 \u2022 , K, be K regions separated by d in the domain of X, i.e., d decides c i when X \u2208 \u0393 i . Let \u03be i denote all parameters of the conditional density function in class c i , i = 1, ..., K. Then R(\u03b8, d) = K i=1 \u0393 c i \u03b8 i f (x | \u03be i )dx (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "where \u0393 c i is the complement of \u0393 i . Let D be the family of all decision rules which separate K pattern classes. For \u03b8 fixed, let the minimum probability of misclassification be denoted by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R(\u03b8) = inf d\u2208D R(\u03b8, d).",
"eq_num": "(2)"
}
],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "A decision rule d \u03b8 which satisfies (2) is called the Bayes decision rule with respect to the prior probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "vector \u03b8 = (\u03b8 1 , \u03b8 2 , \u2022 \u2022 \u2022 , \u03b8 K )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "and given by Ref. [1] ",
"cite_spans": [
{
"start": 18,
"end": 21,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d \u03b8 (x) = c i if \u03b8 i f (x | \u03be i ) > \u03b8 j f (x | \u03be j ) f or all j = i.",
"eq_num": "(3)"
}
],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
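For illustration only (not part of the original paper), here is a minimal sketch of the Bayes decision rule (3). It assumes independent Gaussian class-conditional densities, the same assumption the paper later adopts in Section 5.1; the array names `means`, `variances`, and `priors` are hypothetical.

```python
import numpy as np

def bayes_decide(x, means, variances, priors):
    """Bayes decision rule (3) under independent Gaussian features.

    x         : (d,) feature vector of the present observation
    means     : (K, d) per-class means (part of the xi_i)
    variances : (K, d) per-class variances (part of the xi_i)
    priors    : (K,) prior probabilities theta_i, summing to 1
    Returns the index i of the class maximizing theta_i * f(x | xi_i).
    """
    # Work in log space: log theta_i + log f(x | xi_i) gives the same
    # decision regions Gamma_i as the products in (3), without underflow.
    log_lik = -0.5 * np.sum(
        np.log(2.0 * np.pi * variances) + (x - means) ** 2 / variances, axis=1
    )
    return int(np.argmax(np.log(priors) + log_lik))
```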
{
"text": "In the empirical Bayes (EB) decision problem [9] , the past observations (\u03c9 m , X m ),",
"cite_spans": [
{
"start": 45,
"end": 48,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "m = 1, 2, \u2022 \u2022 \u2022 , n,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "and the present observation (\u03c9, X) are i.i.d., and all X m are drawn from the same conditional densities, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "f (x m | \u03c9 m ) with p(\u03c9 m = c i ) = \u03b8 i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "The EB decision problem is to establish a decision rule based on the set of past observations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "X n = (X 1 , X 2 , \u2022 \u2022 \u2022 , X n )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": ". In a pattern recognition system with unsupervised learning, X n is a set of unidentified input patterns. The decision rule can be constructed using X n to select a decision rule t n (X n ) which determines whether the present observation X belongs to c i . Let \u03be = (\u03be 1 , ..., \u03be K ). Then the risk of t n , conditioned on X n = x n , is R(\u03b8, t n (x n )) \u2265 R(\u03b8) and the overall risk of t n is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R n (\u03b8, t n ) = R(\u03b8, t n (x n )) n m=1 p(x m | \u03b8, \u03be) dx 1 \u2022 \u2022 \u2022 dx n",
"eq_num": "(4)"
}
],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "where p(x m | \u03b8, \u03be) is the marginal density of X m with respect to the prior distribution of classes, i.e., p(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "x m | \u03b8, \u03be) = K i=1 \u03b8 i f (x m | \u03be i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "The EB approach has been recently used in many areas including classification [10, 11] , sequential estimation [12] , reliability [13] [14] [15] , multivariate analysis [16, 17] , linear models [18, 19] , nonparametric estimation [20, 21] and some other estimation problems [22, 23] . Let",
"cite_spans": [
{
"start": 78,
"end": 82,
"text": "[10,",
"ref_id": "BIBREF9"
},
{
"start": 83,
"end": 86,
"text": "11]",
"ref_id": "BIBREF10"
},
{
"start": 111,
"end": 115,
"text": "[12]",
"ref_id": "BIBREF11"
},
{
"start": 130,
"end": 134,
"text": "[13]",
"ref_id": "BIBREF12"
},
{
"start": 135,
"end": 139,
"text": "[14]",
"ref_id": "BIBREF13"
},
{
"start": 140,
"end": 144,
"text": "[15]",
"ref_id": "BIBREF14"
},
{
"start": 169,
"end": 173,
"text": "[16,",
"ref_id": "BIBREF15"
},
{
"start": 174,
"end": 177,
"text": "17]",
"ref_id": "BIBREF16"
},
{
"start": 194,
"end": 198,
"text": "[18,",
"ref_id": "BIBREF17"
},
{
"start": 199,
"end": 202,
"text": "19]",
"ref_id": "BIBREF18"
},
{
"start": 230,
"end": 234,
"text": "[20,",
"ref_id": "BIBREF19"
},
{
"start": 235,
"end": 238,
"text": "21]",
"ref_id": "BIBREF20"
},
{
"start": 274,
"end": 278,
"text": "[22,",
"ref_id": "BIBREF21"
},
{
"start": 279,
"end": 282,
"text": "23]",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S = {(\u03b8, \u03be); \u03b8 = (\u03b8 1 , ..., \u03b8 K ), \u03be = (\u03be 1 , ..., \u03be K )}",
"eq_num": "(5)"
}
],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "define a parameter space of prior probabilities \u03b8 i and parameters \u03be i representing the i-th class, i = 1, ..., K.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "Let P be a probability distribution on the parameter space S. In this study, we want to find an EB decision rule which minimizesR",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "n (P, t n ) = R n (\u03b8, t n )dP (\u03b8, \u03be).",
"eq_num": "(6)"
}
],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "Similar approaches to constructing EB decision rules can be found in the recent literature [11, 15, 24] . From",
"cite_spans": [
{
"start": 91,
"end": 95,
"text": "[11,",
"ref_id": "BIBREF10"
},
{
"start": 96,
"end": 99,
"text": "15,",
"ref_id": "BIBREF14"
},
{
"start": 100,
"end": 103,
"text": "24]",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "(1) and (4), (6) can be written a\u015d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "R n (P, t n ) = K i=1 \u0393 c i,n f (x | \u03be i )\u03b8 i n m=1 p(x m | \u03b8, \u03be)dP (\u03b8, \u03be) dx dx 1 \u2022 \u2022 \u2022 dx n (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "where, in the domain of X,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "\u0393 i,n , i = 1, 2, \u2022 \u2022 \u2022 , K, are K regions, separated by t n (X n ), i.e., t n (X n ) decides c i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "when X \u2208 \u0393 i,n and hence they depend on the past observations X n . The EB decision rule which minimizes (7) can be found in Ref [24] . Since the unsupervised learning in this study is based on the following two theorems given in Ref [24] , both theorems and their simple proofs are provided in this paper.",
"cite_spans": [
{
"start": 129,
"end": 133,
"text": "[24]",
"ref_id": "BIBREF23"
},
{
"start": 234,
"end": 238,
"text": "[24]",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "Theorem 1 [24] . The EB decision rulet n with respect to P which minimizes the overall risk function 7is given byt",
"cite_spans": [
{
"start": 10,
"end": 14,
"text": "[24]",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "n (x n )(x) = c i if f (x | \u03be i ) \u03b8 i n m=1 p(x m | \u03b8, \u03be)dP (\u03b8, \u03be) > f (x | \u03be j ) \u03b8 j n m=1 p(x m | \u03b8, \u03be)dP (\u03b8, \u03be)",
"eq_num": "(8)"
}
],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "for all j = i, i.e., \u0393 i,n is defined by the definition of the inequality in (8) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "Proof. To minimize the overall risk 7is to minimize the integrand",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "K i=1 \u0393 c i,n f (x|\u03be i )\u03b8 i n m=1 p(x m |\u03b8, \u03be)dP (\u03b8, \u03be) dx",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "of (7) for each past observations x n . Let the past obervations x n be fixed and let i be fixed for i = 1, ..., k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Bayes Decision Rules for Classification",
"sec_num": "2."
},
{
"text": "g i (x) = f (x|\u03be i )\u03b8 i n m=1 p(x m |\u03b8, \u03be)dP (\u03b8, \u03be).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "Then the integrand of (7) can be written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "K i=1 \u0393 c i,n g i (x)dx = \u0393 c i,n g i (x)dx + j =i [ g j (x)dx \u2212 \u0393j,n g j (x)dx] = j =i g j (x)dx + j =i \u0393j,n [g i (x) \u2212 g j (x)]dx (\u0393 c i,n = j =i \u0393 j,n )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "which is minimum since \u0393 j,n \u2282 {x|g j (x) > g i (x)} for all j = i by the definition of \u0393 j,n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "In applications, we let the parameters \u03be i , i = 1, ..., K, be bounded by a finite numbers M i . Let \u03c1 > 0 and \u03b4 > 0. Consider the subset S 1 of the parameter space S defined by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S 1 ={(n 1 \u03c1, n 2 \u03c1, ..., n K \u03c1, n K+1 \u03b4, n K+2 \u03b4, ..., n 2K \u03b4); integer n i > 0, i = 1, ..., K, K i=1 n i \u03c1 = 1, |n i \u03b4| \u2264 M i , integer n i , i = K + 1, ..., 2K}",
"eq_num": "(9)"
}
],
"section": "Let",
"sec_num": null
},
{
"text": "where (n 1 \u03c1, ..., n K \u03c1) are prior probabilities and (n K+1 \u03b4, ..., n 2K \u03b4) are the parameters of K classes. In order to simplify the conditional density of (\u03b8, \u03be), let P be a uniform distribution on S 1 so that the conditional density can later be written as a recursive formula. The boundary for class i relative to another class j as separated by (8) can be represented by the equation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E[f (x | \u03be i )\u03b8 i | x n ] = E[f (x | \u03be j )\u03b8 j | x n ]",
"eq_num": "(10)"
}
],
"section": "Let",
"sec_num": null
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E[f (x | \u03be i )\u03b8 i | x n ] is the conditional expectation of f (x | \u03be i )\u03b8 i given X n = x n with the conditional probability function of (\u03b8, \u03be) given X n = x n equal to h(\u03b8, \u03be | x n ) = n m=1 p(x m | \u03b8, \u03be) (\u03b8 \u03be )\u2208S1 n m=1 p(x m | \u03b8 , \u03be )",
"eq_num": "(11)"
}
],
"section": "Let",
"sec_num": null
},
{
"text": "The actual region for class i as determined by (8) is the intersection of the regions whose borders are given by (10) , relative to all other classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "The main result in Ref [24] is that the estimates E[\u03b8 i | X n ] converge almost sure (a.s.) to a point arbitrarily close to the true prior probability and E[\u03be i |X n ] will converge to a point arbitrarily close to the true parameter in the conditional density for the i-th class. Let \u03bb = (\u03b8 1 , ..., \u03b8 K , \u03be 1 , ..., \u03be K ) in the parameter space S. Let \u03bb o be the true parameter of \u03bb.",
"cite_spans": [
{
"start": 23,
"end": 27,
"text": "[24]",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "Lamma 1 (Kullback, 1973 [25] ). Let",
"cite_spans": [
{
"start": 8,
"end": 28,
"text": "(Kullback, 1973 [25]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "H(\u03bb o , \u03bb) = ln p(x|\u03bb)p(x|\u03bb o )dx.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "Then Let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "\u03bb = (\u03b8 , \u03be ) \u2208 S 1 such that H(\u03bb o , \u03bb ) = max \u03bb\u2208S1 H(\u03bb o , \u03bb).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "Since S 1 has a finite number of points,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "H(\u03bb o , \u03bb ) \u2212 H(\u03bb o , \u03bb) \u2265 for some > 0 and for all \u03bb \u2208 S 1 . Since H(\u03bb o , \u03bb) is a smooth (differentiable)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "function of \u03bb \u2208 S, the maximum point \u03bb in S 1 is arbitrarily close to the true parameter \u03bb o in S if the increments \u03b4 and \u03c1 are small.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "Theorem 2 [24] . Let \u03bb o be the true parameter of \u03bb. Let \u03bb = (\u03b8, \u03be) in S. The conditional probability function h(\u03bb|x n ) given X n = x n in (11) has the following property: for each \u03bb \u2208 S 1 ,",
"cite_spans": [
{
"start": 10,
"end": 14,
"text": "[24]",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "lim n\u2192\u221e h(\u03bb | x n ) = 0 if \u03bb = \u03bb = 1 if \u03bb = \u03bb",
"eq_num": "(12)"
}
],
"section": "Let",
"sec_num": null
},
{
"text": "and hence E[\u03bb | X n ] converges to \u03bb with probability 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "Proof. H(\u03bb o , \u03bb) has an absolutely maximum value at \u03bb = \u03bb on S 1 . Let \u03bb \u2208 S 1 and \u03bb = \u03bb . Consider",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "1 n ln n m=1 p(X m |\u03bb) n m=1 p(X m |\u03bb ) = 1 n n m=1 ln p(X m |\u03bb) \u2212 1 n n m=1 ln p(X m |\u03bb )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "which converges almost sure to H(\u03bb o , \u03bb) \u2212 H(\u03bb o , \u03bb ) < \u2212 by a theorem (the strong law of large numbers, Wilks, (1962) [26] ), i.e., there exists a N > 0 such that for all n > N ,",
"cite_spans": [
{
"start": 121,
"end": 125,
"text": "[26]",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "1 n ln n m=1 p(X m |\u03bb) n m=1 p(X m |\u03bb ) < \u2212 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "Hence, for all n > N , 1 n ln h(\u03bb|X n ) < \u2212 2 , i.e., for all n > N , ln h(\u03bb|X n ) < \u2212n 2 . This implies that lim n\u2192\u221e ln h(\u03bb|X n ) = \u2212\u221e and lim n\u2192\u221e h(\u03bb|X n ) = 0 for \u03bb = \u03bb almost sure. Obviousy, \u03bb\u2208S1 h(\u03bb|X n ) = 1 implies lim n\u2192\u221e h(\u03bb |X n ) = 1 almost sure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let",
"sec_num": null
},
{
"text": "Feature Extraction system which is recently used to approximate the nonlinear, time-varying system of the speech wave. The MFCC method uses the bank of filters scaled according to the Mel scale to smooth the spectrum, performing a processing that is similar to that executed by the human ear.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "In the real world, all signals contain noise. In our speech recognition system, the speech data must contain noise. We propose two simple methods to eliminate noise. One way is to use the sample variance of a fixed number of sequential sampled points of a syllable wave to detect the real speech signal, i.e., the sampled points with small variance does not contain real speech signal. Another way is to compute the sum of the absolute values of differences of two consecutive sampled points in a fixed number of sequential speech sampled points, i.e., the speech data with small absolute value does not contain real speech signal. In our speech recognition experiments, the latter provides slightly faster and more accurate speech recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing Speech Signal",
"sec_num": "3.1."
},
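As an illustration of the second noise-elimination method, here is a sketch (not the authors' code; the window length and threshold below are hypothetical choices): a window of sampled points is kept as real speech when the sum of absolute differences of consecutive samples is large.

```python
import numpy as np

def detect_speech_windows(samples, window=240, threshold=1000.0):
    """Flag fixed-length windows of a waveform that contain real speech.

    A window is treated as speech when the sum of absolute differences
    between consecutive sampled points exceeds the threshold; windows
    with a small sum are treated as silence or background noise.
    """
    flags = []
    for start in range(0, len(samples) - window + 1, window):
        segment = np.asarray(samples[start:start + window], dtype=float)
        activity = float(np.sum(np.abs(np.diff(segment))))
        flags.append(activity > threshold)
    return flags
```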
{
"text": "For speech recognition, the most common features to be extracted from a speech signal are Mel-frequency cepstrum coefficient (MFCC) and linear predict coding cepstrum (LPCC). The MFCC was proved to be better than the LPCC for recognition [27] , but we have shown [28] that the LPCC has a slightly higher recognition rate. Since the MFCC has to compute the DFT and inverse DFT of a speech wave, the computational complexity is much heavier than that of the LPCC. The LPC coefficients can be easily obtained by Durbin's recursive procedure [2, 29, 30] and their cepstra can be quickly found by another recursive equations [2, 29, 30] . The LPCC can provide a robust, reliable and accurate method for estimating the parameters that characterize the linear and time-varying system like speech signal [2, 4, [29] [30] . Therefore, in this study, we use the LPCC as the feature of a mandarin syllable. The following is a brief discussion on the LPC method:",
"cite_spans": [
{
"start": 238,
"end": 242,
"text": "[27]",
"ref_id": "BIBREF26"
},
{
"start": 263,
"end": 267,
"text": "[28]",
"ref_id": "BIBREF27"
},
{
"start": 538,
"end": 541,
"text": "[2,",
"ref_id": "BIBREF1"
},
{
"start": 542,
"end": 545,
"text": "29,",
"ref_id": "BIBREF28"
},
{
"start": 546,
"end": 549,
"text": "30]",
"ref_id": "BIBREF29"
},
{
"start": 620,
"end": 623,
"text": "[2,",
"ref_id": "BIBREF1"
},
{
"start": 624,
"end": 627,
"text": "29,",
"ref_id": "BIBREF28"
},
{
"start": 628,
"end": 631,
"text": "30]",
"ref_id": "BIBREF29"
},
{
"start": 796,
"end": 799,
"text": "[2,",
"ref_id": "BIBREF1"
},
{
"start": 800,
"end": 802,
"text": "4,",
"ref_id": "BIBREF3"
},
{
"start": 803,
"end": 807,
"text": "[29]",
"ref_id": "BIBREF28"
},
{
"start": 808,
"end": 812,
"text": "[30]",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Predict Coding Cepstrum (LPCC)",
"sec_num": "3.2."
},
{
"text": "It is assumed [2] [3] [4] that the sampled speech wave s(n) can be linearly predicted from the past p samples of s(n). Let\u015d",
"cite_spans": [
{
"start": 14,
"end": 17,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 18,
"end": 21,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 22,
"end": 25,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Predict Coding Cepstrum (LPCC)",
"sec_num": "3.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(n) = p k=1 a k s(n \u2212 k)",
"eq_num": "(13)"
}
],
"section": "Linear Predict Coding Cepstrum (LPCC)",
"sec_num": "3.2."
},
{
"text": "and let E be the squared difference between s(n) and\u015d(n) over N samples of s(n), i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Predict Coding Cepstrum (LPCC)",
"sec_num": "3.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E = N \u22121 n=0 [s(n) \u2212\u015d(n)] 2 .",
"eq_num": "(14)"
}
],
"section": "Linear Predict Coding Cepstrum (LPCC)",
"sec_num": "3.2."
},
{
"text": "The unknown a k , k = 1, ...p, are called the LPC coefficients and can be solved by the least square method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Predict Coding Cepstrum (LPCC)",
"sec_num": "3.2."
},
{
"text": "The most efficient method known for obtaining the LPC coefficients is Durbin's recursive procedure [31] .",
"cite_spans": [
{
"start": 99,
"end": 103,
"text": "[31]",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Predict Coding Cepstrum (LPCC)",
"sec_num": "3.2."
},
{
"text": "Here in our speech experiment, p = 12, because the cepstra in the last few elements are almost zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Predict Coding Cepstrum (LPCC)",
"sec_num": "3.2."
},
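For reference, the following is a generic sketch of Durbin's recursive procedure and of the standard LPC-to-cepstrum recursion described in [2, 29, 30]. It is an illustrative implementation, not the authors' code; windowing and pre-emphasis of the frame are omitted.

```python
import numpy as np

def lpc_durbin(frame, p=12):
    """Levinson-Durbin recursion: LPC coefficients a_1..a_p of one frame."""
    r = np.array([float(np.dot(frame[:len(frame) - k], frame[k:])) for k in range(p + 1)])
    a = np.zeros(p + 1)                      # a[0] is unused
    err = r[0]
    for i in range(1, p + 1):
        k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / err
        new_a = a.copy()
        new_a[i] = k
        new_a[1:i] = a[1:i] - k * a[i - 1:0:-1]
        a, err = new_a, (1.0 - k * k) * err
    return a[1:]                             # a_1, ..., a_p

def lpc_to_cepstrum(a, p=12):
    """Recursive conversion of LPC coefficients to LPC cepstra (LPCC)."""
    c = np.zeros(p + 1)
    for n in range(1, p + 1):
        c[n] = a[n - 1] + sum((k / n) * c[k] * a[n - k - 1] for k in range(1, n))
    return c[1:]                             # c_1, ..., c_p
```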
{
"text": "Our feature extraction from LPCC is quite simple. Let x(k) = (x(k) 1 ,..., x(k) p ), k = 1, .., n, be the LPCC vector for the k-th frame of a speech wave in the sequence of n vectors. Normally, if a speaker does not intentionally elongate pronunciation, a mandarin syllable has 30-70 vectors of LPCC. After 50 vectors of LPCC, the sequence does not contain significant features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.3."
},
{
"text": "Since an utterance of a syllable is composed two parts: stable part and feature part. In the feature part, the LPCC vectors have a dramatic change between two consecutive vectors, representing the unique characteristics of syllable utterance and in the stable part, the LPCC vectors do not change much and stay about the same. Even if the same speaker utters the same syllable, the duration of the stable part of the sequence of LPCC vectors changes every time with nonlinear expansion and contraction and hence the duration of the stable parts and the duration of the whole sequence of LPCC vectors are different every time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.3."
},
{
"text": "Therefore, the duration of stable parts is contracted such that the compressed speech waveforms have about the same length of the sequence of LPCC vectors. Li [32] proposed several simple techniques to contract the stable parts of the sequence of vectors. We state one simple technique for contraction as follows:",
"cite_spans": [
{
"start": 159,
"end": 163,
"text": "[32]",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.3."
},
{
"text": "Let x(k) = (x(k) 1 , ..., x(k) p ), k = 1, ..., n, be the k-th vector of a LPCC sequence with n vectors, which represents a mandarin syllable. Let the difference of two consecutive vectors be denoted by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D(k) = p i=1 |x(k) i \u2212 x(k \u2212 1) i |, k = 2, ..., n.",
"eq_num": "(15)"
}
],
"section": "Feature Extraction",
"sec_num": "3.3."
},
{
"text": "In order to accurately identify the syllable utterance, a compression process must first be performed to remove the stable and flat portion in the sequence of vectors. A LPCC vector x(k) is removed if its difference D(k) from the previous vector x(k \u2212 1) is too small. Let x (k), k = 1, ..., m(< n), be the new sequence of LPCC vectors after deletion. We think that the first part (about 40 vectors or less) of an utterance of a mandarin syllable contains main features which can most represent the syllable and the rest of the sequence contains the \"tail\" sound, which has a variable length. If a speaker intentionally elongates pronunciation of a syllable, the speaker only increases the tail part of the sequence and the length of the feature part stays about the same. We partition the feature part (the first 40 vectors of the new sequence) into 6 equal segments since the feature part of LPCC vectors has a dramatic change and partition the tail part into 2 equal segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.3."
},
{
"text": "If the whole length of the new sequence is less than 40, we neglect the tail sound and partition the new sequence into 8 equal segments. The average value of the LPCC in each segment is used as a feature value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.3."
},
{
"text": "Note that the average values of samples tend to have a normal distribution [26] . This compression produces",
"cite_spans": [
{
"start": 75,
"end": 79,
"text": "[26]",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.3."
},
{
"text": "12x8 feature values for each mandarin syllable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.3."
},
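A sketch of the compression and averaging step just described (an illustrative reconstruction under the stated rules, not the authors' code; the deletion threshold is a hypothetical value): frames whose difference D(k) in (15) from the previous frame is small are dropped, the first 40 remaining frames are split into 6 segments and the tail into 2 (or the whole sequence into 8 if fewer than 40 frames remain), and each segment is averaged into one 12-dimensional column.

```python
import numpy as np

def compress_to_matrix(lpcc_frames, diff_threshold=1.0, feature_len=40):
    """Compress a sequence of 12-dim LPCC vectors into a 12x8 feature matrix."""
    frames = np.asarray(lpcc_frames, dtype=float)             # shape (n, 12)
    kept = [frames[0]]
    for k in range(1, len(frames)):
        # Remove 'stable' frames whose D(k) from the previous frame is small.
        if np.sum(np.abs(frames[k] - frames[k - 1])) >= diff_threshold:
            kept.append(frames[k])
    kept = np.array(kept)                                      # assumes >= 8 frames survive

    if len(kept) >= feature_len + 2:          # need at least 2 frames in the tail part
        parts = np.array_split(kept[:feature_len], 6) + np.array_split(kept[feature_len:], 2)
    else:
        parts = np.array_split(kept, 8)
    return np.stack([p.mean(axis=0) for p in parts], axis=1)   # shape (12, 8)
```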
{
"text": "Stochastic approximation [1, 2, 33, 34] is an iterative algorithm for random environments, which is used for parameter estimation in pattern recognition. Its convergence is guaranteed under very general circumstances.",
"cite_spans": [
{
"start": 25,
"end": 28,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 29,
"end": 31,
"text": "2,",
"ref_id": "BIBREF1"
},
{
"start": 32,
"end": 35,
"text": "33,",
"ref_id": "BIBREF32"
},
{
"start": 36,
"end": 39,
"text": "34]",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Approximation",
"sec_num": "4."
},
{
"text": "Essentially, a stochastic approximation procedure [1, 2, 33, 34] should satisfy: (1) the successive expression of the estimate of a parameter can be written as an estimate calculated from the old n patterns and the contribution of the new (n + 1)-st pattern and (2) the effect of the new pattern may diminish by using a decreasing sequence of coefficients. The best known of the stochastic approximation procedures are the Robbins-Monro procedure [1, 33, 34] and the Kiefer-Wolfowitz procedure [1, 34] .",
"cite_spans": [
{
"start": 50,
"end": 53,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 54,
"end": 56,
"text": "2,",
"ref_id": "BIBREF1"
},
{
"start": 57,
"end": 60,
"text": "33,",
"ref_id": "BIBREF32"
},
{
"start": 61,
"end": 64,
"text": "34]",
"ref_id": "BIBREF33"
},
{
"start": 447,
"end": 450,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 451,
"end": 454,
"text": "33,",
"ref_id": "BIBREF32"
},
{
"start": 455,
"end": 458,
"text": "34]",
"ref_id": "BIBREF33"
},
{
"start": 494,
"end": 497,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 498,
"end": 501,
"text": "34]",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Approximation",
"sec_num": "4."
},
{
"text": "For the unsupervised learning, (11) can be written in the recursive form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Approximation",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h(\u03bb|x n+1 ) = p(x n+1 |\u03bb)h(\u03bb|x n ) \u03bb \u2208S1 p(x n+1 |\u03bb )h(\u03bb |x n ) f or n = 0, 1, 2, ...",
"eq_num": "(16)"
}
],
"section": "Stochastic Approximation",
"sec_num": "4."
},
{
"text": "where h(\u03bb|x n ) = 1, if n = 0. Equ. (16) is different from the above two types of procedures. It does not have a regression function or an obvious decreasing sequence of coefficients, but it appears to be a weighted product of the estimates calculated from the old patterns and the contribution of the new pattern. In each step of evaluation, (16) multiplies a new probability factor with the old conditional probability h(\u03bb|x n ) based on the new pattern x n+1 . The convergence of (16) is guaranteed by Theorem 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Approximation",
"sec_num": "4."
},
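To make the recursion (16) concrete, here is a sketch (not from the paper) that maintains ln h(lambda | x_n) over a finite list of candidate parameter points lambda in S_1 and updates it with each new unidentified pattern. The callback `log_marginal`, standing for ln p(x | lambda) as in (18), is a hypothetical argument supplied by the caller.

```python
import numpy as np

def update_posterior(log_h, x_new, candidates, log_marginal):
    """One step of the recursion (16) over a finite grid S_1, in log space.

    log_h        : (|S_1|,) current values of ln h(lambda | x_n)
    x_new        : the new unidentified pattern x_{n+1}
    candidates   : list of parameter points lambda in S_1
    log_marginal : function (x, lam) -> ln p(x | lam)
    Returns the updated ln h(lambda | x_{n+1}), normalized over S_1.
    """
    log_num = log_h + np.array([log_marginal(x_new, lam) for lam in candidates])
    return log_num - np.logaddexp.reduce(log_num)
```

Starting from log_h = 0 for every candidate (so that h(lambda | x_n) = 1 when n = 0) and applying one update per unidentified syllable reproduces (16); E[lambda | X_n] is then the h-weighted average of the candidate points.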
{
"text": "As i.e., each syllable has an equal chance to be pronounced. Let \u03bb denote all parameters, i.e., Kx12x8 means and variances for K classes of syllables. Let \u03bb o be the true parameters. From Theorem 2 in Section 2, the conditional probability h(\u03bb|x n ) has the maximum probability at \u03bb = \u03bb o for large n, i.e., the numerator",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Dynamic Processing Algorithm",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F (x n |\u03bb) = n m=1 p(x m |\u03bb)",
"eq_num": "(17)"
}
],
"section": "A Dynamic Processing Algorithm",
"sec_num": "5."
},
{
"text": "is maximum at \u03bb = \u03bb o for large n, where x m , m = 1, ..., n, is the 12x8 matrix. Therefore, to search the true parameter \u03bb o by the recursive equation (16) is to find the MLE of \u03bb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Dynamic Processing Algorithm",
"sec_num": "5."
},
{
"text": "To find the MLE of unknown parameters is a complicated multi-parameter optimization problem. First one has to evaluate the likelihood function F on a coarse grid to locate roughly the global maximum and then apply a numerical method (Gauss method, Newton-Raphson or some gradient-search iterative algorithm).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Dynamic Processing Algorithm",
"sec_num": "5."
},
{
"text": "Hence the direct approach tends to be computationally complex and time consuming. Here, we use a simple dynamic processing algorithm to find the MLE, which is similar to an EM [35, 36] algorithm.",
"cite_spans": [
{
"start": 176,
"end": 180,
"text": "[35,",
"ref_id": "BIBREF34"
},
{
"start": 181,
"end": 184,
"text": "36]",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Dynamic Processing Algorithm",
"sec_num": "5."
},
{
"text": "A syllable is denoted by a matrix of feature values X ij , i = 1, ..., 12, j = 1, ..., 8. For simplicity, we assume that the 12x8 random variables X ij are stochastically independent (as a matter of fact, they are not independent). The marginal density function of an unidentified syllable X m with its matrix denoted by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Log Likelihood Function",
"sec_num": "5.1."
},
{
"text": "x m = (x m ij ) in (17) can be written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Log Likelihood Function",
"sec_num": "5.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(x m |\u03bb) = K k=1 \u03b8 k ij f (x m ij |\u00b5 ijk , \u03c3 ijk )",
"eq_num": "(18)"
}
],
"section": "The Log Likelihood Function",
"sec_num": "5.1."
},
{
"text": "where f (x m ij |\u00b5 ijk , \u03c3 ijk ) is the conditional normal density of the feature value X m ij in the matrix if the syllable X m = (X m ij ) belongs to the k-th class. The log likelihood function can be written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Log Likelihood Function",
"sec_num": "5.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ln F (x n |\u03bb) = n m=1 ln K k=1 \u03b8 k 12 i=1 8 j=1 1 \u221a 2\u03c0\u03c3 kij e \u2212 1 2 ( x m ij \u2212\u00b5 kij \u03c3 kij ) 2 .",
"eq_num": "(19)"
}
],
"section": "The Log Likelihood Function",
"sec_num": "5.1."
},
{
"text": "From the log likelihood function (19) , we present a simple dynamic processing algorithm to find the MLE of unknown parameters \u00b5 ijk and \u03c3 ijk . Our algorithm is an EM algorithm [35, 36] , more and less like the Viterbi algorithm [2] [3] [4] . We state the our dynamic processing algorithm as follows:",
"cite_spans": [
{
"start": 178,
"end": 182,
"text": "[35,",
"ref_id": "BIBREF34"
},
{
"start": 183,
"end": 186,
"text": "36]",
"ref_id": "BIBREF35"
},
{
"start": 230,
"end": 233,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 234,
"end": 237,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 238,
"end": 241,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Dynamic Processing Algorithm",
"sec_num": "5.2."
},
{
"text": "1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Dynamic Processing Algorithm",
"sec_num": "5.2."
},
{
"text": "In the matrix, pick up an initial value of (\u00b5 kij , \u03c3 kij ), k = 1, ..., K, for K classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Dynamic Processing Algorithm",
"sec_num": "5.2."
},
{
"text": "For k = 1 and for each i = 1, ..., 12 and j = 1, ..., 8, pick up a point (\u03bc 1ij ,\u03c3 1ij ) such that ln F in (19) is maximum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "Continue step 2 for k = 2, ..., K.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "If (19) continues increasing, go to step 2, otherwise, stop the dynamic processing and the final estimates (\u03bc kij ,\u03c3 kij ) are the MLE of (\u00b5 kij , \u03c3 kij ) for all K classes and are saved in a database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
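The sketch below illustrates steps 1-4 as a greedy coordinate search (an illustrative reconstruction under the independence and normality assumptions of (19) with equal priors theta_k = 1/K, not the authors' implementation; the candidate shifts form a hypothetical search grid). For each class k and cell (i, j) it tries a few perturbed values of (mu_kij, sigma_kij) and keeps a change only when it increases the log likelihood (19), cycling until no further improvement.

```python
import numpy as np

def log_likelihood(x, mu, sigma):
    """ln F(x_n | lambda) of Eq. (19) with equal priors theta_k = 1/K.

    x     : (n, 12, 8) feature matrices of the unidentified syllables
    mu    : (K, 12, 8) candidate means
    sigma : (K, 12, 8) candidate standard deviations
    """
    z = (x[:, None, :, :] - mu[None]) / sigma[None]                     # (n, K, 12, 8)
    per_class = -np.sum(np.log(np.sqrt(2.0 * np.pi) * sigma[None]) + 0.5 * z ** 2, axis=(2, 3))
    return float(np.sum(np.logaddexp.reduce(per_class - np.log(mu.shape[0]), axis=1)))

def fit_by_coordinate_search(x, mu, sigma, shifts=(-0.5, 0.0, 0.5)):
    """Steps 1-4: greedily update each (mu_kij, sigma_kij) while (19) increases."""
    best = log_likelihood(x, mu, sigma)
    improved = True
    while improved:                                  # step 4: stop when (19) stops increasing
        improved = False
        for k in range(mu.shape[0]):                 # steps 2-3: sweep over classes and cells
            for i in range(mu.shape[1]):
                for j in range(mu.shape[2]):
                    for dm in shifts:
                        for ds in shifts:
                            trial_mu, trial_sigma = mu.copy(), sigma.copy()
                            trial_mu[k, i, j] += dm
                            trial_sigma[k, i, j] = max(trial_sigma[k, i, j] + ds, 1e-3)
                            score = log_likelihood(x, trial_mu, trial_sigma)
                            if score > best:
                                mu, sigma, best = trial_mu, trial_sigma, score
                                improved = True
    return mu, sigma
```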
{
"text": "For each element (i, j) in the matrix, we have found the MLE (\u03bc kij ,\u03c3 kij ) for each syllable. There are totally K matrices of MLE representing K different syllables, but we do not know which matrix of MLE belongs to the syllable c i , i = 1, ..., K. We have to use one known sample from each syllable to identify its own matrix of MLE. In this paper, we simply use the distance to select a matrix of MLE among K matrices for the known sample.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding the Means and Variances for each Syllable by a Known Sample",
"sec_num": "5.3."
},
{
"text": "After each syllable obtains its means and variances which are identified by a known sample of the syllable, the Bayes decision rule (3) with the estimated means and variances (MLE) classifies the set of all unidentified syllables. After simplification [32] , the Bayes decision rule (3) can be reduced to",
"cite_spans": [
{
"start": 252,
"end": 256,
"text": "[32]",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification by the Bayes Decision Rule",
"sec_num": "5.4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "l(c k ) = ij ln(\u03c3 kij ) + 1 2 ij ( x ij \u2212\u03bc ki\u0135 \u03c3 kij ) 2",
"eq_num": "(20)"
}
],
"section": "Classification by the Bayes Decision Rule",
"sec_num": "5.4."
},
{
"text": "where Note that new input unidentified syllables can update the estimated means and variances (MLE) which are closer to the true unknown means and variances, and hence the Bayes decision rule will become a more accurate classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification by the Bayes Decision Rule",
"sec_num": "5.4."
},
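A sketch of the classification step (20) (illustrative, not the authors' code): given the stored matrices of estimated means and standard deviations, the input feature matrix is assigned to the class with the smallest l(c_k).

```python
import numpy as np

def classify(x, mu, sigma):
    """Reduced Bayes rule (20): return argmin_k l(c_k).

    x     : (12, 8) feature matrix of the input syllable
    mu    : (K, 12, 8) estimated means from the database
    sigma : (K, 12, 8) estimated standard deviations from the database
    """
    l = np.sum(np.log(sigma), axis=(1, 2)) + 0.5 * np.sum(((x - mu) / sigma) ** 2, axis=(1, 2))
    return int(np.argmin(l))
```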
{
"text": "Our speech recognition is implemented in a classroom. The data of 10 mandarin digits are created by 10 different male and female students, each pronouncing 10 digits (0-9) once. The mandarin pronunciation for 1 and 7 is almost the same. It is hard to classify these two syllables. In our speech experiments, we use this database to produce the LPCC and obtain a 12x8 matrix of feature values for each syllable. There are totally 100 matrices of feature values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Experiment on Classification of Digits",
"sec_num": "6."
},
{
"text": "The simple dynamic processing algorithm in Section 5 produces 10 matrices of MLE (estimated means and variances). After a known sample of each digit (0,1,...,9) picks up its own matrix of MLE, the 10 matrices are ranked in order from 0 to 9 as fellows: (\u03bc kij ,\u03c3 kij ), i = 1, ..., 12, j = 1, ..., 8, for k = 0, ..., 9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "To Learn Means and Variances using Unsupervised Learning",
"sec_num": "6.2."
},
{
"text": "One of 10 students pronounces 10 digits which are considered as 10 known samples (each for one digit) and the other 9 students pronounce 10 digits (90 samples), which are considered as unknown samples. The recognition rates are listed in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "To Learn Means and Variances using Unsupervised Learning",
"sec_num": "6.2."
},
{
"text": "The known sample of a digit (0-9) identifies 90 other mixed unknown samples using distance measure from the known sample, i.e., to classify an unknown sample, we select a known sample from 10 known samples which is the closest to the unknown sample to be the unknown sample. Its recognition rates are also listed in Table 1 . From Table 1 , the Bayes decision rule using unsupervised learning gives the higher recognition rate 79%, 22% more than the rate 57% given by the distance measure using one known sample. Table 1 . Recognition rates for 10 digits given by the Bayes decision rule with unsupervised learning to classify 90 unknown samples as compared with the distance measure without unsupervised learning.",
"cite_spans": [],
"ref_spans": [
{
"start": 316,
"end": 323,
"text": "Table 1",
"ref_id": null
},
{
"start": 331,
"end": 338,
"text": "Table 1",
"ref_id": null
},
{
"start": 513,
"end": 520,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "(b). Distance Measure from 10 Known Samples",
"sec_num": null
},
{
"text": "- ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(b). Distance Measure from 10 Known Samples",
"sec_num": null
},
{
"text": "This paper is the first attempt to use an unsupervised learning for speech recognition. Actually, this paper presents an one-sample speech recognition. An unsupervised learning needs a trmendous amount of unknown samples to learn the unkown parameters of syllables. From Theorem 2, the estimates using unsupervised learning will converge the true parameters and hence, our classifier can adapt itself to a better decision rule by making the use of unknown input syllables for unsupervised learning and will become more and more accurate after the system is put in use. Theoretically, from Theorem 2, our one-sample speech recognition rate will approach to the rate given by supervised learning classifiers if a syllable does not have too many unknown parameters. In our experiments, we only have 9 samples for each syllable (a total of 90 unknown samples after 90 samples are mixed) for unsupervised learning of 96 parameters for each syllable and hence we only obtain 79% accuracy, 22% more than the rate without unsupervised learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusion",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors are grateful to the editor and the referees for their valuable suggestions to improve this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Introduction to Statistical Pattern Recognition",
"authors": [
{
"first": "K",
"middle": [],
"last": "Fukunaga",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Fukunaga, Introduction to Statistical Pattern Recognition, New York: Academic Press, 1990.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Digital Speech Processing",
"authors": [
{
"first": "Sadaoki",
"middle": [],
"last": "Furui",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadaoki Furui, Digital Speech Processing, Synthesis and Recognition, Marcel Dekker, Inc., New York and Basel, 1989.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Fundamentals of Speech Recognition",
"authors": [
{
"first": "L",
"middle": [],
"last": "Rabiner",
"suffix": ""
},
{
"first": "B",
"middle": [
"H"
],
"last": "Juang",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Rabiner and B. H. Juang, Fundamentals of Speech Recognition, Prentice Hall, PTR, Englewood Cliffs, New Jersey, 1993.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Spoken Language Processing -A guide to theory, algorithm, and system development",
"authors": [
{
"first": "X",
"middle": [
"D"
],
"last": "Huang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Acero",
"suffix": ""
},
{
"first": "H",
"middle": [
"W"
],
"last": "Hon",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. D. Huang, A. Acero, and H. W. Hon, Spoken Language Processing -A guide to theory, algorithm, and system development, Prentice Hall, PTR, Upper Saddle River, New Jersey, USA, 2001.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Probabilistic Theory of Pattern Recognition",
"authors": [
{
"first": "L",
"middle": [],
"last": "Deroye",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Gyorfi",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Lugosi",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Deroye, L. Gyorfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition, Elsevier, New York, 1996.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Stochastic Approximation in Adaptation, Learning and Pattern Recognition Systems: Theory and Applications",
"authors": [
{
"first": "R",
"middle": [
"L"
],
"last": "Kasyap",
"suffix": ""
},
{
"first": "C",
"middle": [
"C"
],
"last": "Blayton",
"suffix": ""
},
{
"first": "K",
"middle": [
"S"
],
"last": "Fu",
"suffix": ""
}
],
"year": 1970,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. L. Kasyap, C. C. Blayton, and K. S. Fu, Stochastic Approximation in Adaptation, Learning and Pattern Recognition Systems: Theory and Applications, J. M. Mendel and K. S. Fu. Eds., New York, Academic, 1970.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Classification, Estimation and Pattern Recognition",
"authors": [
{
"first": "T",
"middle": [
"Y"
],
"last": "Young",
"suffix": ""
},
{
"first": "T",
"middle": [
"W"
],
"last": "Calvert",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Y. Young and T. W. Calvert, Classification, Estimation and Pattern Recognition, New York: Elsevier, 1974.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Pattern recognizing stochastic learning automata",
"authors": [
{
"first": "A",
"middle": [
"G"
],
"last": "Barto",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Anandan",
"suffix": ""
}
],
"year": 1985,
"venue": "IEEE Trans. Syst",
"volume": "15",
"issue": "",
"pages": "360--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. G. Barto and P. Anandan, Pattern recognizing stochastic learning automata, IEEE Trans. Syst., Man, Cybern., Vol. SMC-15(May 1985) 360-375.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An empirical Bayes approach to statistics",
"authors": [
{
"first": "H",
"middle": [],
"last": "Robbins",
"suffix": ""
}
],
"year": 1956,
"venue": "Proc. Third Berkeley Symp",
"volume": "1",
"issue": "",
"pages": "157--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Robbins, An empirical Bayes approach to statistics, Proc. Third Berkeley Symp. Math. Statist. Prob., Vol. 1, University of California Press, (1956), 157-163.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A note on margin-based loss function in classification",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Statist. and Pro. Letters",
"volume": "68",
"issue": "1",
"pages": "73--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Lin, A note on margin-based loss function in classification, Statist. and Pro. Letters, 68(1)(2004), 73-81.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Classification on defective items using unidentified samples, Pattern Recognition",
"authors": [
{
"first": "T",
"middle": [
"F"
],
"last": "Li",
"suffix": ""
},
{
"first": "S",
"middle": [
"C"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "38",
"issue": "",
"pages": "51--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T.F. Li and S.C. Chang, Classification on defective items using unidentified samples, Pattern Recogni- tion, 38(2005), 51-58.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Empirical Bayes sequential estimation of the means",
"authors": [
{
"first": "R",
"middle": [
"J"
],
"last": "Karunamuni",
"suffix": ""
}
],
"year": 1992,
"venue": "Sequential Anal",
"volume": "11",
"issue": "1",
"pages": "37--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. J. Karunamuni, Empirical Bayes sequential estimation of the means, Sequential Anal., 11(1)(1992), 37-53.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Non-parametric empirical Bayes procedure",
"authors": [
{
"first": "A",
"middle": [],
"last": "Sarhan",
"suffix": ""
}
],
"year": 2003,
"venue": "Reliability Engineering and System",
"volume": "80",
"issue": "2",
"pages": "115--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Sarhan, Non-parametric empirical Bayes procedure, Reliability Engineering and System, 80(2)(2003), 115-122.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Empirical Bayes estimation in exponential reliability model",
"authors": [
{
"first": "A",
"middle": [],
"last": "Sarhan",
"suffix": ""
}
],
"year": 2003,
"venue": "Applied Math. and Computation",
"volume": "135",
"issue": "2",
"pages": "319--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Sarhan, Empirical Bayes estimation in exponential reliability model, Applied Math. and Computa- tion, 135(2)(2003), 319-332.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bayes empirical Bayes approach to estimation of the failure rate in exponential distribution",
"authors": [
{
"first": "T",
"middle": [
"F"
],
"last": "Li",
"suffix": ""
}
],
"year": 2002,
"venue": "Commu.-Stat. Meth",
"volume": "31",
"issue": "9",
"pages": "1457--1465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. F. Li, Bayes empirical Bayes approach to estimation of the failure rate in exponential distribution, Commu.-Stat. Meth., 31(9)(2002), 1457-1465.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Empirical Bayes minimax estimators of matrix normal means",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ghosh",
"suffix": ""
}
],
"year": 1991,
"venue": "J. Multivariate Anal",
"volume": "38",
"issue": "2",
"pages": "306--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Ghosh, Empirical Bayes minimax estimators of matrix normal means, J. Multivariate Anal., 38(2)(1991), 306-318.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Minimax hierarchical empirical Bayes estimation in multivariate regression",
"authors": [
{
"first": "S",
"middle": [
"D"
],
"last": "Oman",
"suffix": ""
}
],
"year": 2002,
"venue": "J. Multivariate Anal",
"volume": "80",
"issue": "2",
"pages": "285--301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. D. Oman, Minimax hierarchical empirical Bayes estimation in multivariate regression, J. Multivariate Anal., 80(2)(2002), 285-301.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Empirical Bayes prediction intervals in a normal regression model: higher order asymptotics",
"authors": [
{
"first": "R",
"middle": [],
"last": "Basu",
"suffix": ""
},
{
"first": "J",
"middle": [
"K"
],
"last": "Ghosh",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mukerjee",
"suffix": ""
}
],
"year": 2003,
"venue": "Statist. and Pro. Letters",
"volume": "63",
"issue": "2",
"pages": "197--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Basu, J. K. Ghosh, and R. Mukerjee, Empirical Bayes prediction intervals in a normal regression model: higher order asymptotics, Statist. and Pro. Letters, 63(2)(2003), 197-203.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Empirical Bayes estimation and its superiority for two-way classification model",
"authors": [
{
"first": "L",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2003,
"venue": "Statist. and Prob. Letters",
"volume": "63",
"issue": "2",
"pages": "165--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Wei and J. Chen, Empirical Bayes estimation and its superiority for two-way classification model, Statist. and Prob. Letters, 63(2)(2003), 165-175.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Nonparametric empirical Bayes estimation of the matrix parameter of the Wishart distribution",
"authors": [
{
"first": "M",
"middle": [],
"last": "Pensky",
"suffix": ""
}
],
"year": 1999,
"venue": "J. Multivariate Anal",
"volume": "69",
"issue": "2",
"pages": "242--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Pensky, Nonparametric empirical Bayes estimation of the matrix parameter of the Wishart distri- bution, J. Multivariate Anal., 69(2)(1999), 242-260.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A general approach to nonparametric empirical Bayes estimation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Pensky",
"suffix": ""
}
],
"year": 1997,
"venue": "Statistics",
"volume": "29",
"issue": "1",
"pages": "61--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Pensky, A general approach to nonparametric empirical Bayes estimation, Statistics, 29(1)(1997), 61-80.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bounds for robust maximum likelihood and posterior consistency in compound mixture state experiments",
"authors": [
{
"first": "S",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gilliland",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hannan",
"suffix": ""
}
],
"year": 1999,
"venue": "Statist. and Prob. Letters",
"volume": "41",
"issue": "3",
"pages": "215--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Majumder, D. Gilliland, and J. Hannan, Bounds for robust maximum likelihood and posterior con- sistency in compound mixture state experiments, Statist. and Prob. Letters, 41(3)(1999), 215-227.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Empirical Bayes estimation for truncation parameters",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2000,
"venue": "J. Statistical Planning and Inference",
"volume": "84",
"issue": "1",
"pages": "111--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Ma, Empirical Bayes estimation for truncation parameters, J. Statistical Planning and Inference, 84(1)(2000), 111-120.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A Bayes Empirical Bayes decision rule for classification",
"authors": [
{
"first": "T",
"middle": [
"F"
],
"last": "Li",
"suffix": ""
},
{
"first": "T",
"middle": [
"C"
],
"last": "Yen",
"suffix": ""
}
],
"year": 2005,
"venue": "Communications in Statistics-Theory and Methods",
"volume": "34",
"issue": "",
"pages": "1137--1149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. F. Li and T. C. Yen, A Bayes Empirical Bayes decision rule for classification, Communications in Statistics-Theory and Methods, 34(2005), 1137-1149.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Information Theory and Statistics",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kullback",
"suffix": ""
}
],
"year": 1973,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kullback, Information Theory and Statistics, Gloucester, MA: Peter Smith, 1973.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Mathematical Statistics",
"authors": [
{
"first": "S",
"middle": [
"S"
],
"last": "Wilks",
"suffix": ""
}
],
"year": 1962,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S.S. Wilks, Mathematical Statistics, New York: John Wiley and Son, 1962.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Comparison of parametric representation for monosyllabic word recognition in continously spoken sentences",
"authors": [
{
"first": "S",
"middle": [
"B"
],
"last": "Davis",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mermelstein",
"suffix": ""
}
],
"year": 1980,
"venue": "IEEE. Trans. Acoust., Speech, Signal Processing",
"volume": "28",
"issue": "4",
"pages": "357--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. B. Davis and P. Mermelstein, Comparison of parametric representation for monosyllabic word recog- nition in continously spoken sentences, IEEE. Trans. Acoust., Speech, Signal Processing, 28(4)(1980), 357-366.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A note on Mel frequency cepstra in speech recognition",
"authors": [
{
"first": "T",
"middle": [
"F"
],
"last": "Li",
"suffix": ""
}
],
"year": 2006,
"venue": "Department of Applied Mathematics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. F. Li, A note on Mel frequency cepstra in speech recognition, Department of Applied Mathematics, Chung Hsing University, Taichung, Taiwan, (2006).",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Linear Prediction and the Spectral Analysis of Speech",
"authors": [
{
"first": "J",
"middle": [],
"last": "Makhoul",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Makhoul and J. Wolf, Linear Prediction and the Spectral Analysis of Speech, Bolt, Baranek, and Newman, Inc., Cambridge, Mass., Rep. 2304, 1972.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Linear prediction: a tutorial review",
"authors": [
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1975,
"venue": "Proc. IEEE",
"volume": "63",
"issue": "",
"pages": "561--580",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Makhoul, Linear prediction: a tutorial review, Proc. IEEE, 63(4)(1975), 561-580.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A study of LPC analysis of speech in additive noise",
"authors": [
{
"first": "J",
"middle": [],
"last": "Tierney",
"suffix": ""
}
],
"year": 1980,
"venue": "IEEE Trans. Acoust. Speech Signal Process",
"volume": "28",
"issue": "4",
"pages": "389--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Tierney, A study of LPC analysis of speech in additive noise, IEEE Trans. Acoust. Speech Signal Process., 28(4)(1980), 389-397.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Speech recognition of mandarin monosyllables",
"authors": [
{
"first": "T",
"middle": [
"F"
],
"last": "Li",
"suffix": ""
}
],
"year": 2003,
"venue": "Pattern Recognition",
"volume": "36",
"issue": "",
"pages": "2712--2721",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. F. Li, Speech recognition of mandarin monosyllables, Pattern Recognition, 36(2003), 2712-2721.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A stochastic approximation method",
"authors": [
{
"first": "H",
"middle": [],
"last": "Robbins",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Monro",
"suffix": ""
}
],
"year": 1951,
"venue": "Ann. Math. Statist",
"volume": "22",
"issue": "",
"pages": "400--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Robbins and S. Monro, A stochastic approximation method, Ann. Math. Statist., 22(1951), 400-407.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Stochastic Approximation and Nonlinear Regression",
"authors": [
{
"first": "A",
"middle": [],
"last": "Albert",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Abert and L. Gardner, Stochastic Approximation and Nonlinear Regression, Cambridge, MA, M.I.T., 1967.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Ann. R. Stat. Soc",
"volume": "39",
"issue": "",
"pages": "1--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, Ann. R. Stat. Soc. 39(1977), 1-35.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "On the convergence properties of the EM algorithm",
"authors": [
{
"first": "C",
"middle": [
"F J"
],
"last": "Wu",
"suffix": ""
}
],
"year": 1983,
"venue": "Ann. Stat",
"volume": "11",
"issue": "",
"pages": "95--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. F. J. Wu, On the convergence properties of the EM algorithm, Ann. Stat., 11(1983), 95-103.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "the Kullback-Leibler information number H(\u03bb o , \u03bb o )\u2212H(\u03bb o , \u03bb) \u2265 0 with equality if and only if p(x|\u03bb) = p(x|\u03bb o ) for all x, i.e., H(\u03bb o , \u03bb) has an absolutely maximum value at \u03bb = \u03bb o .",
"uris": null
},
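The FIGREF0 entry above states the non-negativity of the Kullback-Leibler information number. A short derivation is given below; it assumes the usual definition H(\lambda_o, \lambda) = \int p(x \mid \lambda_o) \log p(x \mid \lambda)\, dx, which is inferred from the surrounding text rather than quoted from the paper.

H(\lambda_o,\lambda_o) - H(\lambda_o,\lambda)
  = \int p(x\mid\lambda_o)\,\log\frac{p(x\mid\lambda_o)}{p(x\mid\lambda)}\,dx
  \;\ge\; -\log\int p(x\mid\lambda_o)\,\frac{p(x\mid\lambda)}{p(x\mid\lambda_o)}\,dx
  = -\log 1 = 0,

by Jensen's inequality applied to the convex function -\log, with equality if and only if p(x\mid\lambda) = p(x\mid\lambda_o) for (almost) all x; hence H(\lambda_o, \lambda) attains its maximum at \lambda = \lambda_o.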
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "in Section 3, a mandarin syllable is represented by a 12x8 matrix of feature values, which tend to be normally distributed. Let x n = (x 1 , ..., x n ) denote n unidentified syllables, where each x m , m = 1, ..., n, denotes a 12x8 matrix of feature values, which are used to learn the means \u00b5 kij , variances \u03c3 2 kij , i = 1, ..., 12, j = 1, ..., 8, k = 1, ..., K, of normal distributions of 12x8 feature values and the prior probabilities \u03b8 k (the probability for a syllable to appear) for K classes of syllables. For large number of classes, the stochastic approximation procedure in Section 4 is not able to estimate the means and variances, because the recursive procedure (16) needs tremendous size of computer memory. For simplicity, we let \u03b8 k = 1/K,",
"uris": null
},
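The FIGREF1 entry above describes learning the class means and variances from unidentified 12x8 LPCC matrices with equal priors \u03b8 k = 1/K. The following is a minimal Python sketch of one plausible implementation, not the authors' code: it assumes EM updates for a K-component, diagonal-covariance Gaussian mixture (consistent with the paper's citation of the EM literature), initialized from the single known sample of each class so that each mixture component stays identified with its syllable. All function and variable names are illustrative.

import numpy as np

def em_diag_gauss(X, mu0, var0, n_iter=20, eps=1e-6):
    # X    : (n, 96) unidentified syllables, each a flattened 12x8 LPCC matrix
    # mu0  : (K, 96) initial means, one known sample per class
    # var0 : (K, 96) initial variances; equal priors theta_k = 1/K are assumed
    mu = mu0.astype(float).copy()
    var = var0.astype(float) + eps
    for _ in range(n_iter):
        # E-step: posterior responsibilities under independent normals (log domain)
        log_r = -0.5 * (np.log(2 * np.pi * var)[None, :, :]
                        + (X[:, None, :] - mu[None, :, :]) ** 2 / var[None, :, :]).sum(axis=2)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)          # shape (n, K)
        # M-step: responsibility-weighted means and variances, per class and coordinate
        w = r.sum(axis=0)[:, None] + eps           # shape (K, 1)
        mu = (r.T @ X) / w
        var = (r.T @ (X ** 2)) / w - mu ** 2 + eps
    return mu, var

In this sketch the one known sample per class only seeds the initialization; the 90 unidentified samples drive the parameter estimates, mirroring the unsupervised-learning role they play in the paper.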
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "ij } denotes the matrix of LPCC of an input unknown syllable. The matrix of LPCC of an unknown syllable is compared with each known syllable c k represented by (\u03bc kij ,\u03c3 kij ). The Bayes rule (20) selects a syllable c k with the least value of l(c k ) from K known syllables to be the input unknown syllable.",
"uris": null
},
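The FIGREF2 entry above describes selecting the syllable c k with the smallest l(c k ). The sketch below is a hedged reading of that rule, since equation (20) is not reproduced here: it assumes l(c k ) is the negative log-likelihood, up to an additive constant, of the 12x8 feature matrix under independent normals with the estimated means and variances and equal priors. Names are illustrative.

import numpy as np

def bayes_classify(x, mu, var):
    # x   : (96,) flattened 12x8 LPCC matrix of the unknown syllable
    # mu  : (K, 96) estimated means; var : (K, 96) estimated variances
    # l(c_k) = sum_ij [ log sigma_kij + (x_ij - mu_kij)^2 / (2 sigma_kij^2) ]
    l = 0.5 * np.log(var).sum(axis=1) + ((x - mu) ** 2 / (2.0 * var)).sum(axis=1)
    return int(np.argmin(l))   # index of the syllable class chosen by the rule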
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "The total 100 samples (10 known samples and 90 unknown samples) are used for finding the matrices of MLE of the means and variances for 10 digits. This experiment is implemented five times, each time for one of five different students whose 10 digit pronunciations are considered as known samples. Note that the only training samples are the only one sample for each digit pronounced by a student and note that the testing samples are the mixed 90 unknown samples of 10 digits pronounced by the other 9 students. Actually, the experiment is a speaker-independent speech recognition. The 10 training samples and the 90 testing samples (90 mixed unknown samples also used for unsupervised learning of parameters) are totally separated.6.3. Speech Classification on the Mixed SamplesIn this study, two different classifiers are used to classify 90 unknown mixed digital samples since 10 digital samples pronounced by one student are already known.(a). Bayes Decision Rule.The estimated means and variances of each digit obtained in (6.2) are placed into the Bayes decision rule(20). The Bayes decision rule classifies 90 mixed samples (except 10 known samples for 10 digits (0-9)).",
"uris": null
},
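The FIGREF3 entry above describes a five-round, speaker-independent protocol: one speaker's 10 digit samples serve as the known samples, and the remaining 90 samples are both the unsupervised-learning set and the test set. The sketch below illustrates that split only; it reuses the illustrative em_diag_gauss and bayes_classify functions from the earlier sketches and assumes a hypothetical container samples[speaker][digit] holding flattened LPCC matrices.

import numpy as np

def run_experiment(samples, n_rounds=5, n_speakers=10, n_digits=10):
    rates = []
    for ref in range(n_rounds):                                        # known speaker for this round
        mu0 = np.stack([samples[ref][d] for d in range(n_digits)])     # one known sample per digit
        var0 = np.ones_like(mu0)                                       # illustrative initial variances
        X = np.stack([samples[s][d] for s in range(n_speakers) if s != ref
                      for d in range(n_digits)])                       # 90 mixed unknown samples
        y = np.array([d for s in range(n_speakers) if s != ref
                      for d in range(n_digits)])
        mu, var = em_diag_gauss(X, mu0, var0)        # unsupervised learning on the mixed samples
        pred = np.array([bayes_classify(x, mu, var) for x in X])
        rates.append(float((pred == y).mean()))      # recognition rate for this round
    return rates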
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "----------------------------------student 1 student 2 student 3 student 4 student 5 average ------------------------------------------------------------------------------------------------------",
"uris": null
},
"TABREF0": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "The measurements of features made on the speech waveform include energy, zero crossings. extrema count, formants, LPC cepstrum (LPCC) and the Mel frequency cepstrum coefficient (MFCC). The LPCC and MFCC are most commonly used for the features to represent a syllable. The LPC method provides a robust, reliable and accurate method for estimating the parameters that characterize the linear, time-varying",
"num": null
},
"TABREF1": {
"content": "<table><tr><td>6.1. Speech Signal Processing.</td></tr><tr><td>The speech signal of a mandarin monosyllable is sampled at 10k Hz. A Hamming window with a width</td></tr><tr><td>of 25.6</td></tr></table>",
"html": null,
"type_str": "table",
"text": "ms is applied every 12.8 ms for our study. A Hamming window with 256 points is used to select the data points to be analyzed. In this study, the 12x8 unknown parameters of features representing a digit are estimated by unsupervised learning. After learning the parameters, there are 10 12x8 matrices of estimates representing 10 digits. For each digit, use one known sample to identify a 12x8 matrix of estimates to represent the digit.",
"num": null
}
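The TABREF1 entry above specifies the front-end framing: 10 kHz sampling, with a 25.6 ms (256-point) Hamming window applied every 12.8 ms. The sketch below covers only that framing step; the subsequent LPCC computation is not shown, and the function name is illustrative rather than the authors'.

import numpy as np

def frame_signal(signal, fs=10000, win_ms=25.6, hop_ms=12.8):
    # Split a syllable waveform into overlapping Hamming-windowed frames:
    # a 256-point window (25.6 ms at 10 kHz) advanced by 128 points (12.8 ms).
    win = int(round(fs * win_ms / 1000.0))     # 256 samples
    hop = int(round(fs * hop_ms / 1000.0))     # 128 samples
    signal = np.asarray(signal, dtype=float)
    if len(signal) < win:                      # zero-pad very short signals
        signal = np.pad(signal, (0, win - len(signal)))
    n_frames = 1 + (len(signal) - win) // hop
    w = np.hamming(win)
    return np.stack([w * signal[i * hop : i * hop + win] for i in range(n_frames)])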
}
}
}