{
"paper_id": "O04-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:00:17.361740Z"
},
"title": "Improved prosody module in a Text-to-Speech system",
"authors": [
{
"first": "Wen-Wei",
"middle": [],
"last": "Liao",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jia-Lin",
"middle": [],
"last": "Shen",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The newly-developed prosody module of our text-to-speech (TTS) system is described in the paper. We present two main works on it's establishment and improvement. On the basis of potential factors influencing prosody parameters, inclusive of duration, pitch and intensity, the prosody model is built as groundwork of this module which is superior to the former rule-based one in generation of natural prosody. In addition, due to the current model's flaw in prediction of the pitch contour, we further employ an technique named \"Soft Template Markup Language\"(STEM-ML) to improve the smoothness of intonation which has the crucial influence on the naturalness of synthetic speech. Results of the evaluation indicate that the new prosody model is precise enough to predict reliable prosody parameters' values and with the STEM-ML technique, the prosody module can further yield 14.75% reduction in the root mean square (RMS) error of the predicted pitch contour. In order to produce natural-sounding synthetic speech, the generation of prosody plays a key role and is a difficult issue yet. Outperforming rule-based method [13][14] which was employed in our system previously, the newly-built statistical model based on sum-of-products approach with key factors affecting prosody [7][8][9][10][11] can predict more accurate values of prosody parameters. And in general, the intonation which is characterized by the pitch contour seems more crucial to the naturalness and intelligibility of synthesized speech in comparison with other prosody elements such as duration, intensity etc [6]. Nevertheless, the pitch contour generated by our current prosody model is still short of smoothness. As a result, we further concentrate our work on this problem. Based on the F0 (fundamental frequency) mean value predicted by the current prosody model, an technique named STEM-ML [2][3][4][5] is adopted to overcome this shortcoming. In the evaluation phrase, we prove that this technique can help to reduce the difference between the predicted and observed pitch contours, which means that a more natural intonation is achieved. The paper is organized as follows. In the chapter 2, we present the prosody modeling in our system, The chapter 3 reports STEM-ML technique and the result of implementation. The conclusion is described in the chapter 4.",
"pdf_parse": {
"paper_id": "O04-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "The newly-developed prosody module of our text-to-speech (TTS) system is described in the paper. We present two main works on it's establishment and improvement. On the basis of potential factors influencing prosody parameters, inclusive of duration, pitch and intensity, the prosody model is built as groundwork of this module which is superior to the former rule-based one in generation of natural prosody. In addition, due to the current model's flaw in prediction of the pitch contour, we further employ an technique named \"Soft Template Markup Language\"(STEM-ML) to improve the smoothness of intonation which has the crucial influence on the naturalness of synthetic speech. Results of the evaluation indicate that the new prosody model is precise enough to predict reliable prosody parameters' values and with the STEM-ML technique, the prosody module can further yield 14.75% reduction in the root mean square (RMS) error of the predicted pitch contour. In order to produce natural-sounding synthetic speech, the generation of prosody plays a key role and is a difficult issue yet. Outperforming rule-based method [13][14] which was employed in our system previously, the newly-built statistical model based on sum-of-products approach with key factors affecting prosody [7][8][9][10][11] can predict more accurate values of prosody parameters. And in general, the intonation which is characterized by the pitch contour seems more crucial to the naturalness and intelligibility of synthesized speech in comparison with other prosody elements such as duration, intensity etc [6]. Nevertheless, the pitch contour generated by our current prosody model is still short of smoothness. As a result, we further concentrate our work on this problem. Based on the F0 (fundamental frequency) mean value predicted by the current prosody model, an technique named STEM-ML [2][3][4][5] is adopted to overcome this shortcoming. In the evaluation phrase, we prove that this technique can help to reduce the difference between the predicted and observed pitch contours, which means that a more natural intonation is achieved. The paper is organized as follows. In the chapter 2, we present the prosody modeling in our system, The chapter 3 reports STEM-ML technique and the result of implementation. The conclusion is described in the chapter 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In consideration of severe limitation in the resource afforded by some applications in need of speech response, we choose to develop one storage-saving TTS system which has functioned successfully in our spoken dialogue system. Accordingly, the acoustic inventory used in our system is simply composed of about four hundred base syllable units whose duration and pitch contour will be modified with the algorithm called Pitch-Synchronous Overlap-Add (PSOLA) [1] [12] in the synthesizing phrase.",
"cite_spans": [
{
"start": 458,
"end": 461,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 462,
"end": 466,
"text": "[12]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In general, prosody mainly consists of duration, pitch, intensity of the spoken unit which is one syllable in terms of Mandarin. Besides, the break between units is one of it's important elements as well. Therefore, one utterance's prosody can be regarded as the elaborate composition of these four perceivable characteristics. And the variation in prosody stem from a lot of factors in different dimensions which can be observed in the real speech corpus such as the syllable's position in the sentence, lexical tone even the speaker's emotion and so on. Furthermore the complex interactions between factors further lead to another difficulty in designing the prosody model. As a result, in addition to inferring the reliable factors influencing the prosody, to model the interactions between factors intelligently is also a challenge in this work. .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prosody modeling",
"sec_num": "2."
},
{
"text": "The potential factors affect one characteristic simultaneously and have additive, multiplicative or repulsive interactions . Thus, it's troublesome to derive their eventual combined effect on the characteristic. However, for the purpose of assuring that the basically reasonable value for the characteristic can be preserved, one major factor in possession of dominant influence are elected to build the base model while the remaining minor factors take charge to constitute sub-models. In other words, under this framework, the base model provides fundamental value for the characteristic and sub-models act on this base value (BV for short) through the mechanism modeling their interaction to obtain the ultimate characteristic value (CV for short).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Base model and sub-models",
"sec_num": "2.1.1"
},
{
"text": "In order that this concept of modeling can be put into practice concretely, the training sample for sub-models, namely the CV of each syllable has to be normalized by it's corresponding BV beforehand.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ratio of characteristic value to base value (RCB)",
"sec_num": "2.1.2"
},
{
"text": "Thus, pre-processed CV is computed as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ratio of characteristic value to base value (RCB)",
"sec_num": "2.1.2"
},
{
"text": "CV = RCB (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BV",
"sec_num": null
},
{
"text": "In brief, the ultimate objective of the mechanism devised here is to make combined effect of minor factors quantized to one RCB value used as the multiplier of the BV. The interactions of minor factors are modeled by the approach of sum-of-products and the predicted CV is computed as follows. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mechanism",
"sec_num": "2.1.3"
},
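{
"text": "The following is a minimal, illustrative Python sketch of this mechanism (not the authors' code). Since Eq. (2) is only partially reproduced in this text, the sum-of-products form is simplified: the RCB is taken as a sum over products of sub-model parameters raised to their stress exponents and scaled by pairwise coefficients, and the predicted CV is this RCB multiplied by the BV. All names (predict_cv, bv, sub_params, coeffs, stress) are assumptions.\n\ndef predict_cv(bv, sub_params, coeffs, stress):\n    # bv: base value given by the major factor's base model\n    # sub_params: list of sub-model parameters s_i trained on RCBs\n    # coeffs: dict {(i, j): c_ij} coupling sub-models i and j\n    # stress: dict {(i, j): (m_ij, n_ij)} stress exponents of the coupled pair\n    rcb = 0.0\n    for (i, j), c in coeffs.items():\n        m, n = stress[(i, j)]\n        rcb += c * (sub_params[i] ** m) * (sub_params[j] ** n)\n    return bv * rcb  # Eq. (1) inverted: CV = BV * RCB\n\n# Toy usage: one base value, three sub-models, two coupled pairs.\ncv = predict_cv(\n    bv=180.0,\n    sub_params=[1.05, 0.92, 1.10],\n    coeffs={(0, 1): 0.6, (1, 2): 0.4},\n    stress={(0, 1): (1.0, 1.0), (1, 2): (1.0, 1.0)},\n)\nprint(cv)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mechanism",
"sec_num": "2.1.3"
},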
{
"text": "We infer seven potential factors crucial to the characteristics in prosody.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Factors",
"sec_num": "2.1.4"
},
{
"text": "Those are listed and described briefly as below. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Factors",
"sec_num": "2.1.4"
},
{
"text": "Accordingly., four kinds of base models and seven kinds of sub-models will be established in light of these factors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "identities",
"sec_num": "32"
},
{
"text": "Recorded by a single female speaker, the speech corpus contains 3657 sentences (70000 syllables;about 7 hours) with moderate intonation and constant speaking rate. In terms of linguistics ,the properly-designed one has enough coverage to tackle diverse variability of prosody.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "2.2.1"
},
{
"text": "Among these sentences, around 3200 ones are used as training data and the rest of them are reversed for the purpose of evaluation. The syllable boundaries in the waveform are further calibrated manually after aligned by the automatic speech recognizer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "2.2.1"
},
{
"text": "The distortion rate (DR) is defined to measure the precision of predicted value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "2.2.2"
},
{
"text": "O P O DR \u2212 = (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "2.2.2"
},
{
"text": "where O is the occurrence's CV and P is the predicted CV.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "2.2.2"
},
{
"text": "Accordingly, the objective function is defined as average DRs of all occurrences in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "2.2.2"
},
{
"text": "\u2211 = i i DR N O 1 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "2.2.2"
},
{
"text": "where N is the number of training samples. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "2.2.2"
},
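{
"text": "A small Python sketch of these two definitions, assuming the DR of Eq. (3) is the absolute relative error of the predicted CV and the objective O of Eq. (4) is its average over the training occurrences; the function names and toy numbers are illustrative only.\n\ndef distortion_rate(observed, predicted):\n    # Eq. (3): relative deviation of the predicted CV from the observed CV.\n    return abs(observed - predicted) / observed\n\ndef objective(observed_list, predicted_list):\n    # Eq. (4): average DR over the N training occurrences.\n    n = len(observed_list)\n    return sum(distortion_rate(o, p)\n               for o, p in zip(observed_list, predicted_list)) / n\n\n# Toy usage with three training occurrences.\nprint(objective([200.0, 180.0, 210.0], [190.0, 185.0, 200.0]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "2.2.2"
},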
{
"text": "In this section ,the characteristic models, inclusive of duration, pitch and intensity are discussed in terms of the related factors and precision. And as for the break characteristic, we straightforwardly give each type of break an empirical length instead of building the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristic model",
"sec_num": "2.3"
},
{
"text": "This characteristic means the time for which one syllable endures in the utterance. Since the boundaries between syllables are demarcated precisely by hand in our speech corpus, it is straightforward to calculate the syllable's duration. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Duration",
"sec_num": "2.3.1"
},
{
"text": "Each syllable's duration in the corpus needs to be normalized by the utterance's speaking rate (SR) which is estimated as: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaking rate",
"sec_num": null
},
{
"text": "\u2211 = SylN i BSi i D D SylN SR 1 (6 ) where i D",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaking rate",
"sec_num": null
},
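{
"text": "A minimal Python sketch of this normalization, assuming (per Eq. 6) that D_i is the observed duration of syllable i and D_BSi is the corpus-average duration of its base syllable; the FM rate (Eq. 7) and the power rate (Eq. 9) follow the same pattern with F0 means and powers. Names and numbers are illustrative.\n\ndef speaking_rate(durations, base_syllables, avg_base_duration):\n    # durations: observed duration of each syllable in the utterance\n    # base_syllables: base-syllable identity of each syllable\n    # avg_base_duration: corpus-average duration per base syllable\n    ratios = [d / avg_base_duration[bs]\n              for d, bs in zip(durations, base_syllables)]\n    return sum(ratios) / len(ratios)\n\ndef normalize_durations(durations, sr):\n    # Each syllable's duration is divided by the utterance's speaking rate.\n    return [d / sr for d in durations]\n\navg = {'ba': 0.21, 'ma': 0.19}\nsr = speaking_rate([0.25, 0.17], ['ba', 'ma'], avg)\nprint(sr, normalize_durations([0.25, 0.17], sr))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaking rate",
"sec_num": null
},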
{
"text": "Pitch here means the one syllable's pitch contour which is depicted with F0 (fundamental frequency) computed at a constant frame rate. In our task, this characteristic is discussed in two separate aspects, namely the pitch contour 's F0 mean (FM for short) and F0 shape. The former can leave the each syllable's pitch contour in a proper level and the later considerably concerns it's smoothness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitch",
"sec_num": "2.3.2"
},
{
"text": "In this chapter, we only concentrate discussion on the F0 mean. In the other hand, one technique named STEM-ML is adopted to deal with F0 shape. This work will be reported in next chapter. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitch",
"sec_num": "2.3.2"
},
{
"text": "\u2211 = SylN i Tonei i F F SylN FMR 1 (7) where i F",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitch",
"sec_num": "2.3.2"
},
{
"text": "The evaluation set consists of 300 stentences, exclusive of the sentence in the training set and the precision of the characteristic models are evaluated with DR defined in (3) . The results are shown in the Table 1 .",
"cite_spans": [
{
"start": 173,
"end": 176,
"text": "(3)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 208,
"end": 215,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.4"
},
{
"text": "Duration 11.35% Pitch 5.6% Intensity 1.98% Table 1 . The preciosion of characteristic models.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 50,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Precision",
"sec_num": null
},
{
"text": "The prosody model developed in the previous chapter establishes the groundwork for the prosody module of our TTS system. However, since it merely aims at assuring the accuracy of F0 mean without putting emphasis on the F0 shape, the predicted pitch contour lacks smoothness. For the sake of this drawback , we proceed to employ an model devised by Kochanski, G. P. et al. and called STEM-ML that is abbreviated from \"Soft Template Mark-up Language\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Template Mark-up Language (STEM-ML)",
"sec_num": "3."
},
{
"text": "It is a tagging system which computes the pitch contour in light of a set of tags serving to interpret the variation in the pitch contour more humanly. In order to make the artificial pitch contour closer to the real one, the mechanism of model has to comply with the constraints actually existing in the human uttering process. Thus, each tag concretely takes effect by imposing constraints on prediction of the pitch curve.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Template Mark-up Language (STEM-ML)",
"sec_num": "3."
},
{
"text": "As a result, the pitch curve is eventually generated by the model on condition that those constraints come to a compromise. In fact, such compromise can be considered to be the result of tradeoff between two events with reversal interaction, namely effort and error. The effort term stands for physiological energy consumed in the uttering processing and the error one means the communication error rate caused under the current effort. Obviously, they behave contrary to each other. With more effort, the uttering can achieve more accurate expression on words while the error results from little effort spent on uttering. In conclusion, the model can be also thought to predict the pitch curve with the goal of minimizing the sum of effort and error caused in the uttering process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Template Mark-up Language (STEM-ML)",
"sec_num": "3."
},
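{
"text": "The effort-versus-error trade-off can be pictured with a small least-squares sketch (this is an illustration, not the actual STEM-ML computation): the error term pulls the pitch curve toward the tone template, while the effort term penalizes fast pitch movement, and minimizing their weighted sum yields a smoothed curve. The weight w is an assumed free parameter; in STEM-ML this balance is governed by tags such as smooth.\n\nimport numpy as np\n\ndef fit_curve(template, w=4.0):\n    n = len(template)\n    # Error equations: the curve should match the template (identity rows).\n    a_err = np.eye(n)\n    b_err = np.asarray(template, dtype=float)\n    # Effort equations: first differences should be small (p[t+1] - p[t] ~ 0).\n    a_eff = w * (np.eye(n - 1, n, 1) - np.eye(n - 1, n))\n    b_eff = np.zeros(n - 1)\n    a = np.vstack([a_err, a_eff])\n    b = np.concatenate([b_err, b_eff])\n    p, *_ = np.linalg.lstsq(a, b, rcond=None)\n    return p\n\nprint(fit_curve([220.0, 260.0, 180.0, 200.0, 240.0]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Template Mark-up Language (STEM-ML)",
"sec_num": "3."
},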
{
"text": "Soft templates consists of pitch contours of four lexical tones (tone1,tone2,tone3 tone4) and the neutral tone (graphed in Figure 1 ).Since the syllable's tone shape varies considerably due to the affection from syllables nearby, five templates aren't apparently equal to express such variability .",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 131,
"text": "Figure 1",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Soft templates",
"sec_num": "3.1.1"
},
{
"text": "However, the adjective, \"Soft\" significantly implies that their shapes are allowed to change properly (see Figure 2 ). Consequently, these templates with the elastic property can form smoother pitch contour. is bended under control of the model and turns out to be the one (cross line) with tilt in the front part.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 115,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Soft templates",
"sec_num": "3.1.1"
},
{
"text": "The tags function as adjustable parameters of the model. Each kind of tag governs the pitch curve's variability in one certain dimension. For instance, the tag smooth determines the permissible velocity of change in pitch values and the priority over one pitch curve's shape and F0 mean is dependent on the tag syllable-type . Thus, the tags have the critical influence on the generated pitch curve's look and should be given proper values so that the one can has good quality. The estimation of tags will be reported in the section 3.3. 10 kinds of tags in total are used in our work as listed below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tags",
"sec_num": "3.1.2"
},
{
"text": "Moreover, to account for the more detailed pitch curve's variation inside one word, the tag syllable-strength is specially given a distinct value depending on the syllable's position inside the word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "max, min, base, range, add, slope, smooth, pdroop, adroop syllable-type, syllable-strength",
"sec_num": null
},
{
"text": "As the case for the sub-model SInW, this actually leads to 15 kinds of syllable-strength tags considered in the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "max, min, base, range, add, slope, smooth, pdroop, adroop syllable-type, syllable-strength",
"sec_num": null
},
{
"text": "Based on the templates and tags, the process of calculating the pitch curve mainly includes two steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculation of pitch contour",
"sec_num": "3.2"
},
{
"text": "The first step purposes to prepare the plain templates assembling a prototype of the pitch curve.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step1",
"sec_num": null
},
{
"text": "1. Select the templates according to each syllable's tone among five basic templates as mentioned above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step1",
"sec_num": null
},
{
"text": "The templates have to be modified to conform to the desired duration and F0 mean predicted by the prosody model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
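{
"text": "A minimal sketch of this step, assuming each template is stored as a short list of F0 samples: the template for the syllable's tone is resampled to the predicted duration by linear interpolation and shifted so that its mean equals the F0 mean predicted by the prosody model. The template values and all names are illustrative, not the system's actual data.\n\nimport numpy as np\n\nTEMPLATES = {\n    1: [220.0, 222.0, 224.0, 225.0, 226.0],  # Tone1: high level (toy values)\n    4: [260.0, 245.0, 225.0, 205.0, 190.0],  # Tone4: falling (toy values)\n}\n\ndef prepare_template(tone, target_len, target_f0_mean):\n    t = np.asarray(TEMPLATES[tone])\n    # Resample to the number of frames implied by the predicted duration.\n    x_old = np.linspace(0.0, 1.0, len(t))\n    x_new = np.linspace(0.0, 1.0, target_len)\n    resampled = np.interp(x_new, x_old, t)\n    # Shift so the mean matches the F0 mean predicted by the prosody model.\n    return resampled + (target_f0_mean - resampled.mean())\n\nprint(prepare_template(4, 8, 210.0))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step1",
"sec_num": null
},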
{
"text": "In this step, the tags start to be applied in the calculation along with ready templates. The constraints on generation of the pitch curve are realized by translating the tags to a number of conditional equations with pitch instants (F0) as unknown variables to be solved. One tag can brings in one A real case for the syllable's pitch curve (dot line) and phrase's one (dash line) generated by the model is plotted in Figure 4 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 419,
"end": 427,
"text": "Figure 4",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Step2",
"sec_num": null
},
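{
"text": "As a rough sketch of how such conditional equations can be assembled and solved jointly (illustrative only, not the actual STEM-ML solver), the example below uses two kinds of constraints over the unknown pitch samples of a phrase: an initial-value anchor and the slope constraint p[t+1] - p[t] = S for every adjacent pair of frames. The stacked system A x = b is solved by least squares; the anchor value and slope are assumed numbers.\n\nimport numpy as np\n\ndef phrase_pitch(n_frames, start_f0, slope_s):\n    rows, rhs = [], []\n    # Anchor the first pitch sample.\n    r = np.zeros(n_frames)\n    r[0] = 1.0\n    rows.append(r)\n    rhs.append(start_f0)\n    # Slope constraint: p[t+1] - p[t] = S for each frame pair.\n    for t in range(n_frames - 1):\n        r = np.zeros(n_frames)\n        r[t] = -1.0\n        r[t + 1] = 1.0\n        rows.append(r)\n        rhs.append(slope_s)\n    a = np.vstack(rows)\n    b = np.asarray(rhs)\n    x, *_ = np.linalg.lstsq(a, b, rcond=None)\n    return x\n\nprint(phrase_pitch(6, 200.0, -2.5))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step2",
"sec_num": null
},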
{
"text": "We estimate the tags by data fitting with the objective to minimize root mean square (RMS) error of the predicted F0 in comparison with the observed F0 in the data. The development data set composed of 300 sentences is designed to cover enough occurrences for each kind of tag and templates. Similarly, Levenberg-Marquardt algorithm with numerical differentiation is employed in this task. In addition, the number of pitch samples per syllable in the data is normalized to a constant and the syllable's un-voiced position is excluded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3.3.1"
},
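{
"text": "A small sketch of this kind of data fitting, using scipy.optimize.least_squares with the Levenberg-Marquardt method as a stand-in for the fitting procedure described above. The function toy_generate_f0 is a placeholder for the real STEM-ML pitch computation, and the two toy parameters stand in for the roughly 10 kinds of tags actually estimated; the observed values are made up.\n\nimport numpy as np\nfrom scipy.optimize import least_squares\n\nobserved = np.array([210.0, 207.0, 203.0, 198.0, 192.0, 187.0])\n\ndef toy_generate_f0(tags, n):\n    # Placeholder generator: a straight line controlled by two tag-like parameters.\n    offset, slope = tags\n    return offset + slope * np.arange(n)\n\ndef residuals(tags):\n    # Difference between generated and observed F0, which the fit minimizes.\n    return toy_generate_f0(tags, len(observed)) - observed\n\nfit = least_squares(residuals, x0=[200.0, 0.0], method='lm')  # Levenberg-Marquardt\nrms = np.sqrt(np.mean(fit.fun ** 2))\nprint(fit.x, rms)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3.3.1"
},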
{
"text": "The process of minimization ends in RMS error that is equal to 16.16 (Hz ) One example of fitting results is shown in Figure5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3.2"
},
{
"text": "\u9748 \u6d3b \u8abf \u5ea6 \u98db \u6a5f \u73ed \u6b21 \u6216 \u6d3e \u9063 \u5c08 \u6a5f \u4f86 \u63a5 \u904b \u50d1 \u6c11",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3.2"
},
{
"text": ". A example of one utterance's simulated pitch curve (dot line) along with the real one (dash line)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fig5",
"sec_num": null
},
{
"text": "in the data-fitting result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fig5",
"sec_num": null
},
{
"text": "The evaluation data set is the same to one in the chapter 2 and the prosody model is used as the baseline of this task. In the baseline, the templates are unvaried in the shape but shifted to have the F0 mean predicted by the prosody model. The accuracy of the pitch contour generated by the model is measured by the RMS error of predicted F0 .The result is shown in the Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 371,
"end": 378,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.4"
},
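{
"text": "A short sketch of the evaluation metric, assuming the RMS error is computed over paired predicted and observed F0 samples, together with the relative reduction reported in Table 2 (from 19.46 Hz to 16.59 Hz, about 14.75%). Names are illustrative.\n\nimport numpy as np\n\ndef rms_error(predicted, observed):\n    predicted = np.asarray(predicted, dtype=float)\n    observed = np.asarray(observed, dtype=float)\n    return np.sqrt(np.mean((predicted - observed) ** 2))\n\nbaseline, with_stem_ml = 19.46, 16.59\nprint(round(100.0 * (baseline - with_stem_ml) / baseline, 2))  # about 14.75 (%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.4"
},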
{
"text": "Prosody model (baseline) 19.46 (Hz)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.4"
},
{
"text": "Prosody model + STEM-ML 16.59 (Hz) Table 2 . The RMS F0 error of the pitch contour generated by the prosody model and prosody model + STEM-ML.",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 42,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.4"
},
{
"text": "The result indicates that based on the prosody model, this technique can further reduce 14.75% RMS error of F0 in the predicted pitch contour.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.4"
}
],
"back_matter": [
{
"text": "In this paper, we successively report two works on the development of the prosody module in our TTS system, Firstly, the prosody model based on the framework of base models and sub-models and sum-of-products approach has been proven to have the capability of predicting reliable prosody parameters' values. Furthermore, the employment of the STEM-ML technique further bring in the improvement in the smoothness of the intonation which the prosody model originally lacks In order to raise the accuracy of the prosody model, the refinement of the mechanism in the modeling should be necessary . Besides, we consider expanding the types of STEM-ML tags defined in our system to generate more natural and lively intonation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Pitch Synchronous Waveform Processing Techniques for Text-to-Speech Synthesis Using Diphones",
"authors": [
{
"first": "E",
"middle": [],
"last": "Moulines",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Charpentier",
"suffix": ""
}
],
"year": 1990,
"venue": "Speech Communication",
"volume": "9",
"issue": "",
"pages": "453--467",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moulines, E. and Charpentier, F. Pitch Synchronous Waveform Processing Techniques for Text-to-Speech Synthesis Using Diphones. Speech Communication 9, 453-467, 1990.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Prosody modeling with soft templates",
"authors": [
{
"first": "G",
"middle": [
"P"
],
"last": "Kochanski",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Shih",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kochanski, G. P. and Shih, C., \"Prosody modeling with soft templates,\" accepted by Speech Communication.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic modeling of Chinese intonation in continuous",
"authors": [
{
"first": "G",
"middle": [
"P"
],
"last": "Kochanski",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Shih",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of EUROSPEECH 2001",
"volume": "",
"issue": "",
"pages": "911--914",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kochanski, G. P. and Shih, C., \"Automatic modeling of Chinese intonation in continuous,\" in Proceedings of EUROSPEECH 2001, pp.911-914.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Stem-ml: Language independent prosody description",
"authors": [
{
"first": "Greg",
"middle": [
"P"
],
"last": "Kochanski",
"suffix": ""
},
{
"first": "Chilin",
"middle": [],
"last": "Shih",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 6 th International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grep P. Kochanski and Chilin Shih, \"Stem-ml: Language independent prosody description,\" in Proceedings of the 6 th International Conference on Spoken Language Processing, Beijing, China, 2000.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Chinese tone modeling with stem-ml",
"authors": [
{
"first": "Chilin",
"middle": [],
"last": "Shih",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"P"
],
"last": "Kochanski",
"suffix": ""
}
],
"year": 2000,
"venue": "ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chilin Shih and Greg P. Kochanski, \"Chinese tone modeling with stem-ml,\" in ICSLP, Beijing, China, 2000",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Which is more Important in a concatenative Text To Speech System -Pitch, Duration or Spectral Discontinuity ?",
"authors": [
{
"first": "M",
"middle": [],
"last": "Plumpe",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Meredith",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the third ESCA/COCOSDA Workshop on Speech Synthesis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Plumpe, M., Meredith, S. Which is more Important in a concatenative Text To Speech System - Pitch, Duration or Spectral Discontinuity ?, Proceedings of the third ESCA/COCOSDA Workshop on Speech Synthesis, Jenolan, Autralia, Nov. 25-29, 1998",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Assignment of segmental duration in text-to-speech synthesis",
"authors": [
{
"first": "J",
"middle": [
"P"
],
"last": "Van Santen",
"suffix": ""
}
],
"year": 1994,
"venue": "Computer, Speech and Language",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Van Santen, J. P. H. Assignment of segmental duration in text-to-speech synthesis. Computer, Speech and Language, 8, 1994.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multilingual Text-To-Speech Synthesis: The Bell Labs Approach",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Multilingual Text-To-Speech Synthesis: The Bell Labs Approach, Richard Sproat, editor, Kluwer Academic Publishers, 1998.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Prosodic modeling in text-to-Speech synthesis",
"authors": [
{
"first": "J",
"middle": [],
"last": "Van Santen",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of EuroSpeech'97",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. van Santen, \"Prosodic modeling in text-to-Speech synthesis\", Proceedings of EuroSpeech'97, KN-19,Rhodes 1997.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Modeling phone duration: Application to Catalan TTS",
"authors": [
{
"first": "A",
"middle": [],
"last": "Febrer",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Padrell",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bonafonte",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Third ESCA/COCOSDA Workshop on Speech Synthesis. Jenolan Caves",
"volume": "",
"issue": "",
"pages": "43--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Febrer, A.; Padrell, J.; & Bonafonte, A. 1998. Modeling phone duration: Application to Catalan TTS. Proceedings of the Third ESCA/COCOSDA Workshop on Speech Synthesis. Jenolan Caves, Australia, 43-46.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Cantonese text-to-speech synthesis using sub-syllable units",
"authors": [
{
"first": "K",
"middle": [
"M"
],
"last": "Law",
"suffix": ""
},
{
"first": "Tan",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 7th European Conference on on Speech Communication and Technology",
"volume": "2",
"issue": "",
"pages": "991--994",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K.M. Law and Tan Lee, \"Cantonese text-to-speech synthesis using sub-syllable units\", in Proceedings of the 7th European Conference on on Speech Communication and Technology, Vol.2, pp.991 -994, Aalborg, Denmark, September 2001.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The synthesis rules in a chinese text-to-speech system",
"authors": [
{
"first": "L",
"middle": [
"S"
],
"last": "Lee",
"suffix": ""
},
{
"first": "C",
"middle": [
"Y"
],
"last": "Tseng",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ouh-Young",
"suffix": ""
}
],
"year": 1989,
"venue": "IEEE trans. Acoust., speech, signal Processing",
"volume": "37",
"issue": "",
"pages": "1309--1320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L.S.Lee, C.Y. Tseng, and M. Ouh-Young, \"The synthesis rules in a chinese text-to-speech system\", IEEE trans. Acoust., speech, signal Processing, Vol. 37, pp. 1309-1320, 1989.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A method for the solution of certain problems in least sqrares",
"authors": [
{
"first": "K",
"middle": [],
"last": "Levenberg",
"suffix": ""
}
],
"year": 1944,
"venue": "Quart. Applied Math",
"volume": "2",
"issue": "",
"pages": "164--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K Levenberg, \"A method for the solution of certain problems in least sqrares,\" Quart. Applied Math., vol. 2, pp. 164-168, 1944.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A algorithm for least-squares estimation of non-linear parameters",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marquardt",
"suffix": ""
}
],
"year": 1963,
"venue": "SIAM J. Applied Math",
"volume": "11",
"issue": "",
"pages": "431--441",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marquardt, \"A algorithm for least-squares estimation of non-linear parameters,\" SIAM J. Applied Math, vol. 11, pp.431-441, 1963.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "the parameter of the base model for the characteristic i and SMN is the numbers of sub-models for the characteristic i and Si is the parameter of the sub-model i and Cij is a coefficient associating the sub-model i and sub-model j and mij and nij represent the stress of sub-model i and sub-model j respectively.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "and one neutral tone Left and right context tones (LRCT) 175 levels: 25(bi-tone) + 125(tri-tone) The syllable's position in the word and the syllable number of one word (SInW) 15 levels: 1+2+3+4+5 (longest word length) The word's position in the phrase (WInP) inter-syllable pause, inter-word pause, comma, period Right context initial (RCIt)",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Both base models and sub-models have only one parameter. The parameters of base models and sub-models are calculated as the average of observed occurrences's CVs and RCBs which correspond to them in the training corpus respectively. is the parameter of the model and oi is observed occurrence whose value is either RCB or CV depending on whether the model is a sub-model or base model and oN is the number of occurrences.Coefficients and StressFirstly, the initial values of coefficients and stress are calculated by means of linear least square error and given value 1 respectively. And furthermore beginning with the initial values, Levenberg-Marquardt algorithm[15][16] with numerical differentiation is employed to find the optimal values of these parameters with the goal of minimizing the objective function O defined in(4).",
"uris": null,
"num": null
},
"FIGREF3": {
"type_str": "figure",
"text": "LRCT 2. SInW 3. WInP 4. RCBk 5. RCIt",
"uris": null,
"num": null
},
"FIGREF4": {
"type_str": "figure",
"text": "BS 2. LRCT 3. SinW 4. WinP 5. RCBkFM rateEach syllable's FM in the corpus needs to be normalized by the utterance's FM rate (FMR) which is estimated as:",
"uris": null,
"num": null
},
"FIGREF5": {
"type_str": "figure",
"text": "5 tone templates. Fig2. A example of how one syllable is effected by it's neighbor. Succeeding to Tone3, the original shape of Tone1 template (dot line)",
"uris": null,
"num": null
},
"FIGREF6": {
"type_str": "figure",
"text": "or one group of equations. For example, the slope tag which controls the pitch's increasing or decreasing rate in the phrase level yields the equation Pt+1 -Pt = S where P and S are the pitch variable and the slope tag's value respectively. These joint conditional equations can be written as the form Ax = b where A is matrix with rows composed of the coefficients in the left-hand side of all equations and x is a vector containing the unknown variables and the b is a vector with elements consisting of the right-hand side of all equations .Consequently, the pitch values of the curve are the solution of the algebraic problem Ax = b. Furthermore, the calculation proceeds in the order of phrase level and the syllable level. Riding on the phrase's pitch curve solved firstly, the syllable's one is calculated . The process in the phrase level aims at deciding the trend of the whole resultant pitch curve which is finally obtained in the syllable level. Step2 is illustrated inFigure 3.",
"uris": null,
"num": null
},
"FIGREF7": {
"type_str": "figure",
"text": "The procedure for calculating pitch contour which is carried out in the order of the phrase and syllable levels .",
"uris": null,
"num": null
},
"FIGREF8": {
"type_str": "figure",
"text": "A example of the pitch contour generated by the model.",
"uris": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"4\">Power rate</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"3\">is estimated as:</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>PR</td><td>=</td><td colspan=\"2\">SylN 1</td><td colspan=\"2\">\u2211 SylN i</td><td colspan=\"2\">P</td><td colspan=\"3\">Tonei i P</td><td>(9)</td></tr><tr><td>where</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"12\">i P is power of one syllable (named Si), Tonei P</td><td>is average power of Tonei in the corpus and</td><td>is</td></tr><tr><td colspan=\"12\">syllable number in one utterance.</td></tr><tr><td colspan=\"2\">Power</td><td>=</td><td>log</td><td>10</td><td>(</td><td>i \u2211</td><td colspan=\"2\">N X</td><td>i</td><td>2</td><td>)</td><td>(8)</td></tr><tr><td>where</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"12\">Xi and N are the sample value and number of samples respectively.</td></tr><tr><td colspan=\"3\">Factors</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"5\">Major LT</td><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"9\">Minor 1. BS 2. LRCT 3. SInW 4. WInP 5. RCBk</td></tr></table>",
"text": "Each syllable's power in the corpus needs to be normalized by the utterance's power rate (PR) which",
"num": null
}
}
}
}