{
"paper_id": "O01-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:09:41.052050Z"
},
"title": "Pitch Marking Based on an Adaptable Filter and a Peak-Valley Estimation Method",
"authors": [
{
"first": "Jau-Hung",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "Advanced Technology Center, Computer and Communication Research Laboratories",
"institution": "Industrial Technology Research Institute",
"location": {
"addrLine": "Chutung 310",
"country": "Taiwan"
}
},
"email": "chenjh@itri.org.tw"
},
{
"first": "Yung-An",
"middle": [],
"last": "Kao",
"suffix": "",
"affiliation": {
"laboratory": "Advanced Technology Center, Computer and Communication Research Laboratories",
"institution": "Industrial Technology Research Institute",
"location": {
"addrLine": "Chutung 310",
"country": "Taiwan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In a text-to-speech (TTS) conversion system based on the time-domain pitch-synchronous overlap-add (TD-PSOLA) method, accurate estimation of pitch periods and pitch marks is necessary for pitch modification to assure an optimal quality of the synthetic speech. In general, there are two major issues on pitch marking: pitch detection and location determination. In this paper, an adaptable filter, which serves as a bandpass filter, is proposed for pitch detection to transform the voiced speech into a sine-like wave. Based on the sine-like wave, a peak-valley decision method is investigated to determine the appropriate part (positive part and negative part) of the voiced speech for pitch mark estimation. At each pitch period, two possible peaks/valleys are searched and the dynamic programming is performed to obtain the pitch marks. Experimental results indicate that our proposed method performed very well if correct pitch information is estimated.",
"pdf_parse": {
"paper_id": "O01-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "In a text-to-speech (TTS) conversion system based on the time-domain pitch-synchronous overlap-add (TD-PSOLA) method, accurate estimation of pitch periods and pitch marks is necessary for pitch modification to assure an optimal quality of the synthetic speech. In general, there are two major issues on pitch marking: pitch detection and location determination. In this paper, an adaptable filter, which serves as a bandpass filter, is proposed for pitch detection to transform the voiced speech into a sine-like wave. Based on the sine-like wave, a peak-valley decision method is investigated to determine the appropriate part (positive part and negative part) of the voiced speech for pitch mark estimation. At each pitch period, two possible peaks/valleys are searched and the dynamic programming is performed to obtain the pitch marks. Experimental results indicate that our proposed method performed very well if correct pitch information is estimated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In past years, the approach of concatenative synthesis has been adopted by many text-to-speech (TTS) systems [1] - [6] . The concatenative synthesis uses real recorded speech segments as the synthesis units and concatenates them together during synthesis.",
"cite_spans": [
{
"start": 109,
"end": 112,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 115,
"end": 118,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Also, the time-domain pitch-synchronous overlap-add (TD-PSOLA) [6] method has been employed to perform prosody modification. This method modifies the prosodic features of the synthesis unit according to the target prosodic information. Generally, the prosodic information of the speech includes pitch (the fundamental frequency), intensity, and duration, etc. For a synthesis scheme based on TD-PSOLA method, it is necessary to obtain a pitch mark for each pitch period in order to assure an optimal quality of the synthetic speech. The pitch mark is a reference point for the overlap of the speech signals.",
"cite_spans": [
{
"start": 63,
"end": 66,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "It is useful to have a speech synthesizer with various voices for speech synthesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Sometimes it is also important for a service-providing company to have a synthesizer with the voice of its own employee or the speaker of its favorite. For conventional TTS systems, however, it is a professional but tedious job to create a new voice. Recently, corpus-based TTS systems have been appreciated which use a large amount of speech segments. Some approaches selected the speech segments as the candidates of synthesis units. Establishing the synthesis units includes speech segmentation, pitch estimation, pitch marking, and so on. However, pitch marking is very labor-intensive among them if there involved no automatic mechanism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In general, there are two major issues on pitch marking: pitch detection and location determination. Compared to pitch detection [7] - [14] , few papers have been presented for pitch marking [15] [16] , which is also a difficult problem because of the great variability of the speech signals. Moulines et al. [15] proposed a pitch-marking algorithm based on the detection of abrupt changes at glottal closure instants. At each period, they assumed that the speech waveform could be represented by the concatenation of the response of two all-pole systems. On the other hand, Kobayashi et al. [16] used dyadic wavelet for pitch marking. The glottal closure instant was detected by searching for a local peak in the wavelet transform of the speech waveform.",
"cite_spans": [
{
"start": 129,
"end": 132,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 135,
"end": 139,
"text": "[14]",
"ref_id": "BIBREF13"
},
{
"start": 191,
"end": 195,
"text": "[15]",
"ref_id": "BIBREF14"
},
{
"start": 196,
"end": 200,
"text": "[16]",
"ref_id": "BIBREF15"
},
{
"start": 309,
"end": 313,
"text": "[15]",
"ref_id": "BIBREF14"
},
{
"start": 592,
"end": 596,
"text": "[16]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this paper, we propose a pitch-marking method based on an adaptable filter and a peak-valley estimation method. The block diagram is shown in Fig. 1 . The input signals are constrained to the voiced speech because only the periodic parts are interested. We introduce an adaptable filter, which serves as a bandpass filter, to transform the voiced speech into a sine-like wave. The autocorrelation method is then used to estimate the pitch periods on the sine-like wave. Also, a peak-valley decision method is presented to determine which part of the voiced speech is suitable for pitch mark estimation. The positive part (the speech with positive amplitude) and the negative part (the speech with negative amplitude) are investigated in this method. This is motivated from Fig. 2 ",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 151,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 776,
"end": 782,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The proposed adaptable filter serves as a bandpass filter in which its pass band is from 50 Hz to the detected fundamental frequency, up to 500 Hz, of the voiced speech. The adaptable filter is achieved by the following three steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autocorrelation Method",
"sec_num": null
},
{
"text": "Step 1. It computes the FFT (Fast Fourier Transform) to transform the voiced speech into the frequency domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autocorrelation Method",
"sec_num": null
},
{
"text": "Step 2. The fundamental frequency, f 0 , is detected by searching the first peak of the spectral contour.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autocorrelation Method",
"sec_num": null
},
{
"text": "Step 3. The IFFT (Inverse FFT) is invoked over the passband between 50 Hz and f 0 to obtain the filtered speech. An example of the adaptable filter is displayed in Fig. 2 . Panel (a) and (b) shows the waveforms of the original speech and the filtered speech, respectively. It can be seen that the filtered speech is generally a sine-like wave that reveals clear periodicity than that on the original speech waveform. For a frame in the middle of the voiced speech, the spectral contour is depicted in panel (d). Note that the frequency axis is not linearly plotted for the reason of inspecting the first spectral peak. The first peak was found at 168 Hz, which is the fundamental frequency. Finally, the pitch periods are obtained by analyzing the filtered speech using the conventional autocorrelation method.",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 170,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Autocorrelation Method",
"sec_num": null
},
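{
"text": "A minimal Python/NumPy sketch of the adaptable filter and the subsequent autocorrelation step is given below. It is not the authors' implementation: the frame length, the 50-500 Hz search band, and the use of the strongest in-band spectral component as f0 (the paper instead searches for the first peak of the spectral contour) are assumptions made for illustration.

import numpy as np

def adaptable_filter(frame, fs, f_low=50.0, f_high=500.0):
    # Step 1: transform the voiced frame into the frequency domain.
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    # Step 2 (approximation): take the strongest component in the
    # 50-500 Hz band as the fundamental frequency f0.
    band = (freqs >= f_low) & (freqs <= f_high)
    f0 = freqs[band][np.argmax(np.abs(spectrum[band]))]
    # Step 3: keep only the passband [f_low, f0] and transform back,
    # yielding a sine-like wave dominated by the fundamental.
    keep = (freqs >= f_low) & (freqs <= f0)
    spectrum[~keep] = 0.0
    return np.fft.irfft(spectrum, n=len(frame)), f0

def pitch_period_autocorr(x, fs, f_low=50.0, f_high=500.0):
    # Conventional autocorrelation pitch estimation on the filtered wave,
    # restricted to lags corresponding to 50-500 Hz.
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]
    lo, hi = int(fs / f_high), int(fs / f_low)
    return lo + int(np.argmax(ac[lo:hi]))
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autocorrelation Method",
"sec_num": null
},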
{
"text": "From observations, we found that the voiced speech, s [\u2022] , is synchronous with the filtered speech, o [\u2022] , either at peaks or at valleys. For the case illustrated in Fig. 2 (a) ",
"cite_spans": [
{
"start": 54,
"end": 57,
"text": "[\u2022]",
"ref_id": null
},
{
"start": 103,
"end": 106,
"text": "[\u2022]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 168,
"end": 178,
"text": "Fig. 2 (a)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Pitch Mark Determination Using a Peak-Valley Decision Method and Dynamic Programming 3-1 Peak-Valley Decision",
"sec_num": "3."
},
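{
"text": "The sketch below illustrates the peak-valley decision in Python. It is only an approximation of Equations (1) and (2): the speech amplitude s[m] at the extreme points of the filtered wave o[.] is accumulated over the pitch periods, and the use of absolute amplitudes as the cost is our assumption, since the exact cost definition is given by the original equations.

import numpy as np

def peak_valley_decision(s, o, period_bounds):
    # s: voiced speech, o: filtered sine-like wave,
    # period_bounds: list of (start, end) sample indices, one per pitch period.
    c_peak, c_valley = 0.0, 0.0
    for start, end in period_bounds:
        seg = o[start:end]
        m_peak = start + int(np.argmax(seg))    # local peak of o[.]
        m_valley = start + int(np.argmin(seg))  # local valley of o[.]
        c_peak += abs(s[m_peak])
        c_valley += abs(s[m_valley])
    # Adopt the part of s[.] whose extreme points carry the larger amplitude.
    return 'peak' if c_peak > c_valley else 'valley'
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitch Mark Determination Using a Peak-Valley Decision Method and Dynamic Programming 3-1 Peak-Valley Decision",
"sec_num": null
},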
{
"text": "where the symbols are defined as follows: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitch Mark Determination Using a Peak-Valley Decision Method and Dynamic Programming 3-1 Peak-Valley Decision",
"sec_num": "3."
},
{
"text": "Once the adoption of the peak or valley has been decided, say peak, the positions of ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3-2 Pitch mark determination Based on Dynamic Programming",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ") , ( ) , ( ) 1 ( k j g P L L k j d i k i ij i + \u2212 \u2212 = \u2212 , for i=2,\u2026,PN (3) \uf8fe \uf8fd \uf8fc \uf8f3 \uf8f2 \uf8f1 + + = \u2212 \u2212 (2) ) 2 , ( (1), ) 1 , ( min ) ( 1 1 i i i i i A j d A j d j A , for i=2,3,\u2026,PN",
"eq_num": "(4)"
}
],
"section": "3-2 Pitch mark determination Based on Dynamic Programming",
"sec_num": null
},
{
"text": "where PN is the total number of pitch period and j, k=1,2. In Equation 3 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3-2 Pitch mark determination Based on Dynamic Programming",
"sec_num": null
},
{
"text": "The penalty function is introduced here due to the preference of the highest peak.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3-2 Pitch mark determination Based on Dynamic Programming",
"sec_num": null
},
{
"text": "The search path of the dynamic programming is illustrated in Fig. 3 . The peak locations (pitch marks) can be obtained by back tracing the peak sequence corresponding to the smallest value of A i (1) and A i (2) . An example of the results of pitch marking is shown in Fig. 2(c) . Similar procedures described above can be applied to the case of \"valley\". For the voiced speech, the waveforms along with the pitch marks obtained from our pitch-marking program were visually displayed. The pitch marks were then checked and corrected by an experienced person through a friendly interface. For the evaluation of the experiments, we obtained 436 sets of human-labeled pitch marks, denoted as H, which comprises 23868 pitch marks.",
"cite_spans": [
{
"start": 208,
"end": 211,
"text": "(2)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 61,
"end": 67,
"text": "Fig. 3",
"ref_id": "FIGREF7"
},
{
"start": 269,
"end": 278,
"text": "Fig. 2(c)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "3-2 Pitch mark determination Based on Dynamic Programming",
"sec_num": null
},
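{
"text": "A minimal dynamic-programming sketch of Equations (3) and (4) follows: for each pitch period the two candidate peak locations L_i1 and L_i2 are scored against the estimated period P_i plus a penalty g(j,k), and the pitch marks are recovered by backtracking from the smaller accumulated distortion. The absolute difference and the penalty function are placeholders for illustration, not the authors' settings.

import numpy as np

def dp_pitch_marks(L, P, penalty):
    # L: (PN, 2) array of candidate peak positions per period
    #    (L[i][1] equals L[i][0] when only one peak is found).
    # P: (PN,) array of estimated pitch periods in samples.
    # penalty: function g(j, k) discouraging non-highest peaks (placeholder).
    PN = len(L)
    A = np.zeros((PN, 2))                # accumulated distortion, Eq. (4)
    back = np.zeros((PN, 2), dtype=int)  # best previous candidate index
    for i in range(1, PN):
        for j in range(2):
            # Eq. (3): deviation of the peak-to-peak distance from P_i,
            # accumulated with the best predecessor as in Eq. (4).
            totals = [A[i - 1][k]
                      + abs((L[i][j] - L[i - 1][k]) - P[i])
                      + penalty(j, k)
                      for k in range(2)]
            back[i][j] = int(np.argmin(totals))
            A[i][j] = totals[back[i][j]]
    # Backtrack from the smaller of A[PN-1][0] and A[PN-1][1].
    j = int(np.argmin(A[-1]))
    marks = [0] * PN
    for i in range(PN - 1, -1, -1):
        marks[i] = L[i][j]
        j = back[i][j]
    return marks
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3-2 Pitch mark determination Based on Dynamic Programming",
"sec_num": null
},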
{
"text": "The results of the peak-valley decision were verified by human judgment on visual displays. A success rate of 99.1% is obtained (4 of the 436 results were disagreed). For the female speaker, we found that 97.2% of the voiced segments reveal clear periodicity on the negative parts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4-2 Performance of the pitch marking method",
"sec_num": null
},
{
"text": "The proposed method generated 23860 pitch marks, denoted as I, without any duplication. The success rate of the pitch marking method is defined as follows: (6) As shown in Table 1 , a success rate of 97.2% is obtained (baseline), in contrast with the 95% and 97% success rates of the methods of [15] and [16] , respectively. However, we found that most of the errors are resulted from the incorrect results of pitch detection.",
"cite_spans": [
{
"start": 295,
"end": 299,
"text": "[15]",
"ref_id": "BIBREF14"
},
{
"start": 304,
"end": 308,
"text": "[16]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 172,
"end": 179,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "4-2 Performance of the pitch marking method",
"sec_num": null
},
{
"text": "Most of the pitch errors are due to large changes of pitch locating at the boundaries of the voiced speech. Providing correct pitch information, our method leads to a success rate of 99.5%. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4-2 Performance of the pitch marking method",
"sec_num": null
},
{
"text": "In this paper, a preliminary work on pitch marking has been proposed. We present the adaptable filter combined with the autocorrelation method for pitch detection. On the other hand, a peak-valley decision method is introduced to select either the positive or the negative parts for evaluation of pitch mark. Also, a dynamic-programming-based pitch mark determination method is demonstrated where two peaks/valleys are searched at each period. In the experiments, our pitch-marking method achieves 97.2% success rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This paper is a partial result of Project 3XS1B11 conducted by ITRI under sponsorship of the Ministry of Economic Affairs, Taiwan, R.O.C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A diphone synthesis based on time-domain prosodic modifications of speech",
"authors": [
{
"first": "C",
"middle": [],
"last": "Hamon",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Moulines",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Charpentier",
"suffix": ""
}
],
"year": 1989,
"venue": "Proc ICASSP",
"volume": "",
"issue": "",
"pages": "238--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hamon, C., E. Moulines, and F. Charpentier, \"A diphone synthesis based on time-domain prosodic modifications of speech,\" in Proc ICASSP, 1989, pp.238-241.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Speech segment network approach for optimization of synthesis unit set",
"authors": [
{
"first": "N",
"middle": [],
"last": "Iwahashi",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sagisaka",
"suffix": ""
}
],
"year": 1995,
"venue": "Computer Speech and Language",
"volume": "",
"issue": "",
"pages": "335--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iwahashi, N. and Y. Sagisaka, \"Speech segment network approach for optimization of synthesis unit set,\" Computer Speech and Language, 1995, pp.335-352.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Issues in text-to-speech conversion for Mandarin",
"authors": [
{
"first": "C",
"middle": [
"L"
],
"last": "Shih",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics and Chinese Language Processing",
"volume": "1",
"issue": "",
"pages": "37--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shih, C. L. and R. Sproat, \"Issues in text-to-speech conversion for Mandarin,\" in Computational Linguistics and Chinese Language Processing, vol.1, 1996, pp.37-86.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An RNN-based prosodic information Synthesizer for Mandarin text-to-speech",
"authors": [
{
"first": "S",
"middle": [
"H"
],
"last": "Chen",
"suffix": ""
},
{
"first": "S",
"middle": [
"H"
],
"last": "Hwang",
"suffix": ""
},
{
"first": "Y",
"middle": [
"R"
],
"last": "Wang",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Trans. on Speech and Audio Processing",
"volume": "6",
"issue": "3",
"pages": "226--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, S. H., S. H. Hwang and Y. R. Wang, \"An RNN-based prosodic information Synthesizer for Mandarin text-to-speech,\" IEEE Trans. on Speech and Audio Processing, Vol. 6, No. 3, 1998, pp. 226-239.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Corpus-based Mandarin speech synthesis with contextual syllabic units based on phonetic properties",
"authors": [
{
"first": "F",
"middle": [
"C"
],
"last": "Chou",
"suffix": ""
},
{
"first": "C",
"middle": [
"Y"
],
"last": "Tseng",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "893--896",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chou, F. C. and C. Y. Tseng, \"Corpus-based Mandarin speech synthesis with contextual syllabic units based on phonetic properties,\" in Proc. ICASSP, 1998, pp.893-896.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Diphone synthesis using an overlap-add technique for speech waveforms concatenation",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Charpentier",
"suffix": ""
},
{
"first": "M",
"middle": [
"G"
],
"last": "Stella",
"suffix": ""
}
],
"year": 1986,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "2015--2020",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charpentier, F. J. and M. G. Stella, \"Diphone synthesis using an overlap-add technique for speech waveforms concatenation,\" in Proc. ICASSP, 1986, pp. 2015-2020.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Comparative performance study of several pitch detection algorithms",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
},
{
"first": "M",
"middle": [
"J"
],
"last": "Cheng",
"suffix": ""
},
{
"first": "A",
"middle": [
"E"
],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "C",
"middle": [
"A"
],
"last": "",
"suffix": ""
}
],
"year": 1976,
"venue": "IEEE Trans. Acoust., Speech, Signal Processing",
"volume": "",
"issue": "",
"pages": "399--417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rabiner, L. R., M. J. Cheng, A. E. Rosenberg, and C. A. McGonegal, \"A Comparative performance study of several pitch detection algorithms,\" IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-24, 1976, pp. 399-417.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On the use of autocorrelation analysis for pitch detection",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1977,
"venue": "IEEE Trans. Acoust., Speech, Signal Processing",
"volume": "25",
"issue": "",
"pages": "24--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rabiner, L. R., \"On the use of autocorrelation analysis for pitch detection,\" IEEE Trans. Acoust., Speech, Signal Processing, Vol. ASSP-25, 1977, pp. 24-33.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Cepstrum pitch determination",
"authors": [
{
"first": "A",
"middle": [
"M"
],
"last": "Noll",
"suffix": ""
}
],
"year": 1967,
"venue": "J. Acoust. Soc. Amer",
"volume": "47",
"issue": "",
"pages": "293--309",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noll, A. M., \"Cepstrum pitch determination,\" J. Acoust. Soc. Amer., Vol. 47, 1967, pp. 293-309.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The SIFT algorithm for fundamental frequency estimation",
"authors": [
{
"first": "J",
"middle": [
"D"
],
"last": "Markel",
"suffix": ""
}
],
"year": 1972,
"venue": "IEEE Trans. Audio Electroacoust",
"volume": "20",
"issue": "",
"pages": "367--377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markel, J. D., \"The SIFT algorithm for fundamental frequency estimation,\" IEEE Trans. Audio Electroacoust., Vol. Au-20, 1972, pp. 367-377.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Pitch detection with a neural-net classifier",
"authors": [
{
"first": "E",
"middle": [],
"last": "Barnard",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Cole",
"suffix": ""
},
{
"first": "M",
"middle": [
"P"
],
"last": "Vea",
"suffix": ""
},
{
"first": "F",
"middle": [
"A"
],
"last": "Alleva",
"suffix": ""
}
],
"year": 1991,
"venue": "IEEE Trans. On Signal Processing",
"volume": "39",
"issue": "2",
"pages": "298--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barnard, E., R. A. Cole, M. P. Vea, and F. A. Alleva, \"Pitch detection with a neural-net classifier,\" IEEE Trans. On Signal Processing, vol. 39, No. 2, 1991, pp. 298-307.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A comparison of a wavelet functions for pitch detection of speech signals",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kadambe",
"suffix": ""
},
{
"first": "G",
"middle": [
"F"
],
"last": "Boudreaux-Bartels",
"suffix": ""
}
],
"year": 1991,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "449--452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kadambe, S., G. F. Boudreaux-Bartels, \"A comparison of a wavelet functions for pitch detection of speech signals,\" in Proc. ICASSP, 1991, pp.449-452.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Colored L-l filters and their application in speech pitch detection",
"authors": [
{
"first": "K",
"middle": [
"E"
],
"last": "Barner",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Trans. On Signal Processing",
"volume": "48",
"issue": "9",
"pages": "2601--2606",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barner, K. E., \"Colored L-l filters and their application in speech pitch detection,\" IEEE Trans. On Signal Processing, Vol. 48, No. 9, 2000, pp. 2601-2606.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Pitch tracking and tone features for Mandarin speech recognition",
"authors": [
{
"first": "H",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Seide",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "1523--1526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, H. and F. Seide, \"Pitch tracking and tone features for Mandarin speech recognition,\" in Proc. ICASSP, 2000, pp.1523-1526.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A real-time French text-to-speech system generating high-quality synthetic speech",
"authors": [
{
"first": "E",
"middle": [],
"last": "Moulines",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Emerard",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Larreur",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Le Saint Milon",
"suffix": ""
},
{
"first": "L",
"middle": [
"Le"
],
"last": "Faucheur",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Marty",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Charpentier",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Sorin",
"suffix": ""
}
],
"year": 1990,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "309--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moulines, E., F. Emerard, D. Larreur, J. L. Le Saint Milon, L. Le Faucheur, F. Marty, F. Charpentier, and C. Sorin, \"A real-time French text-to-speech system generating high-quality synthetic speech,\" in Proc. ICASSP, 1990, pp.309-312.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Wavelet analysis used in text-to-speech synthesis",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sakamoto",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Saito",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nishimura",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Suzuki",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Trans. on Circuits and Systems-II",
"volume": "45",
"issue": "8",
"pages": "1125--1129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kobayashi, M., M. Sakamoto, T. Saito, Y. Hashimoto, M. Nishimura, and K. Suzuki, \"Wavelet analysis used in text-to-speech synthesis,\" IEEE Trans. on Circuits and Systems-II, Analog and Digital Signal Processing, Vol. 45, No. 8, 1998, pp. 1125-1129.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "(a), which displays an example of waveform having the negative part reveals explicit periodicity. In general, it could synthesize better speech quality if the pitch marks are labeled at the positions of extreme points (peaks and valleys) of the speech. At each pitch period, two possible peaks/valleys are searched. Finally, the pitch marks are obtained by the dynamic programming by calculating the pitch distortion. Block diagram of the proposed pitch-marking method.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Results of the adaptable filter and pitch mark determination. (a) Waveform of the voiced speech with explicit periodicity on the negative part. (b) Waveform of the filtered speech. (c) Detected pitch marks. (d) Spectral contour (note that the frequency axis is not linearly plotted).",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "b), they are synchronous at valleys having explicit periodicity instead of those at peaks.As a result, the pitch marks could be easily determined at the negative part than those at the positive part. In the following, peak-valley decision method calculates two costs bysumming the amplitudes of s[m], where m represents the position of the local extreme point of o[\u2022] over each pitch period:",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "Cost estimated at the peaks of o[\u2022]. valley C : Cost estimated at the valleys of o[\u2022]. peak N : Total number of the peaks of o[\u2022]. valley N : Total number of the valleys of o[\u2022]. ] [n Pos peak : Position of the n-th peak of o[\u2022]. ] [n Pos valley : Position of the n-th valley of o[\u2022]. The peak-valley decision is made as follows: If peak C > valley C then the positive part (peak) of s[\u2022] is adopted for the evaluation of pitch mark. Otherwise, the negative part (valley) of s[\u2022] is adopted.",
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"uris": null,
"text": "pitch marks are determined by picking the peaks of s[\u2022]. For the i-th pitch period, P i , two highest peaks in the corresponding voiced speech are searched. Suppose the highest and the second highest peaks are located at L i1 and L i2 , respectively. It might occur that the second one is absent. For this case, we let L i2 = L i1 . For all the detected peaks, the determination of pitch mark is then performed based on dynamic programming. The distortion of pitch period, d i (j,k), and its accumulation, A i (j), are defined as follows:",
"type_str": "figure"
},
"FIGREF7": {
"num": null,
"uris": null,
"text": "Illustration of the peak-picking search path of the dynamic programming.",
"type_str": "figure"
},
"FIGREF8": {
"num": null,
"uris": null,
"text": "continuous speech database was established which provides the basic synthesis units of our Mandarin Chinese TTS system. This database is composed of 70 phrases and their lengths are between 4 to 6 Chinese characters. It includes an amount of 436 tonal syllables comprising the required 413 basic synthesis units. A native female speaker read them in normal speaking style. The speech signals were then digitized by a 16-bit A/D converter at a 44.1k Hz sampling rate. The syllable segmentation was manually done in order to obtain the precise boundaries of voiced speech and unvoiced speech. The total duration of the 436 voiced speech is about 2.1 minutes. For each syllable, the voiced speech was used to test the proposed methods. The frame size used in the adaptable filter was set to 4096 speech samples (92.8 ms).",
"type_str": "figure"
},
"TABREF1": {
"html": null,
"text": "Success rate of the pitch-marking method.",
"num": null,
"content": "<table><tr><td>Condition</td><td colspan=\"2\">Baseline Using correct pitch</td></tr><tr><td>Success rate</td><td>97.2%</td><td>99.5%</td></tr></table>",
"type_str": "table"
}
}
}
}