{ "paper_id": "O13-5000", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:03:26.693731Z" }, "title": "Computational Linguistics & Chinese Language Processing Aims and Scope", "authors": [ { "first": "Chia-Hui", "middle": [], "last": "Chang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Chia-Ping", "middle": [], "last": "Chen", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jia-Ching", "middle": [], "last": "Wang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jian-Cheng", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Hsun-Wen", "middle": [], "last": "Chiu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "chiuhsunwen@gmail.com" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "jason.jschang@gmail.com" }, { "first": "Jim", "middle": [], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "You-Shan", "middle": [], "last": "Chung", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Ju-Yun", "middle": [], "last": "Cheng", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Yi-Chin", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", 
"location": {} }, "email": "" }, { "first": "Chung-Hsien", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "\u694a\u5584\u9806", "middle": [], "last": "\u5433\u4e16\u5f18", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "", "middle": [], "last": "\u9673\u826f\u5703", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "", "middle": [], "last": "\u90b1\u5b8f\u6607", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "", "middle": [], "last": "\u694a\u4ec1\u9054", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Shan-Shun", "middle": [], "last": "Yang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Shih-Hung", "middle": [], "last": "Wu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Liang-Pu", "middle": [], "last": "Chen", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Hung-Sheng", "middle": [], "last": "Chiu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ren-Dar", "middle": [], "last": "Yang", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "on Oct. 4-5, 2013. ROCLING is the leading and most comprehensive conference on computational linguistics and speech processing in Taiwan, bringing together researchers, scientists and industry participants to present their work and discuss recent trends in the field. 
This special issue presents extended and reviewed versions of eight papers meticulously selected from ROCLING 2013, including 4 natural language processing papers and 4 speech processing papers. The first paper, done at Chaoyang University of Technology, focuses on entailment analysis for improving Chinese textual entailment recognition. By considering four special cases, the RTE system can be significantly improved. The second and third papers are applications of statistical machine translation to Chinese spelling check and English grammatical error correction. Both papers are research work from National Tsing Hua University. The fourth paper, from Academia Sinica, studies Chinese noun-noun compounds and concludes that two nouns are linked either by semantic roles assigned by events or by static relations, including meronymy, conjunction, and the host-attribute-value relation. The fifth paper is from National Cheng Kung University. This research work employs a Hidden Markov Model-based synthesis approach to generate Mandarin songs with arbitrary lyrics and melodies in a certain pitch range. The sixth paper is a joint work from National Taiwan University, National Tsing Hua University, and Chang Gung University. This research uses speech recognition and assessment to automatically find potentially problematic utterances when preparing a Taiwanese speech corpus. The seventh paper is from National Taiwan University of Science and Technology. For LMR-mapping-based voice conversion, this work places a histogram-equalization module and a target frame selection module immediately before and after the LMR-based mapping. The eighth paper, from National Chi Nan University, presents a noise-robust speech feature representation method for speech recognition. This method applies linear predictive coding to the time series of cepstral coefficients and then removes the linear prediction error component. 
The Guest Editors of this special issue would like to thank all of the authors and reviewers for sharing their knowledge and experience at the conference. We hope this issue helps direct and inspire new pathways of NLP and spoken language research.", "pdf_parse": { "paper_id": "O13-5000", "_pdf_hash": "", "abstract": [ { "text": "on Oct. 4-5, 2013. ROCLING is the leading and most comprehensive conference on computational linguistics and speech processing in Taiwan, bringing together researchers, scientists and industry participants to present their work and discuss recent trends in the field. This special issue presents extended and reviewed versions of eight papers meticulously selected from ROCLING 2013, including 4 natural language processing papers and 4 speech processing papers. The first paper, done at Chaoyang University of Technology, focuses on entailment analysis for improving Chinese textual entailment recognition. By considering four special cases, the RTE system can be significantly improved. The second and third papers are applications of statistical machine translation to Chinese spelling check and English grammatical error correction. Both papers are research work from National Tsing Hua University. The fourth paper, from Academia Sinica, studies Chinese noun-noun compounds and concludes that two nouns are linked either by semantic roles assigned by events or by static relations, including meronymy, conjunction, and the host-attribute-value relation. The fifth paper is from National Cheng Kung University. This research work employs a Hidden Markov Model-based synthesis approach to generate Mandarin songs with arbitrary lyrics and melodies in a certain pitch range. The sixth paper is a joint work from National Taiwan University, National Tsing Hua University, and Chang Gung University. 
This research uses speech recognition and assessment to automatically find potentially problematic utterances when preparing a Taiwanese speech corpus. The seventh paper is from National Taiwan University of Science and Technology. For LMR-mapping-based voice conversion, this work places a histogram-equalization module and a target frame selection module immediately before and after the LMR-based mapping. The eighth paper, from National Chi Nan University, presents a noise-robust speech feature representation method for speech recognition. This method applies linear predictive coding to the time series of cepstral coefficients and then removes the linear prediction error component. The Guest Editors of this special issue would like to thank all of the authors and reviewers for sharing their knowledge and experience at the conference. We hope this issue helps direct and inspire new pathways of NLP and spoken language research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "et al. proposed reverse alignment of contradictory word pairs, which effectively alleviates the misjudgments that the earlier forward word alignment tended to produce. During inference for textual entailment recognition, various kinds of semantic and contextual information must be processed. Although the NTCIR-10 RITE-2 task (Watanabe et al., 2013) aims to evaluate a variety of semantic/contextual processing systems, it is difficult in that setting to conduct research that focuses on specific linguistic phenomena. The dataset of the RITE-2 Japanese subtask covers the linguistic phenomena required to recognize the entailment relation between T1 and T2: sentence pairs were extracted from the binary-class (BC) subtask dataset as samples, sentences exhibiting particular linguistic phenomena were constructed, and these sentences were added to the dataset for use as unit tests. The unit test data correspond to a subset of the multi-class (MC) task dataset. Although this dataset is not large, it can be used for various kinds of research, including analyzing the linguistic problems that appear in the RITE datasets, measuring the recognition accuracy for each linguistic phenomenon, and providing training and test data for classifiers of the various linguistic phenomena. 3. System Architecture: The flowchart of our system is shown in Figure 1; its basic components are \"preprocessing\", \"word segmentation\", \"Chinese Simplified-Traditional conversion\", \"special type classification\", individual \"SVM classifiers\", and, finally, \"result integration\". et al., 2002) three characteristics. Bleu (Zhou et al., 2006) ", "cite_spans": [ { "start": 499, "end": 512, "text": "et al., 2002)", "ref_id": null }, { "start": 523, "end": 542, "text": "(Zhou et al., 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Chinese spell checking is a task involving automatically detecting and correcting typographical errors (typos), roughly corresponding to misspelled words in English. 
In this paper, we define typos as Chinese characters that are misused due to shape or phonological similarity. Liu et al. (2011) show that people tend to unintentionally generate typos that sound similar (e.g., *\u63aa\u6298 [cuo zhe] and \u632b\u6298 [cuo zhe]), or look similar (e.g., *\u56fa\u96e3 [gu nan] and \u56f0\u96e3 [kun nan]). On the other hand, some typos are found on the Web (such as in forums). Data-driven, statistical spell checking approaches appear to be more robust and perform better. Statistical methods tend to use a large monolingual corpus to create a language model to validate the correction hypotheses. Considering \"\u5fc3\u662f\" [xin shi], the two characters \"\u5fc3\" [xin] and \"\u662f\" [shi] form a bigram with high frequency in a monolingual corpus, so we may determine that \"\u5fc3\u662f\" [xin shi] is not a typo after all.", "cite_spans": [ { "start": 277, "end": 294, "text": "Liu et al. (2011)", "ref_id": "BIBREF8" }, { "start": 798, "end": 803, "text": "[xin]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, we propose a model that combines rule-based and statistical approaches to detect errors and generate the most appropriate corrections in Chinese text. Once an error is identified by the rule-based detection model, we use the statistical machine translation (SMT) model (Koehn, 2010) to provide the most appropriate correction. Rule-based models tend to ignore context, so we use SMT to deal with this problem. Our model treats spelling correction as a kind of translation, where typos are translated into correctly spelled words according to the translation probability and the language model probability. Consider the same case \"\u5fc3\u662f\u5f88\u91cd\u8981\u7684\u3002\" [xin shi hen zhong yao de]. 
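The bigram-frequency validation idea described above (e.g., treating the frequent bigram "心是" as valid rather than a typo) can be sketched as follows. This is a minimal illustration under invented toy counts and a hypothetical threshold, not the authors' implementation:

```python
# Minimal sketch of validating a string with a bigram language model,
# as in the "xin shi" example above. All counts are invented toy data.
import math

bigram_counts = {("xin", "shi"): 2500}  # hypothetical corpus counts
total_bigrams = 1_000_000

def bigram_logprob(w1, w2):
    """Add-one smoothed log-probability of the bigram (w1, w2)."""
    count = bigram_counts.get((w1, w2), 0)
    return math.log((count + 1) / (total_bigrams + len(bigram_counts)))

def looks_valid(w1, w2, threshold=math.log(1e-5)):
    """Treat a sufficiently frequent bigram as valid rather than a typo."""
    return bigram_logprob(w1, w2) > threshold
```

A frequent pair such as ("xin", "shi") clears the threshold, while an unseen pair falls below it and remains a typo candidate.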
The string \"\u5fc3\u662f\" [xin shi] would not be incorrectly replaced with \"\u5fc3\u4e8b\" [xin shi] because we would consider \"\u5fc3\u662f\" [xin shi] to be highly probable, according to the language model. The rest of the paper is organized as follows. We present the related work in the next section. Then, we describe the proposed model for automatically detecting and correcting spelling errors in Section 3. Section 4 and Section 5 present the experimental data, results, and performance analysis. We conclude in Section 6.", "cite_spans": [ { "start": 282, "end": 295, "text": "(Koehn, 2010)", "ref_id": "BIBREF6" }, { "start": 781, "end": 782, "text": "[", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Chinese spell checking is a task involving automatically detecting and correcting typos in a given Chinese sentence. Previous work typically takes the approach of combining a confusion set and a language model. A rule-based approach depends on dictionary knowledge and a confusion set, a collection of characters that are visually or phonologically similar. On the other hand, statistical methods usually use a language model generated from a reference corpus. A statistical language model assigns a probability to a sentence by means of n-gram probabilities, which can be used to compute the likelihood of a corrected sentence. Chang (1995) proposed a system that replaces each character in the sentence based on the confusion set and estimates the probability of all modified sentences according to a bigram language model built from a newspaper corpus, before comparing the probabilities before and after substitution. They used a confusion set consisting of pairs of characters with similar shapes that were collected by comparing the original text and its OCR results. Similarly, Zhuang et al. 
(2004) proposed an effective approach using OCR to recognize a possible confusion set. In addition, Zhuang et al. (2004) also used a multi-knowledge-based statistical language model, the n-gram language model, and Latent Semantic Analysis. Nevertheless, the experiments by Zhuang et al. (2004) seem to show that the simple n-gram model performs the best.", "cite_spans": [ { "start": 731, "end": 743, "text": "Chang (1995)", "ref_id": "BIBREF0" }, { "start": 1183, "end": 1203, "text": "Zhuang et al. (2004)", "ref_id": "BIBREF17" }, { "start": 1297, "end": 1317, "text": "Zhuang et al. (2004)", "ref_id": "BIBREF17" }, { "start": 1470, "end": 1490, "text": "Zhuang et al. (2004)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2." }, { "text": "In recent years, Chinese spell checkers have incorporated word segmentation. The method proposed by Huang et al. (2007) incorporates the Sinica Word Segmentation System (Ma & Chen, 2003) to detect typos. With a character-based bigram language model and the rule-based methods of dictionary knowledge and confusion sets, the method determines whether a word is a typo or not. There are many more systems that use word segmentation to detect errors. For example, in Hung and Wu (2009), the given sentence is segmented using a bigram language model. In addition, the method also uses a confusion set and common error templates manually edited and provided by the Ministry of Education in Taiwan (MOE, 1996). Chen and Wu (2010) modified the system proposed by Hung and Wu (2009) by combining statistics-based methods and an automatically generated template matching module to detect and correct typos based on the language model. Closer to our method, Wu et al. (2010) adopted the noisy channel model, a framework used both in spell checkers and in machine translation systems. The system combined a statistics-based method and template matching with the help of a dictionary and a confusion set. 
They also used word segmentation to detect errors, but they did not use an existing word segmenter, as Huang et al. (2007) did, because that might regard a typo as a new word. They used a backward longest-first approach to segment sentences with an online dictionary sponsored by the MOE (MOE, 2007), and templates with a confusion set provided by Liu et al. (2009). The system also treated Chinese spell checking as a kind of translation, combining the template module and the translation module to achieve higher precision or recall.", "cite_spans": [ { "start": 100, "end": 119, "text": "Huang et al. (2007)", "ref_id": "BIBREF3" }, { "start": 169, "end": 186, "text": "(Ma & Chen, 2003)", "ref_id": "BIBREF9" }, { "start": 466, "end": 484, "text": "Hung and Wu (2009)", "ref_id": null }, { "start": 695, "end": 706, "text": "(MOE, 1996)", "ref_id": "BIBREF12" }, { "start": 709, "end": 727, "text": "Chen and Wu (2010)", "ref_id": "BIBREF16" }, { "start": 760, "end": 778, "text": "Hung and Wu (2009)", "ref_id": null }, { "start": 950, "end": 966, "text": "Wu et al. (2010)", "ref_id": "BIBREF16" }, { "start": 1297, "end": 1316, "text": "Huang et al. (2007)", "ref_id": "BIBREF3" }, { "start": 1474, "end": 1489, "text": "MOE (MOE, 2007)", "ref_id": "BIBREF11" }, { "start": 1541, "end": 1558, "text": "Liu et al. (2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2." }, { "text": "In our system, we also treat the Chinese spell checking problem as machine translation, but we use a different method of handling word segmentation to detect typos, and a translation model in which typos are translated into correctly spelled words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2." }, { "text": "In this section, we describe our solution to the problem of Chinese spell checking. 
In the error detection phase, the given Chinese sentence is segmented into words (Section 3.1). The detection module then identifies and marks the words that may be typos (Section 3.2). In the error correction phase, we use the statistical machine translation (SMT) model to translate the sentences containing typos into correct ones (Section 3.3). In the rest of this section, we describe our solution to this problem in more detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3." }, { "text": "Unlike English text, in which sentences are sequences of words delimited by spaces, Chinese texts are represented as strings of Chinese characters (called Hanzi) without word delimiters. Therefore, word segmentation is a pre-processing step required for many Chinese NLP applications. In this study, we also perform word segmentation to reduce the search space and the probability of false alarms. After segmentation, sequences of two or more singleton words are considered likely to contain an error. Nevertheless, over-segmentation might lead to falsely identified errors, which we will describe in Section 3.2. Consider the sentence \"\u9664\u4e86\u8981\u6709\u8d85\u4e16\u4e4b\u624d\uff0c\u4e5f\u8981\u6709\u5805\u5b9a\u7684\u610f\u5fd7\" [chu le yao you chao shi zhi cai, ye yao you jian ding de yi zhi], which is segmented into \"\u9664\u4e86/\u8981/\u6709/\u8d85\u4e16/\u4e4b/\u624d/\uff0c/\u4e5f/\u8981/\u6709/\u5805\u5b9a/\u7684/\u610f\u5fd7\". The part \"\u8d85\u4e16\u4e4b\u624d\" [chao shi zhi cai] of the sentence is over-segmented and runs the risk of being identified as containing a typo. 
To solve the problem of over-segmentation, we used additional lexical items to reduce the chance of generating false alarms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modified Chinese Word Segmentation System", "sec_num": "3.1" }, { "text": "Motivated by the observation that a typo often causes over-segmentation in the form of a sequence of single-character words, we target the sequences of single-character words as candidates for typos. To identify the points of typos, we take into consideration all n-grams consisting of single-character words in the segmented sentence. In addition to a Chinese dictionary, we also include a list of web-based n-grams to reduce false alarms due to the limited coverage of the dictionary. When a sequence of singleton words is not found in the dictionary or in the web-based character n-grams, we regard the n-gram as containing a typo. For example, \"\u68ee\u6797 \u7684 \u82b3 \u591a \u7cbe\" [sen lin de fang duo jing] is segmented into consecutive singleton words; bigrams such as \"\u7684 \u82b3\" [de fang] and \"\u82b3 \u591a\" [fang duo], and trigrams such as \"\u7684 \u82b3 \u591a\" [de fang duo] and \"\u82b3 \u591a \u7cbe\" [fang duo jing], are all considered candidates for typos, since those n-grams are not found in the reference list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Detection", "sec_num": "3.2" }, { "text": "Integrating Dictionary and Web N-grams for Chinese Spell Checking 21", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Detection", "sec_num": "3.2" }, { "text": "Once we generate a list of candidates of typos, we attempt to correct typos using a statistical machine translation model to translate typos into correct words. 
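The detection step above (flagging n-grams of consecutive singleton words that appear in neither the dictionary nor the web n-gram list) can be sketched as follows. The word lists are toy data invented for illustration, not the actual resources:

```python
# Sketch of the detection step: within runs of single-character words,
# any n-gram (n = 2..4) absent from both the dictionary and the web
# n-gram list is flagged as a typo candidate. Word lists are toy data.
dictionary = {"森林", "堅定", "意志"}               # hypothetical lexicon entries
web_ngrams = {"的芬", "芬多", "芬多精", "的芬多"}    # hypothetical web n-grams

def typo_candidates(words, max_n=4):
    """Return unseen n-grams built from consecutive singleton words."""
    candidates = []
    run = []                        # current run of single-character words
    for w in words + [""]:          # empty sentinel flushes the last run
        if len(w) == 1:
            run.append(w)
            continue
        for n in range(2, max_n + 1):
            for i in range(len(run) - n + 1):
                ngram = "".join(run[i:i + n])
                if ngram not in dictionary and ngram not in web_ngrams:
                    candidates.append(ngram)
        run = []
    return candidates
```

For the paper's example segmentation 森林/的/芳/多/精, every 2- to 4-gram over the singleton run 的 芳 多 精 is flagged, since only the correctly spelled 芬 variants are in the reference lists.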
When given a candidate, we first generate all correction hypotheses by replacing each character of the candidate typo with similar characters, one character at a time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Correction", "sec_num": "3.3" }, { "text": "Take the candidate \"\u6c23\u4efd\" [qi fen] as an example: the model generates all translation hypotheses according to the visual and phonological confusion sets. Table 1 shows some translation hypotheses. The translation hypotheses are then validated (or, from the viewpoint of SMT, pruned) using the dictionary. ", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 155, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Error Correction", "sec_num": "3.3" }, { "text": "The translation probability tp indicates how likely it is that a typo is translated into a given correct word. The tp of each correction is calculated using the following formula: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6c23\u82ac \u6c23\u6c1b", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "tp = \u03b3 * log10(freq(trans) / freq(candi))", "eq_num": "(1)" } ], "section": "\u6c23\u82ac \u6c23\u6c1b", "sec_num": null }, { "text": "where freq(trans) is the frequency of the translation, freq(candi) is the frequency of the candidate, and \u03b3 is the weight for the error type: visual or phonological.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6c23\u82ac \u6c23\u6c1b", "sec_num": null }, { "text": "Take \"\u6c23\u4efd\" [qi fen] from \"\u4e0d/\u4e00\u6a23/\u7684/\u6c23/\u4efd\" [bu/yi yang/de/qi/fen] for instance; the translations with non-zero tp after 
filtering are shown in Table 2. Only two translations are possible for this candidate: \"\u6c23\u61a4\" [qi fen] and \"\u6c23\u6c1b\" [qi fen]. We use a simple, publicly available decoder written in Python to correct potential spelling errors found by the detection module. The decoder reads one Chinese sentence at a time and attempts to \"translate\" the sentence into a correctly spelled one. The decoder translates monotonically, without reordering the Chinese words and phrases, using two models: the translation probability model and the language model. These two models are read from a data directory containing two text files: a translation model in GIZA++ (Och & Ney, 2003) format and a language model in SRILM (Stolcke et al., 2011) format. Both models are stored in memory for quick access.", "cite_spans": [ { "start": 750, "end": 767, "text": "(Och & Ney, 2003)", "ref_id": "BIBREF13" }, { "start": 805, "end": 827, "text": "(Stolcke et al., 2011)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 133, "end": 140, "text": "Table 2", "ref_id": "TABREF17" } ], "eq_spans": [], "section": "\u6c23\u82ac \u6c23\u6c1b", "sec_num": null }, { "text": "The decoder invokes the two modules to load the translation and language models and decodes the input sentences, storing the result as output. The decoder computes the probability of the output sentences according to the models. It works by summing over all possible ways that the model could have generated the corrected sentence from the input sentence. Although, in general, covering all possible corrections in the translation and language models is intractable, a majority of error instances can be \"translated\" effectively via the translation model and the language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translations", "sec_num": null }, { "text": "Our systems were designed to provide wide-coverage spell checking for Chinese. 
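The monotone decoding described above can be sketched in Python, the language the paper's decoder is written in. This is a toy greedy illustration, not the authors' decoder: each token is either kept or replaced by a confusion-set translation, and options are scored by translation log-probability plus a bigram language-model log-probability. All tables are invented toy values:

```python
# Toy sketch of monotone decoding for spelling correction: each input
# token is kept or replaced, scored by translation plus bigram LM
# log-probabilities. All tables below are invented, not the real models.
trans_logprob = {            # tp: (typo, correction) -> log10 probability
    ("份", "氛"): -0.2,
    ("份", "憤"): -0.8,
}
lm_logprob = {               # bigram LM scores (log10), toy values
    ("氣", "氛"): -1.0,
    ("氣", "份"): -4.0,
    ("氣", "憤"): -2.0,
}

def decode(tokens):
    """Greedy monotone decode: pick the best-scoring option per token."""
    out, prev = [], "<s>"
    for tok in tokens:
        options = [(tok, 0.0)] + [
            (corr, tp) for (typo, corr), tp in trans_logprob.items() if typo == tok
        ]
        # Unseen bigrams get a low back-off score of -9.0.
        best = max(options, key=lambda o: o[1] + lm_logprob.get((prev, o[0]), -9.0))
        out.append(best[0])
        prev = best[0]
    return "".join(out)
```

With these toy scores, "氣份" is corrected to "氣氛": the small translation penalty for 氛 is outweighed by its much better language-model score after 氣. A real decoder would keep multiple hypotheses (beam search) rather than choosing greedily per token.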
As such, we trained our systems using a dictionary, a compiled corpus, and Web-scale n-grams. We evaluated our systems at the sentence level. Finally, we used an annotated dataset to allow human judges to evaluate the quality of error detection and correction. In this section, we first present the details of the data sources used in training (Section 4.1). Then, Section 4.2 describes the test data. Section 4.3 describes the systems evaluated and compared. The evaluation metrics for the performance of the systems are reported in Section 4.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4." }, { "text": "To train our model, we used several corpora, including the Sinica Chinese Balanced Corpus, TWWaC (Taiwan Web as Corpus), a Chinese dictionary, and a confusion set. We describe the data sets in more detail below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": "4.1" }, { "text": "The \"Academia Sinica Balanced Corpus of Modern Chinese,\" or \"Sinica Corpus,\" is the first balanced Chinese corpus with part-of-speech tags (Huang et al., 1996). The current size of the corpus is about 5 million words. Texts are segmented according to the word segmentation standard proposed by the ROC Computational Linguistics Society. Each segmented word is tagged with its part of speech. We used the corpus to generate the frequencies of bigrams, trigrams, and 4-grams for training the translation model and to train the n-gram language model.", "cite_spans": [ { "start": 135, "end": 155, "text": "(Huang et al., 1996)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Sinica Corpus", "sec_num": null }, { "text": "We used TWWaC to obtain more language information. TWWaC is a corpus gathered from the Web under the .tw domain, containing 1,817,260 Web pages that consist of 30 billion Chinese characters. 
We use the corpus to generate the frequency of all character n-grams for n = 2, 3, 4 (with frequency higher than 10). Table 3 shows statistics of the n-grams in the Sinica Corpus and TWWaC. 848,193 13,745,743 17,191,359 ", "cite_spans": [ { "start": 382, "end": 411, "text": "848,193 13,745,743 17,191,359", "ref_id": null } ], "ref_spans": [ { "start": 313, "end": 320, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "TWWaC (Taiwan Web as Corpus)", "sec_num": null }, { "text": "From the dictionaries and related books published by the Ministry of Education (MOE) of Taiwan, we obtained two lists: a list of 64,326 distinct Chinese words (MOE, 1997), and a list of 48,030 distinct Chinese idioms. We combined the lists into a Chinese dictionary for validating words with lengths of 2 to 17 characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Words and Idioms in a Chinese Dictionary", "sec_num": null }, { "text": "After analyzing erroneous Chinese words, Liu et al. (2011) found that more than 70% of typos involved phonologically similar characters, about 50% involved morphologically similar characters, and almost 30% were both phonologically and morphologically similar. We used these ratios as the weights for the translation probabilities. In this study, we used two confusion sets generated by Liu et al. (2011) and provided by the SIGHAN 7 Bake-off 2013: Chinese Spelling Check Shared Task as a full confusion set, based on a loose similarity relation.", "cite_spans": [ { "start": 41, "end": 58, "text": "Liu et al. (2011)", "ref_id": "BIBREF8" }, { "start": 376, "end": 393, "text": "Liu et al. (2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Confusion Set", "sec_num": null }, { "text": "In order to improve performance, we expanded the sets slightly and also removed some loosely similar relations. For example, we removed all relations based on non-identical phonological similarity. 
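The error-type weighting described above can be illustrated with a small sketch. It follows our reading of the paper's formula (1) as tp = γ · log10(freq(trans)/freq(candi)); the γ values and frequencies below are illustrative assumptions, not the paper's actual weights:

```python
# Toy illustration of the error-type-weighted translation probability,
# read as tp = gamma * log10(freq(trans) / freq(candi)). The gamma
# values and all frequencies here are invented for illustration.
import math

GAMMA = {"phonological": 0.7, "visual": 0.5}  # hypothetical weights

def translation_logprob(freq_trans, freq_candi, error_type):
    """Weighted log10 ratio of translation and candidate frequencies."""
    return GAMMA[error_type] * math.log10(freq_trans / freq_candi)
```

A translation that is far more frequent than the candidate typo gets a large positive score, and the γ weight lets phonological confusions (the more common error type) count more than visual ones.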
After that, we added similar characters based on similar phonemes in Chinese phonetics, such as \"\u3123\uff0c\u3125\" [en, eng], \"\u3124\uff0c\u3122\" [ang, an], \"\u3115\uff0c\u3119\" [shi, si], and so on. We also modified the similar-shape set: we checked characters by comparing their Cangjie codes (\u5009\u9821\u78bc) and required strong shape similarity. Two characters differing from each other by at most one symbol in their Cangjie codes were considered strongly similar and were retained. For example, \"\u5fb5\" [zheng] and \"\u5fae\" [wei] are strongly similar in shape, since their corresponding codes, \"\u7af9\u4eba\u5c71\u571f\u5927\" and \"\u7af9\u4eba\u5c71\u5c71\u5927\", differ in only one place.", "cite_spans": [ { "start": 326, "end": 335, "text": "[ang, an]", "ref_id": null }, { "start": 343, "end": 352, "text": "[shi, si]", "ref_id": null }, { "start": 699, "end": 704, "text": "[wei]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Confusion Set", "sec_num": null }, { "text": "We used the official dataset from the SIGHAN Bake-off 2013: Chinese Spelling Check task to evaluate our systems. This dataset contains two parts: 350 sentences with errors and 350 sentences without errors, extracted from student essays and covering various common errors. The dataset was released in XML format with the information of sentences, error positions, typos, and corrections. A sample is shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test Data", "sec_num": "4.2" }, { "text": "

我看過許多勇敢的人，不怕措折地奮鬥，這種精神值得我們學習。

措折 挫折
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test Data", "sec_num": "4.2" }, { "text": "We found that all of the sentences with errors contain exactly one typo and that most errors were either similar in pronunciation or shape. Therefore, the confusion set was suitable for error correction. We generated the sentence with/without error and the correct answer from XML format. In this data, more than 80% of errors were characters with identical pronunciation, almost 20% of errors were characters with similar shape, and 40% of errors involved both phonological and visual similarity. Hence, we focused on detecting and correcting these two common types of errors in our study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test Data", "sec_num": "4.2" }, { "text": "Recall that we propose a system to detect and correct typos in Chinese based broadly on statistical machine translation. We experimented with different resources as kinds of language models to detect typos: dictionary entries, a compiled corpus, and Web corpus. 
The four detection systems evaluated are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "Integrating Dictionary and Web N-grams for Chinese Spell Checking 25 -Dictionary (DICT): A dictionary is used to detect unregistered words as errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Corpus (CORP): A word list from a reference corpus is used to detect unseen words as errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Web corpus (WEB): A character n-gram of Web corpus is used to detect unseen n-grams as errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Dictionary and Web corpus (DICT+WEB): A dictionary combining a character n-gram of Web corpus is used to detect unregistered words as errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "To correct typos, we used a character confusion set to transform the detected typos and generate the \"translation\" hypotheses with translation probability. These hypotheses were pruned using a Chinese dictionary before running the MT decoder in order to reduce the load on the decoder. The scope of this confusion set and the weights associated with translation probability clearly influenced the performance of our system. We evaluated and compared four different confusion set and weight settings. The four correction systems evaluated are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Full confusion set (FULL+WT): A broad confusion set with loosely similar relations in character sound and shape was used to generate mapping from a detected typo to its correction. 
Different weights were used in modeling the probability of sound-based and shape-based mappings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Confusion set with identical sound (SND+WT): A broad confusion set with identical-sound and loosely similar shape relations was used to generate mappings. Different weights were used in modeling the probability of sound-based and shape-based mappings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Restricted confusion set with identical sound and strongly similar shape (SND+SHP): A broad confusion set with identical-sound and strongly similar shape relations was used to generate mappings. Sound and shape were given the same weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Restricted confusion set with different weights (SND+SHP+WT): A broad confusion set with identical-sound and strongly similar shape relations was used to generate mappings. Different weights were used in modeling the probability of sound-based and shape-based mappings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "To assess the effectiveness of the proposed system, we experimented with it on the test data. We also exploited several language resources, including TWWaC, the Sinica Corpus, a Chinese dictionary, and the confusion set, to detect and correct errors. The Chinese word segmentation system, aided by the Chinese dictionary, provides segmentation results that further improve the proposed system.
To evaluate our system, we used precision and recall, which are defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Precision = C / S \\quad (2) \\qquad Recall = C / N", "eq_num": "(3)" } ], "section": "Evaluation Metrics", "sec_num": "4.4" }, { "text": "where N is the number of error characters, S is the number of characters translated by the proposed system, and C is the number of characters translated correctly by the proposed system. We also compute the corresponding F-score as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F\\text{-}score = \\frac{2 \\times Precision \\times Recall}{Precision + Recall}", "eq_num": "(4)" } ], "section": "Evaluation Metrics", "sec_num": "4.4" }, { "text": "In this section, we report the results of the experimental evaluation using the methodology described in the previous section. We evaluated detection, as well as correction, for several systems with different language resources and settings. During this evaluation, we tested our systems on the 350 sentences containing at least one typo provided in SIGHAN Bake-off 2013: Chinese Spelling Check. Table 4 shows the precision, recall, and F-score for the four detection systems, while Table 5 shows the same metrics for the four correction systems.", "cite_spans": [], "ref_spans": [ { "start": 390, "end": 397, "text": "Table 4", "ref_id": "TABREF21" }, { "start": 473, "end": 480, "text": "Table 5", "ref_id": "TABREF22" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5."
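Equations (2)–(4) are straightforward to compute; a minimal sketch, with N, S, and C as defined in the text:

```python
def evaluate(n_errors, n_translated, n_correct):
    """Precision = C/S, Recall = C/N, F = 2PR/(P+R)  (Eqs. 2-4).

    n_errors     -- N: number of error characters in the gold standard
    n_translated -- S: number of characters the system translated (changed)
    n_correct    -- C: number of characters translated correctly
    """
    precision = n_correct / n_translated if n_translated else 0.0
    recall = n_correct / n_errors if n_errors else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score
```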
}, { "text": "As can be seen in Table 4 , using the Web corpus (WEB) achieves higher precision than the dictionary (DICT) or compiled corpus (CORPUS) with slightly lower recall. Using the dictionary (DICT) leads to the highest recall but slightly lower precision. By combining the dictionary and Web corpus (WEB+DICT), we achieve the best precision, recall, and F-score. Table 5 shows that using the full confusion set with loosely similar sound and shape relation leads to the lowest recall and precision in error correction (FULL). By restricting the sound confusion to identical sound and the shape confusion to strongly similar shape, we can improve precision dramatically, with a small increase in recall (SND and SND+SHP).", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 25, "text": "Table 4", "ref_id": "TABREF21" }, { "start": 357, "end": 364, "text": "Table 5", "ref_id": "TABREF22" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5." }, { "text": "We can further improve the precision and recall by applying different weights in modeling the probability of sound and shape based hypotheses (SND+SHP+WT). Since typos are more often related to sound confusion than shape, giving higher weight to sound confusion Integrating Dictionary and Web N-grams for Chinese Spell Checking 27 indeed leads to further improvement in both precision and recall. Previous works typically have used only a language model to correct errors, but we compute language model probability and translation probability, resulting in more effective error correction. For this reason, we were placed among the top scoring systems in the SIGHAN Bake-off 2013.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5." }, { "text": "In order to test whether the system can produce false alarms as rarely as possible, when handling the sentences with typos, we tested our systems on a dataset with an additional 350 sentences without typos. 
The best performing system (SND+SHP+WT) obtained a precision rate of .91, recall rate of .56, and F-score of .69 in correction. The results show that this system is very robust, maintaining a high precision rate in different situations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5." }, { "text": "The recall of our system is limited by the dictionary that we used to correct a typo. For example, the typo\"\u4e03\u5f48\u5834\"[qi tan chang], which is detected by the model, is not corrected to\"\u6f06\u5f48\u5834\"[qi tan chang] because it is a new term and not found in the Chinese dictionary we used. To correct such errors, we could use Web-based character n-grams, which are more likely to contain such new terms or productive compounds not found in a dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5." }, { "text": "Many avenues exist for future research and improvement of our system. For example, new terms can be automatically discovered and added to the Chinese dictionary to improve both detection and correction performance. Part of speech tagging can be performed to provide more information for error detection. Named entities can be recognized in order to avoid false alarms. A supervised statistical classifier can be used to model translation probability more accurately. Additionally, an interesting direction to explore is using Web n-grams in addition to a Chinese dictionary for correcting typos. Yet another direction of research would be to consider errors related to a missing or redundant character.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6." }, { "text": "In summary, we have proposed a novel method for Chinese spell checking. Our approach involves error detection and correction based on the phrasal statistical machine translation framework. 
The error detection module detects errors by segmenting words and checking word and phrase frequencies against a compiled dictionary and Web corpora. The phonological or morphological spelling errors found are then corrected by running a decoder based on the statistical machine translation (SMT) model. The experimental results show that the proposed system achieves significantly better accuracy in error detection and more satisfactory performance in error correction than state-of-the-art systems, outperforming previous works.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6." }, { "text": "Many people are learning English as a second or foreign language: it is estimated that there are 375 million English as a Second Language (ESL) and 750 million English as a Foreign Language (EFL) learners around the world, according to Graddol (2006) . Three times as many people speak English as a second language as there are native speakers of English. Nevertheless, non-native speakers tend to make many kinds of errors in their writing, due to the influence of their native languages (e.g., Chinese or Japanese). Therefore, automatic grammar checkers are needed to help learners improve their writing. In the long run, automatic grammar checkers also can help non-native writers learn from the corrections and gradually gain better command of grammar and word choices. (* Department of Computer Science, National Tsing Hua University. E-mail: {wujc86; jim.chang.nthu; jason.jschang}@gmail.com)", "cite_spans": [ { "start": 231, "end": 245, "text": "Graddol (2006)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The grammar checkers available in popular word processors have been developed with a focus on native-speaker errors, such as subject-verb agreement and pronoun reference.
Therefore, these word processors (e.g., Microsoft Word) often offer little or no help with the common errors that cause problems for English learners (e.g., missing, unnecessary, or wrong articles, prepositions, and verb forms), as described in The Longman Dictionary of Common Errors, second edition (LDOCE) by Heaton and Turton (1996) . The LDOCE is the result of analyzing errors encoded in the Longman Learners' Corpus.", "cite_spans": [ { "start": 472, "end": 496, "text": "Heaton and Turton (1996)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The LDOCE shows that grammatical errors in learners' writing can either appear in isolation (e.g., the wrong preposition in \"I want to improve my ability of [in] English.\") or consecutively (e.g., the unnecessary preposition immediately followed by a wrong verb form in \"These machines are destroying our ability of thinking [to think].\"). We refer to two or more errors appearing consecutively as serial errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Previous works on grammar checkers either have focused on handling one common type of error exclusively or have handled each error independently within a sequence of errors. Nevertheless, when an error is not isolated, it is difficult to correct it with another related error in the immediate context. In other words, when serial errors occur in a sentence, a grammar checker needs to correct the first error in the presence of the second error (or vice versa), making correction difficult to achieve. These errors could be corrected more effectively if the corrector recognized them as serial errors and attempted to correct them at once.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."
}, { "text": "Consider an erroneous sentence, \"I have difficulty to understand English.\" The correct sentence should be \"I have difficulty in understanding English.\" It is hard to correct these two errors one by one, since the errors are dependent on each other. Intuitively, by identifying \"difficulty to understand\" as containing serial errors and correcting it to \"difficulty in understanding,\" we can handle this kind of problem more effectively. We present a new system that automatically generates a statistical machine translation model based on a trigram containing a word followed by preposition and verb or by an infinitive in web-scale n-gram data. At run-time, the system generates multiple possible trigrams by changing a word's lexical form and preposition in the original trigram. Example trigrams generated for \"difficulty to understand\" are shown in Figure 1 . The system then ranks all of these generated sentences and use the highest ranking sentence as suggestion.", "cite_spans": [], "ref_spans": [ { "start": 853, "end": 861, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The rest of the paper is organized as follows. We review the related work in the next section. Then, we describe our method for automatically learning to translate a sentence that may contain preposition-verb serial errors into a grammatical sentence (Section 3). In our evaluation, we describe how to measure the precision and recall of producing grammatical sentences (Section 4) in an automatic evaluation (Section 5) over a set of marked sentences in a learner corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Grammatical Error Detection (GED) for language learners has been an area of active research. GED involves pinpointing some words in a given sentence as ungrammatical and offering correction if necessary. 
Common errors in learners' writing include misuse of articles, prepositions, noun number, and verb form. The state of the art in GED research has recently been surveyed by Leacock et al. (2010) . In our work, we address serial errors in English learners' writing that simultaneously involve the preposition and the verb form, an aspect that has not been dealt with in most GED research. We also consider the issues of broadening the training data for better coverage and of coping with data sparseness when unseen events occur.", "cite_spans": [ { "start": 377, "end": 398, "text": "Leacock et al. (2010)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Although over a billion people are estimated to be using or learning English as a second or foreign language, common English proofreading tools do not specifically target the most common errors made by second language learners. Many widely used grammar checking tools are based on pattern matching and at least some linguistic analysis with hand-coded grammar rules (Leacock et al., 2010) . In the 1990s, data-driven, statistical methods began to emerge. Statistical systems have the advantage of being more tolerant of the ill-formed text, interlanguage forms, and unknown words produced by learners than rule-based systems are. Knight and Chander (1994) proposed a method based on a decision tree classifier to correct article errors in the output of machine translation systems. Articles were selected based on contextual similarity to the same noun phrase in the training data. Atwell (1987) used a language model to represent correct usage of a language.
He then used the language model to detect errors, which tend to receive low language model scores.", "cite_spans": [ { "start": 377, "end": 399, "text": "(Leacock et al., 2010)", "ref_id": "BIBREF32" }, { "start": 630, "end": 655, "text": "Knight and Chander (1994)", "ref_id": "BIBREF31" }, { "start": 882, "end": 895, "text": "Atwell (1987)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "More recently, researchers have looked at grammatical errors related to the most common prepositions (9 to 34 prepositions, depending on the desired coverage). Eeg-Olofsson and Knutsson (2003) described a rule-based system to detect preposition errors for learners of Swedish. Based on part-of-speech tags assigned by a statistical trigram tagger, 31 rules were written for very specific preposition errors. Tetreault and Chodorow (2008), Gamon et al. (2008) , and Gamon (2010) developed statistical classifiers for preposition error detection. De Felice and Pulman (2007) trained a voted perceptron classifier on features of grammatical relations and WordNet categories in an automatic parse of a sentence. Han et al. (2010) found that a preposition error detection model trained on correct and incorrect usage in a learner corpus works better than one trained on well-formed text in a reference corpus.", "cite_spans": [ { "start": 443, "end": 462, "text": "Gamon et al. (2008)", "ref_id": "BIBREF26" }, { "start": 712, "end": 729, "text": "Han et al. (2010)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In the research area of detecting verb form errors, Heidorn (2000) and Bender et al. (2004) proposed methods based on parse trees and error templates. Lee and Seneff (2008) focused on three cases of verb form errors: subject-verb agreement, auxiliary agreement, and verb complement.
The first two types are isolated verb form errors, while the third type may involve serial errors related to the preposition and the verb. Izumi et al. (2003) proposed a maximum entropy model, using lexical and POS features, to recognize a variety of errors, including verb form errors. Lee and Seneff (2008) used a database of irregular parses caused by verb form misuse to detect and correct verb form errors. In addition, they also used the Google n-gram corpus to filter out improbable detections. Both Izumi et al. (2003) and Lee and Seneff (2008) obtained a high error correction rate, but they did not report serial errors separately, making direct comparison with our approach impossible.", "cite_spans": [ { "start": 52, "end": 66, "text": "Heidorn (2000)", "ref_id": "BIBREF29" }, { "start": 71, "end": 91, "text": "Bender et al. (2004)", "ref_id": "BIBREF20" }, { "start": 150, "end": 171, "text": "Lee and Seneff (2008)", "ref_id": null }, { "start": 413, "end": 432, "text": "Izumi et al. (2003)", "ref_id": "BIBREF30" }, { "start": 561, "end": 582, "text": "Lee and Seneff (2008)", "ref_id": null }, { "start": 782, "end": 801, "text": "Izumi et al. (2003)", "ref_id": "BIBREF30" }, { "start": 806, "end": 827, "text": "Lee and Seneff (2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In a study more closely related to our work, Rozovskaya and Roth (2013) introduced a joint learning scheme to resolve pairs of interacting errors related to subject-verb and article-noun agreement. They showed that the overall error correction rate is improved by a model that jointly learns each of these interacting errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2."
}, { "text": "Correcting serial errors (e.g., \"I have difficulty to understand English.\") one error at a time in the traditional way may not work very well, but previous works typically have dealt with one type of error at a time. Unfortunately, it may be difficult to correct an error in the context of another error, because an error could only be corrected successfully within the correct context. Besides, such systems need to correct a sentence multiple times, which is time-consuming and more error-prone. To handle serial errors, a promising approach is to treat serial errors together as one single error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3." }, { "text": "We focus on correcting serial errors in learners' writing using the context of trigrams in a sentence. We train a statistical machine translation model to correct learners' errors of the types of a content word followed by a preposition and a verb using web-scale n-grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "3.1" }, { "text": "We are given a sentence S = w 1 , w 2 , \u2026, w n , and web-scale n-gram, webgram. Our goal is to train two statistical machine translation model TM and back-off model TM bo to correct learners' writing. At run-time, trigrams (w i , w i+1 , w i+2 ) in S (i =1, n-2) are matched and replaced using TM and the back-off model TM bo to translate S into a correct sentence T.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement:", "sec_num": null }, { "text": "In the rest of this section, we describe our solution to this problem. First, we describe the strategy to train TM (Section 3.2) and TM bo (Section 3.3) using webgrams. 
Finally, we show how our system corrects a sentence at run-time using TM, TM bo, and a language model LM (Section 3.4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement:", "sec_num": null }, { "text": "We identify trigrams in webngram that fit the pattern of the serial errors and corrections we are dealing with, and we group the selected trigrams by their content words and verb lemmas. Our learning process is shown in Figure 2 . We assume that, within each group, the low-frequency trigrams are probably errors that should be replaced by the most frequent trigram: a one-construction-per-collocation constraint. For example, when expressing \"difficulty\" and \"to understand,\" any NPV construction with low frequency (e.g., \"difficulty for understanding\" or \"difficulty about understanding\") is an erroneous form of the most frequent trigram, \"difficulty in understanding\". Therefore, we generate TM with such phrase-to-phrase translations accordingly.", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 234, "text": "Figure 2", "ref_id": "FIGREF12" } ], "eq_spans": [], "section": "Generating TM", "sec_num": "3.2" }, { "text": "(1) Select trigrams related to serial errors and corrections from webngram (Section 3.2.1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating TM", "sec_num": "3.2" }, { "text": "(2) Group the selected trigrams by the first and last word in the trigrams (Section 3. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating TM", "sec_num": "3.2" }, { "text": "We select four types of trigrams (t1, t2, t3) from webngram: noun-prep-verb (NPV), verb-prep-verb (VPV), adj-prep-verb (APV), and adverb-prep-verb (RPV). We then annotate the trigrams with the types and the lemmas of the content words t1 and t3 (e.g., \"accused of being 230633\" becomes \"VPV, accuse be, accused of being 230633\"). Figure 3 shows some sample annotated trigrams. 
", "cite_spans": [], "ref_spans": [ { "start": 335, "end": 343, "text": "Figure 3", "ref_id": "FIGREF13" } ], "eq_spans": [], "section": "Select and Annotate Trigrams", "sec_num": "3.2.1" }, { "text": "We then group the trigrams by types, the first words, and the verb lemmas. See Figure 4 for a sample VPV group of trigrams. This step should bring together the trigrams containing serial errors and their correction. Note that we assume certain serial errors will have a correction of the same length here, which is true in most cases.", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 87, "text": "Figure 4", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Group Trigrams", "sec_num": "3.2.2" }, { "text": "For each group of annotated trigrams, we then generate phrase and translation pairs with", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generate Rules", "sec_num": "3.2.3" }, { "text": "Correcting Serial Grammatical Errors based on N-grams and Syntax 37 probability as follows. Recall that we assume that the higher the count of the trigram, the more likely the trigram is to be correct. So, we generate \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generate Rules", "sec_num": "3.2.3" }, { "text": "l 1 , l 2 , l 3 ||| h 1 , h 2 , h 3 ||| p ,\" where h 1 , h 2 , h 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generate Rules", "sec_num": "3.2.3" }, { "text": "is the trigram with the highest frequency count; l 1 , l 2 , l 3 is one of the trigrams with lower frequency count; and p denotes the probability of l 1 , l 2 , l 3 translating into h 1 , h 2 , h 3 . We define p=(highest frequency count)/(group frequency count).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generate Rules", "sec_num": "3.2.3" }, { "text": "In addition to the surface-level translation model TM, we also build a back-off model as a way of coping with cases where the trigram (t 1 , t 2 , t 3 ) is unseen in TM. 
The idea is to assume that the complement (t2, t3) of t1 tends to take a certain syntactic form regardless of the verb t3, much as dictionaries typically describe the usage of \"accuse\" in terms of \"accuse somebody of doing something.\" Our learning process for TM bo is shown in Figure 9 . (1) Select trigrams with specific forms from Web 1T n-gram", "cite_spans": [], "ref_spans": [ { "start": 452, "end": 460, "text": "Figure 9", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Generating TM bo", "sec_num": "3.3" }, { "text": "(2) Reform trigrams by replacing W3 with W3's lexical designator (3) Group the selected trigrams using the first word (4) Generate translation rules", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating TM bo", "sec_num": "3.3" }, { "text": "First, we generalize the annotated trigrams (see Section 3.2.1) by replacing the verb form with its part-of-speech designator (i.e., replace \"accuse\" with VERB, and replace \"accusing\" with VERB-ing).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalize Trigrams", "sec_num": "3.3.1" }, { "text": "In this step, we group the identically transformed trigrams and sum up the frequency counts. See Figure 6 for sample results.", "cite_spans": [], "ref_spans": [ { "start": 97, "end": 105, "text": "Figure 6", "ref_id": "FIGREF15" } ], "eq_spans": [], "section": "Sum Counts", "sec_num": "3.3.2" }, { "text": "We then group the trigrams by type and by the first word (context). See Figure 7 for a sample \"accuse P V\" group of trigrams.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 80, "text": "Figure 7", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Group Trigrams of the Same Context", "sec_num": "3.3.3" }, { "text": "For each group of generalized trigrams, we then generate the phrase and translation pairs with probabilities as described in Section 3.2.3.
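The generalization and count-summing steps (Sections 3.3.1–3.3.2) might look like the following sketch; the suffix-based designator function is a naive stand-in for real morphological analysis:

```python
from collections import defaultdict

def generalize(trigrams):
    """Generalize annotated trigrams for the back-off model TM bo.

    trigrams -- list of ((w1, prep, verb), count) pairs.
    Replaces the verb with a part-of-speech designator and sums the
    counts of identically generalized trigrams.
    """
    def designator(verb):
        # naive assumption: infer the verb form from the suffix alone
        if verb.endswith("ing"):
            return "VERB-ing"
        if verb.endswith("ed"):
            return "VERB-ed"
        return "VERB"

    merged = defaultdict(int)
    for (w1, prep, verb), count in trigrams:
        merged[(w1, prep, designator(verb))] += count
    return dict(merged)
```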
See Figure 8 for a sample of back-off translations.", "cite_spans": [], "ref_spans": [ { "start": 145, "end": 153, "text": "Figure 8", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Generate Rules", "sec_num": "3.3.4" }, { "text": "Loading TM and TM bo into memory before the decoding process (generating, ranking, and selecting translations) would take up a great deal of memory and slow the matching of phrases to find translations. Therefore, we generate phrase translations on the fly for the given sentence before decoding. Our process of decoding to correct grammatical errors is shown in Figure 10 .", "cite_spans": [], "ref_spans": [ { "start": 375, "end": 384, "text": "Figure 10", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Run-time Correction", "sec_num": "3.4" }, { "text": "(1) Tag the input sentence with part-of-speech information in order to find trigrams that fit the types of serial errors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Run-time Correction", "sec_num": "3.4" }, { "text": "(2) Search TM and generate translations for the input phrases", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Run-time Correction", "sec_num": "3.4" }, { "text": "(3) Search TM bo and generate translations for the input phrases", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Run-time Correction", "sec_num": "3.4" }, { "text": "We use a POS tagger to tag the input sentence, and we identify trigrams (t1, t2, t3) consisting of a content word followed by a preposition and a verb (belonging to the NPV, VPV, APV, or RPV types described in Section 3.2.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tag the Input Sentence", "sec_num": "3.4.1" }, { "text": "Correcting Serial Grammatical Errors based on N-grams and Syntax 39", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tag the Input Sentence", "sec_num": "3.4.1" }, { "text": "We then search for the group of
trigrams (indexed by POS type and t1, t3) in TM containing the trigram (t1, t2, t3) found in Step 3.4.1. We find the trigram (h1, h2, h3) with the highest count in that group. With that, we can dynamically add the translation \"t1, t2, t3 ||| h1, h2, h3 ||| 1.0\" to the cache of TM in memory (e.g., \"difficulty to understand ||| difficulty in understanding ||| 1.0\") to speed up the subsequent decoding process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search TM and Generate Translation Rules", "sec_num": "3.4.2" }, { "text": "Just as in 3.4.2, we use t1 and its part of speech p1 to search TM bo for the generalized trigram group that matches (t1, t2, t3). We then find the most frequent generalized trigram (h1, h2, h3) in that group. After that, we specialize (h1, h2, h3) for t3 by replacing the designator h3 with the corresponding verb form of t3, resulting in (h1, h2, h'3). Given the generalized trigram \"accused of VERB-ing\" and t3 = \"murder,\" the specialized trigram would be \"accused of murdering.\" Finally, we add \"t1, t2, t3 ||| h1, h2, h'3 ||| 1.0\" (e.g., \"accused to murder ||| accused of murdering ||| 1.0\") to the cache of TM in memory, again to speed up decoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search TM bo and Generate Translation Rules", "sec_num": "3.4.3" }, { "text": "Finally, we run a monotone decoder with the cached TM and a language model LM. By default, any word not in TM is translated into itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decode the Input Sentence without Reordering", "sec_num": "3.4.4" }, { "text": "million unigrams, 3 hundred million bigrams, and around 1 billion trigrams to five-grams. We obtained 104,537,560 trigrams, containing only words in the General Service List (West, 1954) and Academic Word List (Coxhead, 1999) .
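The run-time specialization of Section 3.4.3 can be sketched as follows; the -ing inflection rule is a simplistic stand-in for real morphology, and only the VERB-ing designator is handled here:

```python
def specialize(generalized, input_trigram):
    """Build a dynamic translation rule from a back-off match.

    generalized   -- most frequent generalized trigram in the matching
                     TM bo group, e.g. ("accused", "of", "VERB-ing").
    input_trigram -- the learner's (t1, t2, t3), e.g. ("accused", "to", "murder").
    Returns (source phrase, specialized target phrase, 1.0).
    """
    h1, h2, h3 = generalized
    verb = input_trigram[2]
    if h3 == "VERB-ing":
        # naive -ing inflection; a real system needs morphological generation
        verb = verb[:-1] + "ing" if verb.endswith("e") else verb + "ing"
    return (" ".join(input_trigram), f"{h1} {h2} {verb}", 1.0)
```

The returned rule is what gets added to the in-memory TM cache before the monotone decoder runs.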
These trigrams were further reduced to 4,486,615 entries that fit the patterns of four types of serial errors and corrections: an adjective, noun, verb, or adverb followed by a preposition (or infinitive to) and a verb.", "cite_spans": [ { "start": 173, "end": 185, "text": "(West, 1954)", "ref_id": null }, { "start": 209, "end": 224, "text": "(Coxhead, 1999)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4." }, { "text": "To determine the part of speech of words in the n-gram, we used the most frequent tag of a given word in the BNC to tag words in the trigram.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4." }, { "text": "Once we had trained DeeD as described in Section 3, we evaluated its performance using two datasets. The first dataset contained sentences written by ESL or EFL learners that exhibit the serial errors in question, together with corrections. The second dataset contained mostly correct sentences from the British National Corpus (BNC), largely published works written by native, expert writers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar Checking Systems Compared", "sec_num": "4.2" }, { "text": "The first testset is a subset of the Cambridge Learner Corpus, the CLC First Certificate Exam Dataset (CLC/FCE). This dataset contains 1,244 exam essays written by students who took the Cambridge ESOL First Certificate in English (FCE) examination in 2000 and 2001. For each exam script, the CLC/FCE Dataset includes the original text annotated with errors, their types, and corrections. From the 34,893 sentences in the 1,244 exam essays, we extracted 118 sentences that contained the serial errors in question. 
Other types of errors were replaced with corrections in these sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar Checking Systems Compared", "sec_num": "4.2" }, { "text": "The second testset is a random sample of 1,000 BNC sentences containing trigrams that fit the error patterns; it was also used to evaluate our system. The four system and testset combinations evaluated are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar Checking Systems Compared", "sec_num": "4.2" }, { "text": "-Learner corpus without back-off model (LRN): The proposed system using only the surface-level translation model was tested on the first testset obtained from a learner corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar Checking Systems Compared", "sec_num": "4.2" }, { "text": "-Learner corpus with back-off model (LRN-BO): The proposed system with the additional back-off model was tested on the first testset obtained from a learner corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar Checking Systems Compared", "sec_num": "4.2" }, { "text": "-BNC without back-off model (BNC): The proposed system using only the surface-level translation model was tested on the second testset obtained from the British National Corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar Checking Systems Compared", "sec_num": "4.2" }, { "text": "-BNC with back-off model (BNC-BO): The proposed system with the additional back-off model was tested on the second testset obtained from the British National Corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar Checking Systems Compared", "sec_num": "4.2" }, { "text": "English correction systems usually are compared based on the quality and completeness of correction suggestions. We measured the quality using the metrics of precision, recall, and error rate. 
For the first testset, we measured precision and recall, while for the second testset we measured the error rate (false alarms). We define precision and recall as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "Precision = C/S (1) Recall = C/N (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "where N is the number of serial errors, S is the number of corrections our system found, and C is the number of corrections where our system was correct. We also computed the corresponding F-score. Error rate was used in the second dataset described above, and we define the error rate as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "Error Rate = E/T (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "where E is the number of corrections our system found (which are all wrong, since we were testing sentences with no errors) and T is the number of sentences tested.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "In this section, we report the results of the evaluation using the dataset and environment mentioned in the previous section. During this evaluation, 118 sentences with serial errors were used to evaluate the two systems: LRN and LRN-BO. Table 1 shows the average precision, recall, and F-score of LRN and LRN-BO. As we can see, LRN performs better in precision, which is reasonable since the back-off model corrects errors without the information of the verb involved. LRN-BO performs better in recall because the back-off model applies when the original model does not cover the case. Overall, LRN-BO performs better in F-score. 
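The metrics of Equations (1)-(3), together with the F-score, reduce to simple ratios of counts. A minimal sketch, using hypothetical counts for illustration only (these are not the paper's reported results):

```python
def precision(c, s):
    """Eq. (1): C corrections the system got right / S corrections it proposed."""
    return c / s

def recall(c, n):
    """Eq. (2): C corrections the system got right / N serial errors present."""
    return c / n

def f_score(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

def error_rate(e, t):
    """Eq. (3): E false alarms on error-free sentences / T sentences tested."""
    return e / t

# Hypothetical counts: 50 corrections proposed, 40 correct, 118 serial errors,
# 5 false alarms over 1,000 error-free sentences.
p = precision(40, 50)   # 0.8
r = recall(40, 118)
print(round(f_score(p, r), 3), error_rate(5, 1000))  # 0.476 0.005
```

Note that every correction proposed on the second (error-free) testset counts toward E, since any change to an already-correct sentence is a false alarm.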
During this evaluation, 1,000 sentences in the BNC that fit the pattern of serial errors but in fact contain no errors were used to evaluate the same two systems: BNC and BNC-BO. Table 2 shows the average error rate of BNC and BNC-BO. It is not surprising that BNC performs better than BNC-BO, since BNC always makes fewer corrections than BNC-BO. Nevertheless, BNC-BO is only slightly worse than BNC.", "cite_spans": [], "ref_spans": [ { "start": 238, "end": 245, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 810, "end": 817, "text": "Table 2", "ref_id": "TABREF17" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5." }, { "text": "Many avenues exist for future research and improvement of our system. For example, spell checking can be done before correcting grammatical errors. Context used to \"translate\" the serial errors can be enlarged from one word to two or more words (immediately or closely) preceding the errors. We can also add one more level of backing off for the context word preceding the serial errors: from surface word to lemma or from a proper name to named entity type (PERSON, PLACE, ORGANIZATION). We can also improve the accuracy of part-of-speech tagging used in applying the back-off model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6." }, { "text": "Additionally, an interesting direction to explore is extending this approach to handle other types of isolated and serial errors commonly found in learners' writing. Yet another direction of research would be to consider corrections resulting in more or fewer words (e.g., one less word as in *spend time for work vs. spend time working). Or, we could also combine n-gram statistics from different types of corpora: a Web-scale corpus, a reference corpus, and a learner corpus. 
For example, the translation probability can be determined via statistical classifier training on the learner corpus with features extracted from n-grams of multiple corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6." }, { "text": "In summary, we have introduced a new method for correcting serial errors in a given sentence in learners' writing. In our approach, a statistical machine translation model is generated to attempt to translate the given sentence into a grammatical sentence. The method involves automatically learning two translation models based on Web-scale n-grams. The first model translates trigrams containing serial preposition-verb errors into correct ones. The second model is a back-off model for the first model, used in the case where the trigram is not found in the training data. At run-time, the phrases in the input are matched using the translation model and are translated before ranking is performed on all possible translation sentences generated. Evaluation on a set of sentences in a learner corpus shows that the method corrects serial errors reasonably well. Our methodology exploits the state of the art in machine translation, resulting in an effective system that can deal with serial errors at the same time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6." }, { "text": "Noun-noun compounds (henceforth NNC) are compounds composed of two nouns. While the part-of-speech (POS) of NNCs usually is nominal, their interpretations seem so diverse that some researchers even contend that they are completely determined by context (e.g. Dowty, 1979; reviewed in Copestake & Lascarides, 1997) .", "cite_spans": [ { "start": 259, "end": 271, "text": "Dowty, 1979;", "ref_id": null }, { "start": 272, "end": 313, "text": "reviewed in Copestake & Lascarides, 1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "Nevertheless, the majority of researchers believe that there is at least some degree of regularity in NNC interpretation. This regularity is often reported to be at least partially universal as well (Levi, 1978; S\u00d8gaard, 2005) . There are three popular theories along these lines, which are not mutually exclusive. First, there is a limited set of semantic relations between the two component nouns, N1 and N2 (Levi, 1978; as well as computational works that implemented her theory, e.g. Copestake & Lascarides, 1997; S\u00d8gaard, 2005; Copestake & Briscoe, 2005; Huang, 2008) . Second, N1 and N2 are the arguments of an event that bridges them and by which they are assigned semantic roles (Levi, 1978; Leonard, 1984; Ryder, 1994) . Third, the two component nouns sometimes are linked through similarity in some aspect, resulting in metaphorical readings.", "cite_spans": [ { "start": 199, "end": 211, "text": "(Levi, 1978;", "ref_id": "BIBREF39" }, { "start": 212, "end": 226, "text": "S\u00d8gaard, 2005)", "ref_id": "BIBREF42" }, { "start": 410, "end": 422, "text": "(Levi, 1978;", "ref_id": "BIBREF39" }, { "start": 423, "end": 423, "text": "", "ref_id": null }, { "start": 489, "end": 518, "text": "Copestake & Lascarides, 1997;", "ref_id": null }, { "start": 519, "end": 533, "text": "S\u00d8gaard, 2005;", "ref_id": "BIBREF42" }, { "start": 534, "end": 560, "text": "Copestake & Briscoe, 2005;", "ref_id": null }, { "start": 561, "end": 573, "text": "Huang, 2008)", "ref_id": "BIBREF37" }, { "start": 688, "end": 700, "text": "(Levi, 1978;", "ref_id": "BIBREF39" }, { "start": 701, "end": 715, "text": "Leonard, 1984;", "ref_id": "BIBREF38" }, { "start": 716, "end": 728, "text": "Ryder, 1994)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Nevertheless, these accounts generally have the following four problems. First, the semantic relations they proposed or adopted tend to be not specific enough. 
Levi (1978) , for instance, proposed nine semantic relations between N1 and N2, which she called Recoverably Deleted Predicates (RDP), including CAUSE, HAVE, MAKE, USE, BE, IN, FOR, FROM, and ABOUT. These RDPs, however, appear to be too general to be informative, especially with prepositional ones like IN and FOR, as NNCs linked by the same preposition belong to the same semantic categories only in a very broad sense.", "cite_spans": [ { "start": 160, "end": 171, "text": "Levi (1978)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Second, some of the studies resolve only limited or sporadic semantic categories, while others are questionable in terms of their correct prediction rate. For example, the fourteen semantic relations Li and Thompson (1981) proposed do not seem to make up a meaningful and discrete inventory of semantic relations, while Huang's (2008) combinational patterns for three major categories of physical objects (i.e. animals, plants, and artifacts) are each based on the analysis of only six morphemes, raising concerns about generality.", "cite_spans": [ { "start": 200, "end": 222, "text": "Li and Thompson (1981)", "ref_id": null }, { "start": 320, "end": 334, "text": "Huang's (2008)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The third problem is that the classifying criteria mostly are left unaccounted for; thus, they appear arbitrary. For example, Levi (1978) sees the two components of lemon peel and apple seed as linked by the predicates HAVE and FROM, respectively, but such a distinction between the two NNCs may not be without controversy.", "cite_spans": [ { "start": 126, "end": 137, "text": "Levi (1978)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."
}, { "text": "The last problem is that bridging does not seem to be eventive or prepositional in the following three situations: first, the host-attribute-value relation (e.g. \u9435\u684c tie-zhuo 'iron table/desk,' \u8eca\u901f che-su 'car speed') with two special subclasses, where N1 denotes time (e.g. \u79cb\u87f9 qiu-xie 'autumn crab') or N1 denotes space (e.g. \u502b\u6566\u5730\u9435 Lundun-ditie 'London Underground'); second, meronymy, or the part-whole relation (part-whole: e.g. \u96d9\u5e95\u8239 shuang-di chuan 'double-bottom'; whole-part: e.g. \u8173\u8e0f\u8eca\u8f2a\u80ce jiao-ta-che luntai 'bicycle tire'); and third, conjunction (e.g. \u9418\u9336 zhong-biao 'clock and watch,' \u79ae\u6a02 li-yue 'manners and music').", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Before we go on, we need to explain the definition of Chinese NNCs adopted in this study. Unlike in English, formal similarity in Chinese does not entail a shared POS. For example, the first component in \u5e0c\u81d8\u570b\u6b4c xila guo-ge 'the national anthem of Greece,' \u5e0c\u81d8\u83dc xila-cai 'Greek dish,' and \u6708\u8cbb yue-fei 'monthly fee' corresponds to adjective forms in their English equivalents. Nevertheless, we include these various forms in our analysis since such formal differences do not reflect conceptual differences, as Levi (1978) has argued at length and also included adjectives in her analysis of what she called \"complex nominal,\" or \"NNCs\" in our terms.", "cite_spans": [ { "start": 505, "end": 516, "text": "Levi (1978)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Addressing the aforementioned four problems, we used a knowledge base that we believe could help decide the precise semantic relations for both event-linked and non-event-linked NNCs, which is FrameNet (https://framenet.icsi.berkeley.edu/fndrupal/). 
In essence, the theory behind FrameNet is that lexical units (LU) evoke concepts represented by \"frames,\" which are each composed of a set of frame elements (FE), i.e. the overtly-realized semantic roles assigned by the frame's LUs. Some LUs evoke entity concepts, while others evoke eventive ones. Since many entities in FrameNet have frames, we think it might be possible to map more NNC-productive N2s in our database, along with the NNCs they derive, to corresponding entity frames in FrameNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We have two research questions. First, with a corpus and FrameNet, we investigate whether there are only limited bridging verbs and semantic relations between the two component nouns of an NNC. Second, are there semantic relations between N1 and N2 that do not involve bridging events?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "As mentioned in the Introduction, many researchers hold that an NNC's component nouns are the arguments of an event that bridges them and by which they are assigned semantic roles. Levi (1978) says NNCs are all linked by one of the nine predicates, with the two components being their arguments; however, we believe that some NNCs simply involve more static relations and some relations are not covered by the above nine predicates. One instance that involves a missing static relation is, for example, the highly-productive shape relation, e.g. dragon boat (\u9f8d\u821f long-zhou). In the following sections, we will use evidence of both language instinct and FrameNet data to support the distinction between simple and complex relations.", "cite_spans": [ { "start": 181, "end": 192, "text": "Levi (1978)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Complex Relations", "sec_num": "2."
}, { "text": "Besides event-bridging relations, we propose simple relations, where N1 and N2 are not interacting participants of an event. Despite their shared syntactic and semantic properties, instances of simple relations have not been recognized as a distinct category, as observed by Liu (2008) and by Chung and Chen (2010) .", "cite_spans": [ { "start": 275, "end": 285, "text": "Liu (2008)", "ref_id": "BIBREF40" }, { "start": 293, "end": 314, "text": "Chung and Chen (2010)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Motivating Simple Relations", "sec_num": "3." }, { "text": "We identified three types of simple relations, as opposed to complex ones:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Simple Relations", "sec_num": "3." }, { "text": "(1) N1 and N2 denote two of the three elements of a host-attribute-value set. In (1a), the N1 usually denotes the value of the semantic role \"time\" of an event related to the N2. In \u5348\u591c\u5217\u8eca wuyie-lieche 'midnight train' and \u79cb\u87f9 qiu-xie 'autumn crab,' the temporal values are \u5348\u591c wuyie 'midnight' and \u79cb qiu 'autumn,' respectively. The two NNCs either can be elaborated to mean 'trains that travel at midnight' and 'crabs that reach maturity in autumn,' or can be simply put as 'trains at midnight' and 'crabs in autumn,' omitting the events. In (1b), locational N1s usually denote place names. (1a) and (1b) are similar in that understanding of the NNCs does not depend on figuring out the bridging events that decide the semantic roles of the component nouns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Simple Relations", "sec_num": "3." }, { "text": "It should be noted, however, that the nature of the event that takes place in the time or space denoted by N1 can be less than straightforward. Sometimes, this indeterminacy is caused by the meaning shift of individual components. 
Take \u79cb\u8475 qiu-kui 'okra' for example. Even native speakers may have no idea what happens to the N2 '\u8475' kui in autumn (i.e. \u79cb qiu 'autumn'). This is because \u8475 kui may not be as familiar a vegetable to modern people as it was when the compound was coined. Sometimes, meaning extension allows multiple readings of a word. For example, in antiquity, when international travel was essentially impossible, \u5e0c\u81d8\u4eba xila-ren 'Greeks' usually lived and stayed in Greece, but nowadays \u5e0c\u81d8 \u4eba xila-ren 'Greeks' and \u5e0c\u81d8\u83dc xila-cai 'Greek dishes' can reach far beyond the national borders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Simple Relations", "sec_num": "3." }, { "text": "Nevertheless, while the bridging event can be obscure or diverse, NNCs with temporal or locational N1s share one common characteristic: Some bridging event(s) exists, but it does not have to be clearly identified to enable sufficient understanding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Simple Relations", "sec_num": "3." }, { "text": "Finally, (1c) consists of host-attribute-value relations other than time and space. As argued by Chung and Chen (2010) in line with Liu (2008) , objects and events are characterized by the attributes they have, and attributes are characterized in turn by values. For the examples in (1c), the morphemes \u5f0f shi 'style,' \u50f9 jia 'price,' and \u901f su 'speed' are attributes and \u9435 tie 'iron' and \u6cd5 fa 'French' are attribute-values of material and style, respectively. In other words, both objects and events (collectively called \"hosts\") generally are associated with some attributes and attributes are associated with values. 
For example, artifacts, which are a subclass of objects, have the attribute \"material,\" and \"iron\" is a kind (value) of material.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Simple Relations", "sec_num": "3." }, { "text": "Given that N1 usually specifies N2, it is natural for value and host, value and attribute, and host and attribute to form NNCs in order to modify the host and attribute or to name the relevant host of an attribute.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Simple Relations", "sec_num": "3." }, { "text": "As for (2), in \u96d9\u5e95\u8239 shuang-di chuan 'double-bottom' and \u8173\u8e0f\u8eca\u8f2a\u80ce jiao-ta-che luntai 'bicycle tire,' N1 and N2 are not interacting participants of an event. Likewise, in 3 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Simple Relations", "sec_num": "3." }, { "text": "We chose NNC-productive N2s (i.e. those that form NNCs with various types of N1s) from our Prefix-Suffix Database (http://140.109.19.103/affix/), sorted them according to their semantic categories and the situations their derived NNCs described, and matched these situations with FrameNet's frames.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping NNCs to FrameNet's Frames", "sec_num": "4." }, { "text": "To the extent that frames represent concepts, to map NNCs to frames is to identify the concepts NNCs convey. The corpus data to date have indicated that N2s of nine semantic categories are NNC-productive. They are: \"people,\" \"people of different vocations,\" \"food,\" \"clothing,\" \"container,\" \"vehicle,\" \"wealth,\" \"text,\" and \"road.\" We have listed the most common relations between N1 and N2 for each category in the appendix. 
These categories each can be mapped to one or more entity frames, where the N2 is represented by an FE that usually has the same name as the frame itself and the N1 by another FE of the frame. The above mappings show that NNCs that involve simple (as well as complex) relations correspond to FE pairs in FrameNet's entity frames. Take \u7389\u7c73\u9905 yumi-bing 'corn cake' for example. The NNC can be mapped to FOOD, with the N2 \u9905 bing 'cake' denoting the FE \"Food\" and the N1 \u7389\u7c73 yumi 'corn' denoting \"Material,\" which is another FE of the frame.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping NNCs to FrameNet's Frames", "sec_num": "4." }, { "text": "For NNCs of complex relations, besides an entity frame, the N1 usually can be mapped to another event frame, a point we will return to in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping NNCs to FrameNet's Frames", "sec_num": "4." }, { "text": "We have two findings attested to by the behavioral patterns of the nine semantic categories of N2s and their derived NNCs. First, NNCs generated by N2s of the same semantic category mostly correspond to one or a few conceptually-related frames. Second, some of the relations mapped are simple and some are complex, with N2 categories varying in their tendencies to denote simple and complex semantic relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5." }, { "text": "We noticed that, when N1 and N2 are bridged by events, they usually can be mapped to both an entity frame and one or more event frames. 
We also found that common bridging events that link N1s to an N2 for each semantic category of N2 are limited.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapped to Entity Frames, Bridged by a Few Events, and Involving Limited Semantic Relations", "sec_num": "5.1" }, { "text": "For example, some of the NNCs the N2 category \"money\" derives include \u4e2d\u8cc7 zhong-zi 'China capital,' \u8eca\u6b3e che-kuan 'money for buying a car,' and \u6240\u8cbb suo-fei 'institute fund,' which we identified as belonging to the entity frame \"MONEY,\" where the N1s in the above three examples can be mapped to the FE \"Use\" and the N2 to \"Money.\" Meanwhile, we found these N1s labeled as FEs in at least two event frames, \"COMMERCE_BUY\" and \"COMMERCE_SELL,\" where the three N1s \u4e2d zhong 'China,' \u8eca che 'car,' and \u6240 suo 'institute' correspond to the FEs Buyer, Goods, and Seller, respectively; these are all core FEs of the event frames. Since the range of LUs and FEs for each frame usually is limited, the range of possible interpretations is more or less restricted for each NNC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapped to Entity Frames, Bridged by a Few Events, and Involving Limited Semantic Relations", "sec_num": "5.1" }, { "text": "Below are all of the LUs and some of the FEs of these two event frames. (Not all non-core FEs are listed.) 
COMMERCE_SELL LUs: auction.n, auction.v, retail.v, retailer.n, sale.n, sell.v, vend.v, vendor.n Core FEs: Buyer, Goods, Seller Non-core FEs (not exhaustively listed): Manner, Means, Money, Rate, Unit, etc.", "cite_spans": [ { "start": 121, "end": 202, "text": "LUs: auction.n, auction.v, retail.v, retailer.n, sale.n, sell.v, vend.v, vendor.n", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Mapped to Entity Frames, Bridged by a Few Events, and Involving Limited Semantic Relations", "sec_num": "5.1" }, { "text": "Most of the N2 categories we have analyzed so far have produced both simple and complex-type NNCs. Below are two entity frames, CLOTHING and VEHICLE, which correspond to the N2 categories \"clothing\" and \"vehicle\". Each frame has at least one simple and one complex relation, which differ in frequency. The simple ones are labeled with their subclasses (and FEs 2 ); the complex ones are labeled with the relevant FEs, which refer to the FEs that occur most or second-most often. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N2 Categories Vary in Tendencies to Involve Simple and Complex Relations", "sec_num": "5.2" }, { "text": "As shown in Table 1 , the average coverage of the semantic relations that FrameNet and E-HowNet have is 94.2% for the 1,153 compositional NNCs in the Prefix-Suffix Database.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "The Coverage of the Identified Semantic Relations", "sec_num": "5.3" }, { "text": "Below is the individual coverage of each N2 category. Table 2 shows the average coverage of the three and five most frequent semantic relations. 
For the mapped percentage of each fine-grained relation for the nine categories, please refer to Appendix B.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 61, "text": "Table 2", "ref_id": "TABREF17" } ], "eq_spans": [], "section": "The Coverage of the Identified Semantic Relations", "sec_num": "5.3" }, { "text": "We found that the top three most frequent semantic relations account for about eighty percent of the NNC instances. Meanwhile, the five most frequent relations on average have about 8% better coverage than the top three. Nevertheless, we noticed individual differences among N2's categories, with \"food\" and \"vehicle\" having a much lower coverage than others. Also, although we considered compositional NNCs only, there are still some relations that we lack labels for in FrameNet and E-HowNet. Some of these instances include metaphors, e.g. \u91ce \u96de \u8eca yie-ji che 'unlicensed car,' \u9738\u738b\u8eca bawang-che 'unpaid ride'; apposition, e.g. \u9152\u5427\u8eca jiuba-che 'bar van,' \u888d\u670d pao-fu, 'robe,' \u9776\u8239 ba-chuan 'target ship'; and those whose N1 indicates a general \"use\" relation unlike the other fine-grained mappings, e.g. \u5546\u8f2a shang-lun 'merchant vessel,' \u4ea4\u901a\u8239 jiaotung-chuan 'commuter ship.'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Coverage of the Identified Semantic Relations", "sec_num": "5.3" }, { "text": "In this section, we will relate the two findings to our two research questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6." }, { "text": "In the nine categories we investigated, the NNCs' bridging verbs, as well as the possible semantic roles that N1s and N2s take, are very limited, with an average coverage of over ninety percent. 
Even the least covered category \"food\" has 69.8% of its instances accounted for.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First, are there only limited bridging verbs and semantic relations between the two component nouns?", "sec_num": null }, { "text": "These findings support previous studies proposing that N1 and N2 often are bridged by events (Levi, 1978; Leonard, 1984; Ryder, 1994) , that bridging events are limited (Levi, 1978; Copestake & Lascarides, 1997; Copestake & Briscoe, 2005) , and that the semantic relations are limited as well (S\u00d8gaard, 2005; Huang, 2008) .", "cite_spans": [ { "start": 93, "end": 105, "text": "(Levi, 1978;", "ref_id": "BIBREF39" }, { "start": 106, "end": 120, "text": "Leonard, 1984;", "ref_id": "BIBREF38" }, { "start": 121, "end": 133, "text": "Ryder, 1994)", "ref_id": "BIBREF41" }, { "start": 169, "end": 181, "text": "(Levi, 1978;", "ref_id": "BIBREF39" }, { "start": 182, "end": 211, "text": "Copestake & Lascarides, 1997;", "ref_id": null }, { "start": 212, "end": 238, "text": "Copestake & Briscoe, 2005)", "ref_id": null }, { "start": 293, "end": 308, "text": "(S\u00d8gaard, 2005;", "ref_id": "BIBREF42" }, { "start": 309, "end": 321, "text": "Huang, 2008)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "First, are there only limited bridging verbs and semantic relations between the two component nouns?", "sec_num": null }, { "text": "To understand NNCs of simple relations does not require the identification of what one component entity does to the other. FrameNet data also suggest that bridging events sometimes are absent. We say this because we found that, among the NNCs that a N2 derives, N1s that involve complex relations usually can be mapped to FEs in eventive frames that the bridging event represents, while those that involve simple ones do not. 
For example, although \"Material\" is a productive static FE in the entity frame CLOTHING, it is not among the FEs of the eventive frame DRESSING, which describes the process and state of putting and having clothes on. In contrast, \"Wearer\" and \"Body_location,\" which are also FEs of CLOTHING but involve complex relations, also assume FEs in DRESSING as arguments of LUs like \"dress-up\" and \"put-on.\" Such distributional differences of FEs mean that the N1s represented by them are also distributed differently, resulting in NNCs contrasting in simple and complex terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second, are there semantic relations between N1 and N2 that do not involve bridging events?", "sec_num": null }, { "text": "While the simple-complex distinction also is attested to in a corpus-based framework like FrameNet, it seems that it is not recognized as a distinct class in Levi's widely-adopted system. While it appears that Levi (1978) considers some simple NNCs under the predicate HAVE, the status of other simple NNCs is unclear. For example, imperial bearing is classified as an instance of HAVE and paraphrased as 'have the bearing of an emperor.' Nevertheless, it seems that HAVE does not cover all the simple relations, as she defines the predicate as roughly corresponding to the semantic roles of \"productive,\" \"constitutive,\" and \"compositional,\" which do not exhaust all simple relations. Moreover, some simple instances fall under her other predicates. For example, apple seed is considered an instance of FROM. 
We think that mapping to FrameNet helps sort simple NNCs under semantic relations like Levi's predicates.", "cite_spans": [ { "start": 210, "end": 221, "text": "Levi (1978)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Second, are there semantic relations between N1 and N2 that do not involve bridging events?", "sec_num": null }, { "text": "With regard to implementation, the findings indicate that simple and complex NNCs should be processed differently. For simple NNCs, host-attribute-value sets, place names, temporal expressions, and conjunction pairs to some degree can be exhaustively listed, as we have done in our knowledge base, Extended-HowNet, reducing the identification of simple relations to table-checking. The E-HowNet taxonomy can also detect meronymy relations. For complex NNCs, the inventory of LUs and their argument FEs in FrameNet's frames narrows down the possible interpretations of NNCs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second, are there semantic relations between N1 and N2 that do not involve bridging events?", "sec_num": null }, { "text": "We believe such mappings can complement the inadequacies of frameworks like Levi's (1979) . First, the design of FrameNet makes NNCs' readings more specific, as frames use fine-grained FEs and LUs are real words. Similarly, classification can be FE-based. For example, that lemon peels and apple seed both belong to the FE pair Whole-Part can be a reason for grouping them under the same predicate, for example, HAVE. Another classifying criterion is the simple-complex distinction. For example, analyzing the example differently, NNCs of the HAVE type can be defined as being made up of FE pairs like Whole-Part or Part-Whole and as belonging to the simple type. In the same vein, her IN category may involve NNCs with N1s of the FEs Time and Location, which in turn define the simple subclass of time and space. 
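The table-checking idea for simple NNCs amounts to a plain lookup; a minimal sketch in Python (the table entries below are invented for illustration and are not actual E-HowNet data):

```python
# Toy illustration of reducing simple-relation identification to
# table-checking. The entries are invented examples, not actual
# E-HowNet host-attribute-value data.
SIMPLE_RELATION_TABLE = {
    ("apple", "seed"): "Whole-Part",
    ("lemon", "peel"): "Whole-Part",
    ("summer", "camp"): "Time",
    ("city", "park"): "Location",
}

def identify_simple_relation(n1, n2):
    """Return the simple semantic relation of an N1-N2 compound,
    or None if the pair is not listed (i.e., possibly complex)."""
    return SIMPLE_RELATION_TABLE.get((n1, n2))
```

A miss (a None result) would then route the compound to the frame-based analysis used for complex NNCs.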
Finally, since frames are motivated by semantic and syntactic differences between words, they are expected to grow in coverage with more words' behaviors analyzed and new frames annotated.", "cite_spans": [ { "start": 76, "end": 89, "text": "Levi's (1979)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Second, are there semantic relations between N1 and N2 that do not involve bridging events?", "sec_num": null }, { "text": "The current study shares the insights with previous researchers that NNCs usually describe a limited range of situations and that the meaning of an NNC is compositional, while putting forth the idea that the range of semantic relations for event-bridging NNCs usually is clustered around the head, i.e. N2. We attained such findings by mapping the situations sorted by N2's semantic categories to frames from FrameNet, which is based on corpus-attested thematic patterns. We also noted that N1 and N2 sometimes are bridged in non-eventive ways. Both eventive and non-eventive cases can be interpreted through mapping to resources like FrameNet and E-HowNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "Appendix B: The mapped percentage of each N1 semantic role for the nine categories:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "(\"Others\" refers to instances we could not map with existing semantic role labels from FrameNet and E-HowNet.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "Road Text ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N2 category", "sec_num": null }, { "text": "In recent years, Mandarin text-to-speech synthesis systems have been proposed and have achieved satisfactory performance (Ling, 2012; . These systems are able to synthesize fluent and natural speech, even with personal characteristics (Huang, 2013) . 
Recently, singing voice synthesis has emerged as a popular research topic. Such systems enable computers to sing any song.", "cite_spans": [ { "start": 121, "end": 133, "text": "(Ling, 2012;", "ref_id": "BIBREF53" }, { "start": 235, "end": 248, "text": "(Huang, 2013)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "There are two main methods in the research on corpus-based singing voice synthesis. The first one is the sample-based approach. The principle of this method is to use a large database of recordings of singing voices that are further segmented into units. (Authors' affiliation: Department of Computer Science and Information Engineering, National Cheng Kung University, Taiwan. E-mail: { carrie771221; ychin.huang; chunghsienwu}@gmail.com) In the synthesis phase, based on a given score with the lyrics, the system then searches and selects appropriate sub-word units for concatenation. VOCALOID (Kenmochi, 2007) is such a singing voice synthesizer that enables the user to input lyrics and the corresponding melody. Given the score information, the system selects the necessary samples from the Singer Library and concatenates them to produce the synthesized singing voice. Finally, the system performs pitch conversion and timbre manipulation to generate smoothed concatenated samples. The software was originally only available in English and Japanese, but VOCALOID 3 has added support for Spanish, Chinese, and Korean. A Mandarin singing voice system using a unit selection method was proposed in (Zhou, 2008) . Singing units in this method are chosen from a singing voice corpus with the lyrics of the song and the musical score information embedded in a MIDI file. To improve the synthesis quality, synthesis unit selection and prosody and amplitude modification are applied. This system uses a Hanning window to smooth instances where speech segments were concatenated. 
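Such window-based smoothing of a concatenation point can be sketched as follows (our illustration, not the cited system's actual code): the two segments are crossfaded with complementary halves of a Hanning window.

```python
import numpy as np

def crossfade_concat(a, b, overlap):
    """Concatenate two waveform segments, smoothing the join by
    crossfading with the two halves of a Hanning window."""
    w = np.hanning(2 * overlap)
    fade_out, fade_in = w[overlap:], w[:overlap]   # descending / ascending halves
    joined = a[-overlap:] * fade_out + b[:overlap] * fade_in
    return np.concatenate([a[:-overlap], joined, b[overlap:]])
```

The overlap length trades smoothness at the boundary against how much of each unit is altered.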
Although the unit selection method is able to synthesize high-quality speech at the waveform level, concatenation-based methods suffer from the discontinuity problem at the boundaries between concatenated units. As the different samples that make up the singing voice are recorded at different pitches and for different phonemes, discontinuity might exist in the resulting singing voice.", "cite_spans": [ { "start": 573, "end": 589, "text": "(Kenmochi, 2007)", "ref_id": "BIBREF49" }, { "start": 1178, "end": 1190, "text": "(Zhou, 2008)", "ref_id": "BIBREF62" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The other method is the statistical approach, in which hidden Markov models (HMMs) (Oura, 2010; Saino, 2006) are the most widely used. Acoustic parameters are extracted from a singing voice database and modeled by context-dependent HMMs. The acoustic parameters are generated by the concatenated HMM sequence. Finally, vocoded waveforms of the singing voice are generated from the inverse filter of the acoustic parameters. Sinsy (Oura, 2010 ) is a free online HMM-based singing voice synthesis system that provides Japanese and English singing voices. Users can obtain synthesized singing voices by uploading musical scores. Singing voices synthesized with HMMs sound blurred due to the limitations of current vocoding techniques. Nevertheless, this approach can generate a smooth and stable singing voice, and its voice characteristics can be modified easily by transforming the parameters appropriately.", "cite_spans": [ { "start": 82, "end": 94, "text": "(Oura, 2010;", "ref_id": "BIBREF54" }, { "start": 95, "end": 107, "text": "Saino, 2006)", "ref_id": "BIBREF55" }, { "start": 433, "end": 444, "text": "(Oura, 2010", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."
}, { "text": "In addition to the concatenation-based method and the statistical method, some other methods have been proposed to generate a Mandarin singing voice, e.g., the Harmonic plus Noise Model (HNM) (Gu, 2008) , which adopted the HNM parameters of a source syllable to synthesize singing syllables of diverse pitches and durations. This method can generate singing voices with good quality. Nevertheless, discontinuity at the concatenation points is still a major problem. The speech-to-singing method (Saitou, 2007) is another approach. Instead of synthesizing from a singing database, the speech-to-singing method converts speech into a singing voice by a parameter control model. Similarly, text-to-singing (lyrics-to-singing) synthesis (Li, 2011) generates synthesized speech for the input lyrics with a TTS system, followed by a melody control model that converts the speech signals into singing voices by modifying the acoustic parameters. These two methods are based mainly on conversion rules, which can be incomplete.", "cite_spans": [ { "start": 189, "end": 199, "text": "(Gu, 2008)", "ref_id": "BIBREF43" }, { "start": 506, "end": 520, "text": "(Saitou, 2007)", "ref_id": "BIBREF56" }, { "start": 793, "end": 803, "text": "(Li, 2011)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Research on speech synthesis and singing synthesis has been closely linked, but there are important differences between the two tasks with respect to the generated voices. Singing voices consist mainly of voiced segments, whereas speech contains a relatively large percentage of unvoiced sounds (Kim, 2003) . Besides, fluency and continuity are very important properties of singing voices. In order to synthesize a smooth and continuous singing voice, an HMM-based synthesis approach is adopted in this study to build our singing voice synthesis system. 
To the best of our knowledge, the currently available HMM-based singing voice synthesis systems have not been applied to Mandarin singing voices. By carefully defining and tailoring the synthesis units and the question set, a Mandarin singing voice synthesis system based on an HMM framework was successfully constructed in this study.", "cite_spans": [ { "start": 295, "end": 306, "text": "(Kim, 2003)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The rest of the paper is organized as follows. The proposed HMM-based singing voice synthesis system is introduced in Section 2. Section 3 consists of subjective and objective evaluations of the proposed system, compared to the original HMM-based singing voice synthesis system. Concluding remarks and future work are given in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In recent years, the number of studies on HMM-based speech synthesis has grown. Some research has made progress on prosody improvement (Hsia, 2010; Huang, 2012) to obtain more natural speech. Recently, an HMM-based method has been applied to singing voice synthesis (Saino, 2006) . There are more combinations of contextual factors in singing voice synthesis than in speech synthesis. Applying a unit selection method to singing voice synthesis is quite difficult, because it requires a huge amount of singing voice data. In contrast, an HMM-based system can be constructed using a relatively small amount of training data. 
As a result, the HMM-based approach is better suited to constructing a singing voice synthesizer.", "cite_spans": [ { "start": 135, "end": 147, "text": "(Hsia, 2010;", "ref_id": "BIBREF44" }, { "start": 148, "end": 160, "text": "Huang, 2012)", "ref_id": "BIBREF46" }, { "start": 266, "end": 279, "text": "(Saino, 2006)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Mandarin Singing Voice Synthesis System", "sec_num": "2." }, { "text": "The system proposed in this study is based on the HMM-based approach developed by the HTS working group . The proposed structure of the HMM-based singing synthesis system is shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 195, "end": 203, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Proposed Mandarin Singing Voice Synthesis System", "sec_num": "2." }, { "text": "In the training phase of the proposed system, excitation, spectral, and aperiodic parameters are extracted from a singing voice database. Lyrics and notes of the songs in the singing corpus are treated as contextual information for generating context-dependent label sequences. Then, the sequences are split and clustered with context-dependent question sets, and the context-dependent HMMs are trained based on the clustered phone segments. In the synthesis phase, a musical score and the lyrics to be synthesized are also converted into a context-dependent label sequence. Based on the label sequence, a sequence of parameters, consisting of excitation, spectral, and aperiodic parameters, corresponding to the given song is obtained from the concatenated context-dependent HMMs. Finally, the obtained parameter sequences are synthesized to generate the singing voice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. 
Structure of the HMM-based Singing Voice Synthesis System", "sec_num": null }, { "text": "Singing is the act of producing musical sounds with one's voice, and one main difference between a singing voice and speech is the use of the tonality and rhythm of a song. Therefore, the contextual factors should consist of not only linguistic information but also note information. In addition, the cue information provides the actual timing of each phone in the singing data. The details of the model definition are described in the following section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Definition", "sec_num": "2.1" }, { "text": "Using Tailored Synthesis Units and Question Sets", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM-based Mandarin Singing Voice Synthesis 67", "sec_num": null }, { "text": "In HMM-based Mandarin speech synthesis, the \"segmental Tonal Phone Model (STPM)\" (Huang, 2004) is often adopted to define the HMM-based phone models. Only a relatively small number of phone models are defined to characterize all Mandarin tonal syllables. Furthermore, in order to represent the five lexical tones for Mandarin syllables, each Mandarin syllable is defined to consist of three parts, based on phonology (Lin, 1992) , as C+V1+V2. In this phonological structure, C denotes the first extended initial phone and the following units (V1 and V2) are tonal final phones. 
Tonal final phones convey tonal information using the extended tone notations H (high), M (middle), and L (low) (Tone 1: H+H, Tone 2: L+H, Tone 3: L+L, Tone 4: H+L, and Tone 5: M+M).", "cite_spans": [ { "start": 80, "end": 93, "text": "(Huang, 2004)", "ref_id": "BIBREF47" }, { "start": 416, "end": 427, "text": "(Lin, 1992)", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "Linguistic Information", "sec_num": "2.1.1" }, { "text": "Although the STPM can describe all pitch patterns in Mandarin speech, pitch patterns in singing voices are quite different from those in read speech. Figure 2 shows the pitch contours (blue lines) of the read speech and the singing voice of the same sentence produced by the same person. As the figure shows, the pitch contour of the read sentence is controlled by the tone of each syllable. In contrast, the pitch contour of a sung sentence is relatively flat and corresponds to the musical notes of the corresponding syllables. For the pitch contour of a singing voice, the musical note is more of a requirement than the tones of the syllables. Therefore, each syllable for a singing voice is redefined as C+V, where C is still the extended initial sub-syllable and V is the final sub-syllable without tonal information. Rhythm is one major difference between read speech and a singing voice. Vowels usually convey the rhythm of a singing voice since the vocal tract remains open while uttering a vowel, allowing the resonance frequencies of the vocal tract to remain stable. Because of these characteristics, vowels are probably one of the most important factors in representing a good singing voice. A Mandarin syllable consists of two parts: initial and final. The initial part is optional and is composed of consonants. The final part, namely the vowels, includes the medial and the rime. The medial is located between the initial and the rime. The medial phonologically is connected with the rime rather than the initial. 
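The tone-letter decomposition above can be sketched as a small mapping (the function and identifiers are ours, for illustration only):

```python
# STPM-style tone decomposition: each of the five Mandarin tones is
# expressed as a pair of tone letters, H (high), M (middle), L (low).
TONE_LETTERS = {1: ("H", "H"), 2: ("L", "H"), 3: ("L", "L"),
                4: ("H", "L"), 5: ("M", "M")}

def stpm_units(initial, final1, final2, tone):
    """Expand a C+V1+V2 syllable into STPM phone units, attaching a
    tone letter to each tonal final phone (illustrative naming)."""
    t1, t2 = TONE_LETTERS[tone]
    return [initial, f"{final1}_{t1}", f"{final2}_{t2}"]
```

For the singing models described next, the tone letters would simply be dropped, leaving the toneless C+V units.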
Thus, in the definition of the singing sub-syllable models, each medial is combined with the rime, and the combination of medial and rime is collectively treated as a final; some examples are listed in Table 1 . For the singing model definition, the phonetic annotation is based on Hanyu Pinyin. Note that the tone information (Arabic numerals) of the original tonal syllable is ignored for the initial and final models in the singing sub-syllable definition. Besides, we define the final models with a medial as separate models to ensure that each vowel can have a specific model representing this property.", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 145, "text": "Figure 2", "ref_id": "FIGREF12" }, { "start": 1721, "end": 1728, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Linguistic Information", "sec_num": "2.1.1" }, { "text": "Tonal Syllable", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1. Examples of finals with medial", "sec_num": null }, { "text": "C V: \u3109\u4e00\u3120\u02cb diau4 = d + iau; \u310c\u3128\u311b\u02c7 luo3 = l + uo; \u3112\u3129\u311d\u02ca shiueh2 = sh + iueh", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1. Examples of finals with medial", "sec_num": null }, { "text": "A syllable with only an initial is generally followed by the empty rime \"\u5e00\". The empty rime does not have a phonetic annotation in the word. In order to represent this property, we define a phoneme \"zr\" as the empty rime of the retroflex, which is connected only to the retroflex class of initial phonemes. Correspondingly, the phoneme \"sr\" is the empty rime of the alveolar, which is connected only to the alveolar class of initial phonemes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1. Examples of finals with medial", "sec_num": null }, { "text": "In general, a long duration note is sung differently from a short duration note. 
For shorter notes, the temporal variation is relatively small and stable. In contrast, the temporal variation of a longer note is much larger and less stable. Lengthening a syllable sung on a short duration note cannot precisely represent the expression of a syllable sung on a long duration note. Therefore, when a word corresponds to a half note or longer, finals followed by an \"L\" are defined to denote the long duration model. According to the above rules, 95 Mandarin singing sub-syllables are obtained from the definition for the singing voice. There are 21 initial sub-syllables, 18 final sub-syllables (2 of the finals are empty rime phonemes), 20 medials combined with final sub-syllables, and 36 long duration sub-syllables. In addition to the 95 singing sub-syllables, silence and pause models are further included. Silence is an unvoiced segment at the beginning and the end of a song. Pause is an unvoiced segment in the middle of a song.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1. Examples of finals with medial", "sec_num": null }, { "text": "In addition to lyrical information, note information is one of the vital factors for singing voice synthesis. The contextual factors of note information consist of three categories that fully describe singing characteristics: the pitch and duration of the note and the song structure. Note pitch refers to the melody of a song and determines whether the song sounds good. In this category, the absolute pitch, relative pitch, pitch difference between the previous and current notes, and pitch difference between the current and next notes are included.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Note Information", "sec_num": "2.1.2" }, { "text": "Duration is the length of a note and is one of the bases of rhythm. In this category, the length of the note can be expressed by three kinds of standards. 
Song structure refers to the overall musical form the song adopts and the ordering of the musical score. Notes at different positions in a measure or phrase may be expressed differently due to breathing. In this category, the beat, tempo, key of the song, and position of each note are included.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM-based Mandarin Singing Voice Synthesis 69", "sec_num": null }, { "text": "The cue information considered in the contextual factors consists of the timing and the length of each sub-syllable. We manually segmented all of the songs at the sub-syllable level. The timing information of a sub-syllable, measured in intervals of 0.1 seconds, is converted into the absolute length of the note. The position of each note in the measure or phrase is also converted according to the cue information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cue Information", "sec_num": "2.1.3" }, { "text": "Based on the unit definition and contextual factors, we define five categories of questions for the question set: sub-syllable, syllable, phrase, song, and note. 
The details of the question set are described as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Set for Decision Trees", "sec_num": "2.2" }, { "text": "(1) Sub-syllable: (current sub-syllable, preceding one and two sub-syllables, and succeeding one and two sub-syllables) Initial/final, final with medial, long model, articulation category of the initial, and pronunciation category of the final", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Set for Decision Trees", "sec_num": "2.2" }, { "text": "(2) Syllable: The number of sub-syllables in a syllable and the position of the syllable in the note", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Set for Decision Trees", "sec_num": "2.2" }, { "text": "(3) Phrase: The number of sub-syllables/syllables in a phrase (4) Song: Average number of sub-syllables/syllables in each measure of the song and the number of phrases in this song (5) Note: The absolute/relative pitch of the note; the key, beat, and tempo of the note; the length of the note by syllable/0.1 second/thirty-second note; the position of the current note in the current measure by syllable/0.1 second/ thirty-second note; and the position of the current note in the current phrase syllable/0.1 second/thirty-second note", "cite_spans": [ { "start": 181, "end": 184, "text": "(5)", "ref_id": "BIBREF121" } ], "ref_spans": [], "eq_spans": [], "section": "Question Set for Decision Trees", "sec_num": "2.2" }, { "text": "There are 5364 different questions defined in the question set. The HMMs for the baseline Mandarin singing voice synthesis system were trained based on the entire question set, and the resulting clustered HMMs are shown in Table 2 and Table 3 . As shown in these tables, the number of leaf nodes in the tree clustered using fundamental frequency (F0) is 3951. The number of each state for the clustered F0 models is shown in Table 2 . 
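The five question categories above amount to boolean predicates evaluated on a context-dependent label; a simplified sketch (the field names and questions are invented for illustration, not the system's actual label format):

```python
# Simplified sketch of decision-tree questions over a context-dependent
# label. The field names and question names are invented for
# illustration; the real label format is richer.
label = {
    "subsyllable": "iau_L",      # current unit (a long duration final)
    "is_initial": False,
    "note_pitch": "C4",
    "note_len_32nd": 8,          # note length in thirty-second notes
    "pos_in_measure_32nd": 0,
}

questions = {
    "C-Long_Final": lambda l: l["subsyllable"].endswith("_L"),
    "Note_Pitch==C4": lambda l: l["note_pitch"] == "C4",
    "Note_Len>=QuarterNote": lambda l: l["note_len_32nd"] >= 8,
}

# Each yes/no answer decides a branch during tree-based clustering.
answers = {name: q(label) for name, q in questions.items()}
```

In tree-based state clustering, each such yes/no answer splits the pool of context-dependent states at one tree node.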
The most frequently used questions for every clustered tree of each state were sub-syllable types, position of the note in the measure or phrase, and phrase level.", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 242, "text": "Table 2 and Table 3", "ref_id": "TABREF6" }, { "start": 425, "end": 432, "text": "Table 2", "ref_id": "TABREF17" } ], "eq_spans": [], "section": "Baseline Model", "sec_num": "2.3" }, { "text": "The number of leaf nodes in the trees for mel-cepstral coefficients (mgc) is 2844. The number of leaf nodes in each state is shown in Table 3 . The most frequently used questions are the same as those for F0. ", "cite_spans": [], "ref_spans": [ { "start": 138, "end": 145, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Baseline Model", "sec_num": "2.3" }, { "text": "The baseline singing voice system can synthesize arbitrary songs, but it still has a lot of room for improvement. The approaches we implemented to refine our system include question set modification, singing voice database extension using pitch-shift pseudo data, and vibrato creation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Refinement", "sec_num": "2.4" }, { "text": "Pitch is highly related to the notes and the sounds we hear when someone is singing. The quality of a song strongly depends on the accurate pitch of all notes produced by the singer. The quality of HMM-based synthesized singing voices depends strongly on the training data, owing to the statistical nature of the models. Therefore, the singing database should cover the pitch range of the notes in the song. Pitch-shift pseudo data help cover the missing pitches of sub-syllables and increase the size of the training data. We examined whether all of the Mandarin sub-syllables we defined cover the whole pitch range (C4~B4). 
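The C4~B4 coverage check and the pitch-shift factor behind the pseudo data reduce to simple arithmetic, sketched below (a rough illustration; the actual shifting method used by the system may differ):

```python
# MIDI note numbers for the target pitch range C4..B4.
MIDI_C4, MIDI_B4 = 60, 71

def shift_factor(semitones):
    """Frequency ratio corresponding to a pitch shift of the given
    number of semitones (equal temperament)."""
    return 2.0 ** (semitones / 12.0)

def missing_pitches(covered_midi):
    """Return the MIDI notes in C4~B4 not yet covered for a
    sub-syllable, to be filled in with pitch-shifted pseudo data."""
    covered = set(covered_midi)
    return [m for m in range(MIDI_C4, MIDI_B4 + 1) if m not in covered]
```

Keeping the shift to a few semitones (a small `shift_factor`) limits the timbre change mentioned below.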
Since shifting the frequency of a note too far changes the timbre, the missing pitches of sub-syllables are obtained using nearby notes from other songs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pitch-Shift Pseudo Data", "sec_num": "2.4.1" }, { "text": "The parameters generated from the clustered HMMs are highly correlated with the speech quality of the synthesized singing voice. A large number of contextual factors is not suitable when the training data are not large enough to be clustered by the various contextual factors, and this may cause a data sparseness problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Set Modification", "sec_num": "2.4.2" }, { "text": "The selection of the question set is crucial for generating proper models. In the baseline system, the most frequently used questions in the trees for F0 and mel-cepstral coefficients are sub-syllable types, position of the note, and phrase level. Nevertheless, our singing database is not large enough to cover every contextual factor. Thus, the question set should be tailored to remove some unsuitable questions. The removed questions consist of three types: duplicate questions, indirect questions, and relative questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM-based Mandarin Singing Voice Synthesis 71", "sec_num": null }, { "text": "Duplicate questions refer to cases where the note length can be represented by two types of units, 0.1 second and the thirty-second note. Although 0.1 second is an absolute length and the thirty-second note is a relative length in the recorded waveform, both units describe the same information. So, we delete the note length questions in units of 0.1 second. Indirect questions are questions at the level of the phrase and song, which convey what is called paralinguistic information. 
These questions do not directly represent the information of one note, because they are mainly about how many sub-syllables and syllables there are in the phrases and the average numbers of sub-syllables and syllables in each measure of the songs. The essential information of a note is its pitch and length, so the questions about the position of a note are also indirect questions. The paralinguistic information, however, could be useful when the size of the corpus is large. Different songs have different keys, so the reference for relative pitch also differs. Two notes with the same relative pitch may have different absolute pitch values. Therefore, we delete the question sets related to relative pitch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM-based Mandarin Singing Voice Synthesis 71", "sec_num": null }, { "text": "Furthermore, we modify the absolute pitch questions by keeping the questions with absolute answers and removing the questions with comparative answers. Thus, we can ensure that a leaf node divided by the absolute pitch questions is clustered with the same absolute pitch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HMM-based Mandarin Singing Voice Synthesis 71", "sec_num": null }, { "text": "Vocal vibrato is a natural oscillation of musical pitch, and singers employ vibrato as an expressive and musically useful aspect of the performance. Adding vibrato can make the synthesized singing voice more natural and expressive. The frequency and the amplitude are the two fundamental parameters affecting the characteristic sound of a vibrato effect. One method to create vibrato is to vary the time delay periodically (Z\u00f6lzer, 2002) , using the principle of the Doppler effect. 
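This delay-modulation scheme can be sketched as follows (a generic fractional delay line driven by a sinusoidal LFO; the parameter values are illustrative, not the system's settings):

```python
import numpy as np

def vibrato(x, sr, rate_hz=6.0, depth_samples=20.0, base_delay=40.0):
    """Apply vibrato by reading the signal through a delay line whose
    delay is varied periodically by a sinusoidal LFO."""
    n = np.arange(len(x), dtype=float)
    # LFO modulates the delay around its base value
    delay = base_delay + depth_samples * np.sin(2 * np.pi * rate_hz * n / sr)
    read_pos = np.clip(n - delay, 0, len(x) - 1)
    # linear interpolation handles fractional read positions
    i0 = np.floor(read_pos).astype(int)
    i1 = np.minimum(i0 + 1, len(x) - 1)
    frac = read_pos - i0
    return (1.0 - frac) * x[i0] + frac * x[i1]
```

The periodically changing read position slightly compresses and stretches the waveform in time, which is heard as a pitch oscillation (the Doppler principle just mentioned).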
Our system implemented the vibrato effect with a delay line and a low-frequency oscillator (LFO) that varies the delay.", "cite_spans": [ { "start": 452, "end": 466, "text": "(Z\u00f6lzer, 2002)", "ref_id": "BIBREF63" } ], "ref_spans": [], "eq_spans": [], "section": "Vibrato Creation", "sec_num": "2.4.3" }, { "text": "For the construction of the singing voice database, musical scores from nursery rhymes and children's songs were considered as candidates. The major selection criterion for choosing the songs is phonetic coverage for synthesizing universal Mandarin singing voices. The lyrics of the selected songs should cover all of the sub-syllables in Mandarin. A total of 74 songs were selected. Some of the selected songs have two or more versions with the same melody but different lyrics. Considering the variation of pitch and timbre, a female singer who has participated in singing contests and is a member of an a cappella team was invited to provide a stable and natural-sounding singing voice. The singer used the built-in microphone of a Mac notebook for recording. The songs were recorded using Audacity. The environment where the singer recorded was quiet. Noises, including the metronome, were not allowed. Besides, each song was recorded in two versions in order to increase the size of the database. Singing data with a low signal-to-noise ratio or with energy exceeding a limit were not included. The amplitude of all singing data was normalized. An overview of this database is summarized in Table 4 . To improve the quality of the database, the sub-syllable boundaries and musical scores were manually corrected. ", "cite_spans": [], "ref_spans": [ { "start": 1219, "end": 1226, "text": "Table 4", "ref_id": "TABREF21" } ], "eq_spans": [], "section": "Singing Voice Database", "sec_num": "3.1" }, { "text": "Singing voice signals were sampled at a rate of 48 kHz and windowed by a 25ms Blackman window with a 5ms shift. 
Then, mel-cepstral coefficients were obtained using the STRAIGHT algorithm (Kawahara, 2006) . The feature vectors consisted of spectrum, excitation, and aperiodic factors. The spectrum parameter vectors consisted of 49th-order STRAIGHT mel-cepstral coefficients, including the zeroth coefficient, their delta, and delta-delta coefficients. The excitation parameter vectors consisted of log F0, its delta, and delta-delta.", "cite_spans": [ { "start": 186, "end": 202, "text": "(Kawahara, 2006)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Conditions", "sec_num": "3.2" }, { "text": "A seven-state (including the beginning and ending null states), left-to-right Hidden Semi-Markov Model (HSMM) was employed, in which the spectral part of each state was modeled by a single diagonal Gaussian output distribution. The excitation stream was modeled with a multi-space probability distribution HSMM (MSD-HSMM), which consisted of a Gaussian distribution for \"voiced\" frames and a discrete distribution for \"unvoiced\" frames.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Conditions", "sec_num": "3.2" }, { "text": "The term riffs and runs refers to a syllable with multiple notes, in other words, a quick articulation of a series of pitches sustained on a single vowel sound. In the proposed method, the generation of riffs and runs repeats the last final of the previous word to mimic this singing skill.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Conditions", "sec_num": "3.2" }, { "text": "Furthermore, in the middle of a song, vibrato is applied with an amplitude of 4E-4 ms, a frequency of 6 Hz, and a start time at 25% of the sub-syllable. 
At the end of a song, vibrato is applied with an amplitude of 8E-4 ms, a frequency of 5 Hz, and a start time at 50% of the sub-syllable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Conditions", "sec_num": "3.2" }, { "text": "To evaluate the constructed Mandarin singing voice synthesis system, we conducted a subjective listening test. Each of ten songs not included in the training data was divided into two parts, yielding 20 excerpts for testing. The test waveforms generated by the different systems were presented to the subjects in random order. Twelve native Mandarin-speaking subjects participated in the evaluation. Mean opinion score (MOS) and preference tests were used as the subjective evaluation measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "3.3" }, { "text": "To evaluate the effectiveness of the proposed refinements, i.e. question set modification and the inclusion of pitch-shift pseudo data, four different settings of the synthesis models were used. The settings and their descriptions are given in Table 5 . Figure 3 shows that the system generates F0 patterns similar to the actual F0 patterns of the musical score. Figure 4 shows that the pitch contour of the synthesized singing voice is almost the same as that of the original singing voice. Nevertheless, some singing phenomena, such as overshoot and preparation, were smoothed out after HMM training. 
", "cite_spans": [], "ref_spans": [ { "start": 316, "end": 323, "text": "Table 5", "ref_id": "TABREF22" }, { "start": 326, "end": 334, "text": "Figure 3", "ref_id": "FIGREF13" }, { "start": 470, "end": 478, "text": "Figure 4", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "3.3" }, { "text": "We evaluated the nature of the synthesized singing voice with a long duration model. Figure 5 shows the system with the long duration model has 62% preference, which is higher than 38% for the system without the long duration model. This shows that the long duration model can improve the nature of phones with long duration. Therefore, all of the evaluated systems use the long duration model in the following tests.", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 93, "text": "Figure 5", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Preference Test", "sec_num": "3.3.2" }, { "text": "In addition, we evaluated the nature of the synthesized singing voice with vibrato. The preference result is shown in Figure 6 . The subjects only slightly preferred the synthesized singing voice with vibrato over that without vibrato. The main reasons are that two combinations of parameter settings are insufficient and that different pitches and situations must correspond to different combinations of vibrato parameters. Moreover, vibrato is not essential in children' songs. Subjects preferred simple over skillful singing styles in these kinds of songs. ", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 126, "text": "Figure 6", "ref_id": "FIGREF15" } ], "eq_spans": [], "section": "Figure 5. 
Result of preference test with long duration model", "sec_num": null }, { "text": "The MOS of quality in the four evaluation settings is shown in Figure 7 , and the MOS of intelligibility is shown in Figure 8 .", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 67, "text": "Figure 7", "ref_id": "FIGREF3" }, { "start": 113, "end": 121, "text": "Figure 8", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Mean Opinion Scores (MOS)", "sec_num": "3.3.3" }, { "text": "The results show that the baseline singing voice system has the lowest MOS, because the training data is insufficient for clustering with a large number of questions and because some sub-syllables are not covered at some pitch frequencies. After question modification, the MOS is 2.43 in quality and 2.74 in intelligibility, both higher than those of the baseline system. The PS model has an MOS of 2.73 in quality and 2.85 in intelligibility, also higher than those of the baseline system, which shows that adding pitch-shift pseudo data is a useful refinement. Finally, the MOS of the QM+PS model is 2.95 in quality and 3.05 in intelligibility. These scores are higher than those of the PS model with the modified question set and of the QM model with pitch-shift pseudo data. According to the results, we can conclude that, although the full question set takes all of the contextual factors into account, some contextual information might not be present in the corpus, which can cause poor clustering results. By tailoring the question set appropriately, the system can improve the quality and intelligibility of the synthesized singing voice. In addition, adding pitch-shift pseudo data can also improve the quality of the synthesized singing voice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7. MOS of the synthesized singing voice in quality Figure 8. 
MOS of the synthesized singing voice in intelligibility", "sec_num": null }, { "text": "In this paper, a corpus-based Mandarin singing voice synthesis system based on hidden Markov models (HMMs) was implemented. We defined the Mandarin phone models and the question set for model clustering; both linguistic and musical information are modeled in the context-dependent HMMs. Furthermore, three methods were employed to refine the constructed system: question set modification, pitch-shift pseudo data, and vibrato creation. Experimental results show that the proposed system can synthesize a satisfactory singing voice. The performance of a corpus-based synthesis system is highly dependent on the training corpus, and the quality of the corpus directly affects the synthesized voice quality. The recording environment should be professional and silent, such as an anechoic chamber or a room with sound-absorbing equipment. Furthermore, the training corpus should be as large as possible so as to cover all contextual factors. Although our singing database was designed for high phonetic coverage and enhanced with pseudo data for better pitch coverage, some factors were still not covered, such as duration coverage and higher-level information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "4." }, { "text": "In addition, a more accurate model is essential for synthesizing a better singing voice. Clustering questions should be categorized and prioritized, since some factors are more important than others for singing characteristics, and the construction of the clustering decision trees should be guided by these priorities to obtain a more accurate model. The singer's timbre and pronunciation are also important factors that affect the synthesized singing voice quality. 
A singer's nasal tone can cause the loss of acoustic information when syllables are sung at higher pitches, and unclear utterances can make the synthesized singing voice unintelligible. For further improvement, these problems should be carefully considered in order to generate better synthesized singing voices. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "4." }, { "text": "This research focuses on validating a Taiwanese speech corpus by using speech recognition and assessment to automatically find potentially problematic utterances. There are three main stages in this work: acoustic model training, speech assessment and error labeling, and performance evaluation. In the acoustic model training stage, we use ForSD (Formosa Speech Database), provided by Chang Gung University (CGU), to train hidden Markov models (HMMs) as the acoustic models. Monophone, biphone (right-context-dependent), and triphone HMMs are tested, with a recognition net based on free syllable decoding. The best syllable accuracies of these three types of HMMs are 27.20%, 43.28%, and 45.93%, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "In the speech assessment and error labeling stage, we use the trained triphone HMMs to assess the unvalidated parts of the dataset. We then split the data into low-scored, mid-scored, and high-scored subsets using different thresholds. For each utterance in the low-scored subset, we identify and label the possible cause of its low score. 
We then extract features from these low-scored utterances and train an SVM classifier to further decide whether each of them should be removed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "In the performance evaluation stage, we evaluate the effectiveness of finding problematic utterances by using two subsets of ForSD, TW01 and TW02, as the training data together with one of the following: the entire unprocessed dataset, the mid-scored and high-scored subsets, or the high-scored subset only. We use these three types of joint datasets to train HMMs and to evaluate performance. The syllable accuracies of these three types of HMMs are 40.22%, 41.21%, and 44.35%, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "From these results, the disparity in syllable accuracy between the HMMs trained on the unprocessed and the processed datasets can reach 4.13%, which shows that the processed dataset is less problematic than the unprocessed one. Speech assessment can thus be used to find potentially problematic utterances automatically. 
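The threshold-based splitting described in the abstract can be sketched as follows. This is a minimal sketch; the data layout (a list of id/score pairs) and the threshold values are illustrative assumptions, not the paper's actual settings:

```python
def split_by_score(scored_utts, low_thr, high_thr):
    """Partition (utterance_id, score) pairs into low-, mid-, and
    high-scored subsets using two assessment-score thresholds."""
    low = [u for u, s in scored_utts if s < low_thr]
    mid = [u for u, s in scored_utts if low_thr <= s < high_thr]
    high = [u for u, s in scored_utts if s >= high_thr]
    return low, mid, high
```

The low-scored subset is the one passed on to error labeling and the SVM-based removal decision; the mid- and high-scored subsets are reused directly as training data in the evaluation stage.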
/ru-ger-lan-e-sen-hen-bher-e-ue, which contains 9 syllables in total, whereas the sentence obtained after syllable segmentation is ger-lan-e-sen-hen-bher-e-ue, with 8 syllables; therefore, if the score before adjustment is 80, the adjusted score is 80×8/9 = 71. et al., 1988; Valbret et al., 1992; Stylianou et al., 1998) . Voice conversion can be applied in concatenative speech synthesis to obtain synthesized speech with diverse timbres. Last year, we tried using linear multivariate regression (LMR) to build a spectral mapping mechanism (Gu et al., 2012) and applied it to voice conversion, hoping to improve on the conventional Gaussian", "cite_spans": [ { "start": 469, "end": 482, "text": "et al., 1988;", "ref_id": null }, { "start": 483, "end": 504, "text": "Valbret et al., 1992;", "ref_id": "BIBREF76" }, { "start": 505, "end": 528, "text": "Stylianou et al., 1998)", "ref_id": "BIBREF73" } ], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "-mixture-model (Gaussian mixture model, GMM) based spectral mapping (Stylianou et al., 1998) , which often suffers from over-smoothing of the converted spectral envelope. Our experiments showed that the segmental LMR spectral mapping mechanism not only achieves some improvement over the conventional GMM mapping in average conversion error, but also gives slightly better converted speech quality. Overall, however, the spectral envelopes converted by the segmental LMR mapping still exhibit over-smoothing, so the converted speech still sounds somewhat muffled rather than as clear as real human speech. \"Segmental\" LMR means that a separate LMR matrix is built for each of the different finals and voiced initials (such as /m, n, l, r/) in the training corpus, in order to avoid the one-to-many mapping problem (Godoy et al., 2009) (Stylianou, 1996; Gu & Tsai, 2009) (Torre et al., 2005; Lin et al., 2007) (Toda et al., 2007) , and frequency-warping methods (Erro et al., 2010; Godoy et al., 2012 ), but the method of Toda et al. (Toda et al., 2007) (Gu & Tsai, 2009; Stylianou, 1996) , after which these parameters can be used to synthesize the speech signal (Gu & Tsai, 2009; Stylianou, 1996) (Kim et al., 1998) , to detect the pitch frequencies of the remaining frames. Afterwards, the pitch frequencies detected from the voiced frames of a speaker's utterances are collected to compute the mean and standard deviation of that speaker's pitch; this mean and standard deviation are the pitch parameters used in this paper. Here the frame length is set to 512 samples (23.2 ms) and the frame shift to 128 samples (5.8 ms). In addition, for the spectral coefficients of a frame, we use the previously developed DCC estimation program (Gu & Tsai, 2009)", "cite_spans": [ { "start": 42, "end": 66, "text": "(Stylianou et al., 1998)", "ref_id": "BIBREF73" }, { "start": 410, "end": 430, "text": "(Godoy et al., 2009)", "ref_id": "BIBREF64" }, { "start": 431, "end": 448, "text": "(Stylianou, 1996;", "ref_id": "BIBREF72" }, { "start": 449, "end": 465, "text": "Gu & Tsai, 2009)", "ref_id": "BIBREF66" }, { "start": 466, "end": 486, "text": "(Torre et al., 2005;", "ref_id": "BIBREF75" }, { "start": 487, "end": 504, "text": "Lin et al., 2007)", "ref_id": "BIBREF71" }, { "start": 505, "end": 524, "text": "(Toda et al., 2007)", "ref_id": "BIBREF74" }, { "start": 556, "end": 575, "text": "(Erro et al., 2010;", "ref_id": null }, { "start": 576, "end": 594, "text": "Godoy et al., 2012", "ref_id": "BIBREF65" }, { "start": 612, "end": 631, "text": "(Toda et al., 2007)", "ref_id": "BIBREF74" }, { "start": 632, "end": 649, "text": "(Gu & Tsai, 2009;", "ref_id": "BIBREF66" }, { "start": 650, "end": 666, "text": "Stylianou, 1996)", "ref_id": "BIBREF72" }, { "start": 687, "end": 704, "text": "(Gu & Tsai, 2009;", "ref_id": "BIBREF66" }, { "start": 705, "end": 721, "text": "Stylianou, 1996)", "ref_id": "BIBREF72" }, { "start": 722, "end": 740, "text": "(Kim et al., 1998)", "ref_id": "BIBREF70" }, { "start": 924, "end": 941, "text": "(Gu & Tsai, 2009)", "ref_id": "BIBREF66" } ], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(a) Suppose the training speech of a segment class can be cut into M frames, and each frame yields one DCC coefficient vector. Collecting the DCC vectors of all frames as columns gives an L\u00d7M matrix $\\Gamma = [\\Gamma_1, \\Gamma_2, \\ldots, \\Gamma_M]$, where L is the order of the DCC coefficients and M > L. (b) Next, compute the mean vector $\\Psi$ of these M DCC vectors; $\\Psi$ represents the DCC components common to the M frames. (c) Standardize the DCC vector of the i-th frame by subtracting the mean vector $\\Psi$, giving a difference vector $\\Phi_i$. (d) Use all difference vectors $\\Phi_i$ to compute a covariance matrix $\\Lambda = \\sum_{i=1}^{M} \\Phi_i \\Phi_i^{T}$ (1). (e) Compute the eigenvalues $\\lambda_i$ and eigenvectors $\\gamma_i$ of $\\Lambda$: $\\Lambda \\cdot \\gamma_i = \\lambda_i \\cdot \\gamma_i$, $i = 1, 2, \\ldots, L$ (2). (f) After the eigenvectors $\\gamma_i$ are obtained, normalize each $\\gamma_i$ to obtain the L principal-component basis vectors: $\\upsilon_i = \\sqrt{(\\gamma_{i1})^2 + (\\gamma_{i2})^2 + \\cdots + (\\gamma_{iL})^2}$, $\\mu_i = [\\gamma_{i1}/\\upsilon_i, \\gamma_{i2}/\\upsilon_i, \\ldots, \\gamma_{iL}/\\upsilon_i]^{T}$, $i = 1, 2, \\ldots, L$ (3). 2.1.2 Principal-component coefficient conversion: After PCA has been performed for a segment class, we obtain that class's mean DCC vector $\\Psi$ and L principal-component basis vectors $\\mu_i$. To convert the DCC coefficients of each frame into PCA coefficients, first subtract the mean vector $\\Psi$ from the frame's DCC vector $\\Gamma_i$ to get the difference vector $\\Phi_i$, then project $\\Phi_i$ onto each basis vector $\\mu_j$: $\\omega_{ij} = \\mu_j^{T} \\cdot \\Phi_i$, $j = 1, 2, \\ldots, L$ (4). This yields the L principal-component coefficients (also called PCA coefficients) of the DCC vector $\\Gamma_i$, which form the L-dimensional PCA coefficient vector $\\Omega_i = [\\omega_{i1}, \\omega_{i2}, \\ldots, \\omega_{iL}]^{T}$", "eq_num": "(5)" } ], "section": "Abstract", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A DCC vector can be reconstructed from its PCA coefficients by $\\Gamma_i = \\Psi + \\sum_{j=1}^{L} \\omega_{ij} \\cdot \\mu_j$", "eq_num": "(6)" } ], "section": "Abstract", "sec_num": null }, { "text": "(a) Let the number of intervals be N, and for each dimension i, i = 1, 2, \u2026, L, perform the following steps. (b) Take all the PCA coefficients of dimension i from the M frames and sort them in ascending order; then assign the M sorted coefficients, in order and evenly, to the N intervals. (c) For interval index j from 1 to N, pick the median PCA coefficient within the j-th interval, record its value as $Fp_i^j$, and record its corresponding CDF value as $Fc_i^j$, where the CDF value is the rank of that PCA coefficient among all M coefficients divided by M. (d) Record the maximum PCA coefficient of dimension i as $Fp_i^{N+1}$ with corresponding CDF value $Fc_i^{N+1} = 1$; in addition, record the minimum PCA coefficient of dimension i as $Fp_i^0$ with corresponding CDF value $Fc_i^0 = 1/M$. When these steps have been completed for all dimensions, the HEQ table of this segment class is obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "
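The PCA construction of steps (a)-(f) and the coefficient conversions of Eqs. (4)-(6) above can be sketched as follows. This is a minimal NumPy sketch with function names of our own choosing; note that `np.linalg.eigh` already returns unit-norm eigenvectors, so the explicit normalization of Eq. (3) is implicit:

```python
import numpy as np

def pca_basis(dcc_frames):
    """Steps (a)-(f): mean vector (Psi) and orthonormal principal bases
    from an L x M matrix of DCC vectors (one frame per column)."""
    mean = dcc_frames.mean(axis=1, keepdims=True)   # Psi, step (b)
    diffs = dcc_frames - mean                       # Phi_i, step (c)
    cov = diffs @ diffs.T                           # Lambda, Eq. (1)
    eigvals, eigvecs = np.linalg.eigh(cov)          # Eq. (2)
    order = np.argsort(eigvals)[::-1]               # largest eigenvalue first
    basis = eigvecs[:, order]                       # unit-norm columns, Eq. (3)
    return mean, basis

def to_pca(frame, mean, basis):
    """Eqs. (4)-(5): project one DCC vector onto the principal bases."""
    return basis.T @ (frame - mean.ravel())

def from_pca(coeffs, mean, basis):
    """Eq. (6): reconstruct the DCC vector from its PCA coefficients."""
    return mean.ravel() + basis @ coeffs
```

With all L components retained, `from_pca(to_pca(x, ...), ...)` reproduces the original DCC vector exactly; truncating the coefficient vector gives the usual low-dimensional approximation.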
j i i j j j i i i i j j i i P Fp Q Fc Fc Fc i L Fp Fp \uf02b \uf02b \uf0e9 \uf0f9 \uf02d \uf0ea \uf0fa \uf03d \uf02b \uf02d \uf0d7 \uf03d \uf0ea \uf0fa \uf02d \uf0ea \uf0fa \uf0eb \uf0fb \uf04b (7) \u516c\u5f0f(7)\u4e2d i \u8868\u793a\u7dad\u5ea6\u7de8\u865f\uff0c j i Fp \u3001 j i Fc \u5206\u5225\u70ba HEQ \u8868\u683c\u88e1\u6240\u8a18\u9304\u7684\u7b2c j \u5340\u9593\u7684 PCA \u4fc2\u6578 \u503c\u3001CDF \u503c\uff0c\u4e26\u4e14\u5047\u8a2d\u6211\u5011\u5df2\u4f5c\u904e\u641c\u5c0b\u800c\u5f97\u77e5 i P \u7684\u503c\u843d\u65bc j i Fp \u8207 1 j i Fp \uf02b \u4e4b\u9593\u3002 2.2.3 CDF\u53cd\u8f49\u63db \u5047\u8a2d\u6709\u4e00\u500b\u97f3\u6846\u7684 CDF \u5411\u91cf \uf05b \uf05d 1 2 , , , L Q Q Q Q \uf03d \uf04c \u8981\u88ab\u53cd\u8f49\u63db\u6210 PCA \u4fc2\u6578\u5411\u91cf\uff0c\u800c\u8a72\u97f3\u6846 \u6240\u5c6c\u7684\u97f3\u6bb5\u985e\u5225\u8cc7\u8a0a\uff0c\u5df2\u7d93\u5728\u5716 2 \u7684\"\u97f3\u6bb5\u5075\u6e2c\"\u65b9\u584a\u6c7a\u5b9a\u51fa\u4f86\uff0c\u6240\u4ee5\u6211\u5011\u53ef\u4ee5\u53d6\u51fa\u8a72 \u97f3\u6bb5\u985e\u5225\u7684\u76ee\u6a19\u97f3\u6846\u6240\u8a13\u7df4\u51fa\u7684 HEQ \u8868\u683c\uff0c\u7136\u5f8c\u4ee5\u7dda\u6027\u5167\u63d2\u7684\u65b9\u5f0f\u4f86\u8a08\u7b97\u51fa\u8a72\u97f3\u6846\u7684 PCA \u4fc2\u6578\u5411\u91cf \uf05b \uf05d 1 2 , , , L P P P P \uf03d \uf04c \uff0c\u7dda\u6027\u5167\u63d2\u4e4b\u516c\u5f0f\u5982\u4e0b\uff1a \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 1 1 , 1,2, . 
j i i j j j i i i i j j i i Q Fc P Fp Fp Fp i L Fc Fc \uf02b \uf02b \uf0e9 \uf0f9 \uf02d \uf0ea \uf0fa \uf03d \uf02b \uf02d \uf0d7 \uf03d \uf0ea \uf0fa \uf02d \uf0ea \uf0fa \uf0eb \uf0fb \uf04b (8) \u516c\u5f0f(8)\u4e2d i \u8868\u793a\u7dad\u5ea6\u7de8\u865f\uff0c j i Fp \u3001 j i Fc \u5206\u5225\u70ba HEQ \u8868\u683c\u88e1\u6240\u8a18\u9304\u7684\u7b2c j \u5340\u9593\u7684 PCA \u4fc2\u6578 \u503c\u3001CDF \u503c\uff0c\u4e26\u4e14\u5047\u8a2d\u6211\u5011\u5df2\u4f5c\u904e\u641c\u5c0b\u800c\u5f97\u77e5 i Q \u7684\u503c\u843d\u65bc j i Fc \u8207 1 j i Fc \uf02b \u4e4b\u9593\u3002 3. \u76ee\u6a19\u97f3\u6846\u6311\u9078 \u5728\u8a13\u7df4\u968e\u6bb5\uff0c\u6211\u5011\u53ef\u9810\u5148\u628a\u76ee\u6a19\u8a9e\u8005\u7684\u8a13\u7df4\u8a9e\u97f3\u4f9d\u64da\u6a19\u793a\u6a94\u7684\u8cc7\u8a0a\u62ff\u53bb\u4f5c\u97f3\u6bb5\u5206\u985e\uff0c\u4e26 \u4e14\u5c0d\u5404\u7a2e\u97f3\u6bb5\u5206\u5225\u4f5c\u97f3\u6846\u7684\u6536\u96c6\uff0c\u4e4b\u5f8c\u5728\u8f49\u63db\u968e\u6bb5\uff0c\u5c31\u53ef\u4f9d\u64da\u6240\u5075\u6e2c\u51fa\u7684\u97f3\u6bb5\u4ee3\u865f\u53bb\u53d6 \u51fa\u5c0d\u61c9\u7684\u97f3\u6846\u96c6\uff0c\u518d\u4f9d\u64da\u6240\u8f49\u63db\u51fa\u7684 DCC \u5411\u91cf\u53bb\u4f5c\u771f\u5be6\u97f3\u6846\u7684\u641c\u5c0b\u8207\u6311\u9078\u3002 \u4ee4 Y 1 , Y 2 , \u2026, Y T \u662f\u4e00\u5e8f\u5217 T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "\u4f86\u8a08\u7b97\u51fa 41 \u7dad\u7684 DCC \u4fc2\u6578\u3002 \u5728\u8a13\u7df4 LMR \u5c0d\u6620\u77e9\u9663\u4e4b\u524d\uff0c\u6211\u5011\u9010\u4e00\u5c0d\u5404\u500b\u8072\u3001\u97fb\u6bcd\u985e\u5225\u6240\u6536\u96c6\u7684\u5e73\u884c\u767c\u97f3\u97f3\u6bb5 \u4f5c DTW \u5339\u914d\uff0c\u4ee5\u4fbf\u70ba\u4f86\u6e90\u8a9e\u8005\u97f3\u6bb5\u6240\u5207\u51fa\u7684\u5404\u500b\u97f3\u6846\uff0c\u53bb\u76ee\u6a19\u8a9e\u8005\u4e4b\u5e73\u884c\u97f3\u6bb5\u5167\u627e\u51fa 
\u6b63\u78ba\u7684\u97f3\u6846\u4f86\u5c0d\u61c9\u3002\u7136\u5f8c\uff0c\u628a\u5404\u500b\u5e73\u884c\u97f3\u6bb5\u7684\u97f3\u6846\u5e8f\u5217\u4e32\u63a5\u8d77\u4f86\uff0c\u5c31\u53ef\u70ba\u4e00\u500b\u8072\u3001\u97fb\u6bcd \u985e\u5225\u6e96\u5099\u597d\u4e00\u5e8f\u5217\u7684\u4f86\u6e90\u97f3\u6846\u548c\u76ee\u6a19\u97f3\u6846\u7684 DCC \u5411\u91cf\u5c0d\u61c9\u7d44\u5408\uff0c(S i , R i )\uff0ci=1, 2, \u2026, Nr\uff0c \u5176\u4e2d S i \u8868\u793a\u7b2c i \u500b\u4f86\u6e90\u97f3\u6846\u7684 DCC \u5411\u91cf\uff0cR i \u8868\u793a\u7b2c i \u500b\u7d93 DTW \u914d\u5c0d\u5230\u7684\u76ee\u6a19\u97f3\u6846\u7684 DCC \u5411\u91cf\uff0cNr \u8868\u793a\u6b64\u4e00\u5e8f\u5217\u7684\u97f3\u6846\u7e3d\u6578\u3002\u518d\u4f86\uff0c\u4f9d\u7167\u6240\u5efa\u69cb\u7cfb\u7d71\u7684\u7d50\u69cb\uff0c\u82e5\u662f\u5982\u5716 3 \u7684\u6d41\u7a0b\uff0c \u5247\u5404\u500b\u8072\u3001\u97fb\u6bcd\u985e\u5225\u7684\u4e00\u5e8f\u5217\u7684\u4f86\u6e90\u8207\u76ee\u6a19\u97f3\u6846\u5c0d\u61c9\u7684 DCC \u5411\u91cf\u7d44\u5408\uff0c\u5c31\u53ef\u76f4\u63a5\u62ff\u53bb\u8a13 \u7df4\u8a08\u7b97 LMR \u5c0d\u6620\u6240\u9700\u7684\u5c0d\u6620\u77e9\u9663(\u53e4\u9d3b\u708e\u7b49\uff0c2012)\uff1b\u7136\u800c\u7576\u7cfb\u7d71\u7684\u7d50\u69cb\u662f\u5982\u5716 2 \u6240\u793a\u7684 \u6d41\u7a0b\u6642\uff0c\u5247\u5404\u500b\u8072\u3001\u97fb\u6bcd\u985e\u5225\u7684 DCC \u5411\u91cf\u7d44\u5408\u5e8f\u5217\uff0c(S i , R i )\uff0ci=1, 2, \u2026, Nr\uff0c\u5176\u4e2d\u5404\u500b \u7d44\u5408\u7684 S i \u8207 R i \u5c31\u5fc5\u9808\u5148\u4f5c PCA \u4fc2\u6578\u8f49\u63db\u548c CDF \u4fc2\u6578\u8f49\u63db\uff0c\u4ee5\u5f62\u6210 CDF \u4fc2\u6578\u7684\u5411\u91cf\u7d44 \u5408\uff0c\u7136\u5f8c\u624d\u62ff\u53bb\u8a13\u7df4 LMR \u5c0d\u6620\u4e4b\u77e9\u9663\u3002 \u8a2d S \uf025 \u3001 R \uf025 \u77e9\u9663\u7684\u5b9a\u7fa9\u5982\u4e0b\u6240\u5217\uff0c 1 2 1 2 , , 1, 1, 1 1, 1, 1 Nr Nr S S S R R R S R \uf0e9 \uf0f9 \uf0e9 \uf0f9 \uf03d \uf03d \uf0ea \uf0fa \uf0ea \uf0fa \uf0eb 
\uf0fb \uf0eb \uf0fb \uf04b \uf04b \uf04b \uf04b \uf025 \uf025 (11) \u5176\u4e2d\u5404\u884c\u7684 S i \u8207 R i \u90fd\u88ab\u9644\u52a0\u4e00\u5217\u7684\u5e38\u6578 1\uff0c\u4ee5\u589e\u52a0\u4e00\u500b\u5e38\u6578\u9805\u81f3\u591a\u8b8a\u91cf\u7dda\u6027\u8ff4\u6b78\u7684\u5404\u500b \u7dad\u5ea6\u88e1\uff0c\u5982\u6b64\uff0cLMR \u5c0d\u6620\u6240\u9700\u7684\u6700\u4f73(least squared error)\u5c0d\u6620\u77e9\u9663 M \uf025 \uff0c\u5c31\u53ef\u4ee5\u4e0b\u5217\u516c\u5f0f(\u53e4 \u9d3b\u708e\u7b49\uff0c2012)\u4f86\u6c42\u5f97\uff0c t t 1 ( ) . M R S S S \uf02d \uf03d \uf0d7 \uf0d7 \uf0d7 \uf025 \uf025 \uf025 \uf025 \uf025 (12) \u7136\u5f8c\uff0c\u6211\u5011\u5c31\u53ef\u7528\u77e9\u9663 M \uf025 \u4f86\u4f5c LMR \u5c0d\u6620\uff0c\u5373\u4ee4[Y t , 1] t = M \uf025 \uff0e[X t , 1] t \uff0c\u5176\u4e2d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "Magnitude (dB) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "Target ConvLMR RealFrm \u53e4\u9d3b\u708e\u3001\u5f35\u5bb6\u7dad \u8f49\u63db\u5f8c\u97f3\u6846\u8207\u76ee\u6a19\u97f3\u6846\u7684\u983b\u8b5c\u4fc2\u6578\u4e4b\u9593\uff0c\u8aa4\u5dee\u8ddd\u96e2\u5e73\u5747\u503c\u7684\u5927\u5c0f\u4e26\u4e0d\u80fd\u5920\u4ee3\u8868\u8a9e\u97f3 \u54c1\u8cea\u7684\u597d\u58de\uff0c\u9019\u6a23\u7684\u60c5\u5f62\u5728\u524d\u4eba\u7684\u7814\u7a76\u4e2d\u5df2\u7d93\u6ce8\u610f\u5230\u4e86\uff0c\u6240\u4ee5 Godoy \u7b49\u4eba(Godoy et al., 2012) \u63a1\u7528\u4ee5\u8b8a\u7570\u6578\u6bd4\u503c(variance ratio, VR)\u4f86\u91cf\u6e2c\u8f49\u63db\u5f8c\u8a9e\u97f3\u7684\u54c1\u8cea\uff0c\u8b8a\u7570\u6578\u6bd4\u503c\u7684\u91cf\u6e2c \u516c\u5f0f\u70ba: \u53e6\u4e00\u7a2e\u6539\u9032\u8a9e\u97f3\u54c1\u8cea\u7684\u65b9\u6cd5\u662f\uff0c\u5728\u5716 1 \u6d41\u7a0b\u7684 LMR \u5c0d\u6620\u8207 HNM \u8a9e\u97f3\u518d\u5408\u6210\u4e4b\u9593\u63d2 
\u5165\"\u76ee\u6a19\u97f3\u6846\u6311\u9078\"\u4e4b\u8655\u7406\uff0c\u96d6\u7136\u8a9e\u97f3\u8f49\u63db\u7684\u5e73\u5747\u8aa4\u5dee\u8ddd\u96e2\u6703\u7531 0.5382 \u8b8a\u5927\u6210\u70ba 0.6029\uff0c \u4f46\u662f\u5ba2\u89c0 VR \u503c\u7684\u91cf\u6e2c\u53ca\u4e3b\u89c0\u807d\u6e2c\u5be6\u9a57\u7684\u7d50\u679c\u90fd\u986f\u793a\uff0c\u8f49\u63db\u51fa\u8a9e\u97f3\u7684\u54c1\u8cea\u78ba\u5be6\u662f\u660e\u986f\u5730 \u63d0\u5347\u4e86\uff0c\u4e0d\u8ad6 LMR \u983b\u8b5c\u5c0d\u6620\u65b9\u584a\u4e4b\u524d\u6709\u5426\u4f5c\u904e\u76f4\u65b9\u5716\u7b49\u5316\u7684\u8655\u7406\uff0c\u6240\u4ee5\"\u76ee\u6a19\u97f3\u6846\u6311 \u9078\"\u6bd4\u8d77\"\u76f4\u65b9\u5716\u7b49\u5316\"\uff0c\u5c0d\u65bc\u8f49\u63db\u51fa\u8a9e\u97f3\u4e4b\u54c1\u8cea\u63d0\u5347\u66f4\u70ba\u6709\u529f\u6548\uff0c\u4e26\u4e14 VR \u503c\u5927\u9ad4\u4e0a \u53ef\u53cd\u61c9\u51fa\u8a9e\u97f3\u7684\u54c1\u8cea\u3002\u53e6\u5916\uff0c\u5c0d\u65bc\u5e73\u5747\u8aa4\u5dee\u8ddd\u96e2\u6108\u5927\u53cd\u800c\u5f97\u5230\u6108\u597d\u7684\u8a9e\u97f3\u54c1\u8cea\uff0c\u9019\u7a2e\u4e0d \u4e00\u81f4\u6027\u7684\u60c5\u6cc1\uff0c\u6211\u5011\u89c0\u5bdf\u4e00\u4e9b\u97f3\u6846\u7684\u983b\u8b5c\u5305\u7d61\u66f2\u7dda\u5f8c\u767c\u73fe\uff0c\u8f49\u63db\u51fa\u4e4b\u8a9e\u97f3\u807d\u8d77\u4f86\u6bd4\u8f03\u6a21 \u7cca\u8005\uff0c\u901a\u5e38\u5176\u983b\u8b5c\u5305\u7d61\u5728 2,500 Hz \u81f3 4,500 Hz \u4e4b\u983b\u7387\u7bc4\u570d\uff0c\u6703\u986f\u73fe\u904e\u5ea6\u5e73\u6ed1\u7684\u60c5\u5f62\uff0c\u4e26 \u4e14\u6bd4\u8d77\u6e05\u6670\u8005\u8f03\u70ba\u9060\u96e2\u76ee\u6a19\u983b\u8b5c\u5305\u7d61\u66f2\u7dda\uff1b\u7136\u800c\u5728 5,000 Hz \u4e4b\u5f8c\u7684\u983b\u7387\u7bc4\u570d\uff0c\u96d6\u7136\u6a21\u7cca \u8005\u7684\u983b\u8b5c\u5305\u7d61\u4e5f\u662f\u986f\u73fe\u904e\u5ea6\u5e73\u6ed1\u7684\u60c5\u5f62\uff0c\u4f46\u662f\u6bd4\u8d77\u6e05\u6670\u8005\u537b\u8f03\u70ba\u63a5\u8fd1\u76ee\u6a19\u983b\u8b5c\u5305\u7d61\u66f2\u7dda\uff0c 
\u6240\u4ee5\u6703\u8a08\u7b97\u51fa\u6bd4\u8f03\u5c0f\u7684\u8aa4\u5dee\u8ddd\u96e2\u3002 \u81f4\u8b1d \u611f\u8b1d\u570b\u79d1\u6703\u8a08\u756b\u4e4b\u7d93\u8cbb\u652f\u63f4\uff0c\u570b\u79d1\u6703\u8a08\u756b\u7de8\u865f NSC 101-2221-E-011-144\u3002 \u53c3\u8003\u6587\u737b Abe, M., Nakamura, S., Shikano, K., & Kuwabara, H. (1988) . Voice Conversion through Vector Quantization. Int. Conf. Acoustics, Speech, and Signal Processing, 1, 655-658. Capp\u00e9, O., & Moulines, E. (1996) . Regularization Techniques for Discrete Cepstrum Estimation. IEEE Signal Processing Letters, 3 (4) , 100-102. Dutoit, T., Holzapfel, A., Jottrand, M., Moinet, A., Perez, J., & Stylianou, Y. (2007) . Towards a Voice Conversion System Based on Frame Selection. Int. Conf. Acoustics, Speech, and signal Processing, Honolulu, Hawaii, 513-516. Erro, D., Moreno, A., & Bonafonte, A. (2010) . Voice Conversion Based on Weighted Frequency Warping. IEEE trans. Audio, Speech, and Language Processing, 18, [922] [923] [924] [925] [926] [927] [928] [929] [930] [931] \u8303\u9865\u9a30 \u7b49 series, in which the linear prediction error component is removed, reveals more noise-robust than the original one, probably because the prediction error portion corresponding to the noise effect is alleviated accordingly. Experiments conducted on the Aurora-2 connected digit database shows that the presented approach can enhance the noise robustness of various types of features in terms of significant improvement in recognition performance under a wide range of noise environments. 
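The noise-robustness idea in the abstract above, keeping the linearly predictable component of a feature time series and removing the prediction-error component, can be sketched as follows. This is an illustrative sketch under our own simplifying assumptions (a per-trajectory least-squares predictor); the paper's actual temporal-filtering details are not given in this fragment:

```python
import numpy as np

def lp_smooth(series, order=2):
    """Fit a low-order linear predictor to one feature trajectory and
    return the predicted component, i.e. the trajectory with its
    linear prediction error removed."""
    x = np.asarray(series, dtype=float)
    # regression design: x[t] ~ a1*x[t-1] + ... + a_p*x[t-p]
    rows = np.array([x[t - order:t][::-1] for t in range(order, len(x))])
    target = x[order:]
    coeffs, *_ = np.linalg.lstsq(rows, target, rcond=None)
    predicted = rows @ coeffs
    # keep the first `order` samples unchanged, replace the rest
    return np.concatenate([x[:order], predicted])
```

A low prediction order keeps the fit cheap, which is consistent with the abstract's remark that a low order already suffices for promising performance.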
Furthermore, a low order of linear prediction for the presented method suffices to give promising performance, which implies this method can be implemented in a quite efficient manner.", "cite_spans": [ { "start": 68, "end": 124, "text": "\u54c1\u8cea\u7684\u597d\u58de\uff0c\u9019\u6a23\u7684\u60c5\u5f62\u5728\u524d\u4eba\u7684\u7814\u7a76\u4e2d\u5df2\u7d93\u6ce8\u610f\u5230\u4e86\uff0c\u6240\u4ee5 Godoy \u7b49\u4eba(Godoy et al., 2012)", "ref_id": null }, { "start": 701, "end": 735, "text": "Shikano, K., & Kuwabara, H. (1988)", "ref_id": null }, { "start": 789, "end": 881, "text": "Conf. Acoustics, Speech, and Signal Processing, 1, 655-658. Capp\u00e9, O., & Moulines, E. (1996)", "ref_id": null }, { "start": 978, "end": 981, "text": "(4)", "ref_id": "BIBREF120" }, { "start": 984, "end": 1079, "text": "100-102. Dutoit, T., Holzapfel, A., Jottrand, M., Moinet, A., Perez, J., & Stylianou, Y. (2007)", "ref_id": null }, { "start": 1147, "end": 1266, "text": "Conf. Acoustics, Speech, and signal Processing, Honolulu, Hawaii, 513-516. Erro, D., Moreno, A., & Bonafonte, A. 
(2010)", "ref_id": null }, { "start": 1335, "end": 1341, "text": "Audio,", "ref_id": null }, { "start": 1342, "end": 1349, "text": "Speech,", "ref_id": null }, { "start": 1350, "end": 1374, "text": "and Language Processing,", "ref_id": null }, { "start": 1375, "end": 1378, "text": "18,", "ref_id": null }, { "start": 1379, "end": 1384, "text": "[922]", "ref_id": null }, { "start": 1385, "end": 1390, "text": "[923]", "ref_id": null }, { "start": 1391, "end": 1396, "text": "[924]", "ref_id": null }, { "start": 1397, "end": 1402, "text": "[925]", "ref_id": null }, { "start": 1403, "end": 1408, "text": "[926]", "ref_id": null }, { "start": 1409, "end": 1414, "text": "[927]", "ref_id": null }, { "start": 1415, "end": 1420, "text": "[928]", "ref_id": null }, { "start": 1421, "end": 1426, "text": "[929]", "ref_id": null }, { "start": 1427, "end": 1432, "text": "[930]", "ref_id": null }, { "start": 1433, "end": 1438, "text": "[931]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Frequency (Hz)", "sec_num": null }, { "text": "VR = (1/(C·L)) Σ_{i=1}^{C} Σ_{k=1}^{L} σ̂_i^k / σ_i^k (13), where C denotes the number of segment classes, L the dimensionality of the spectral feature vectors, σ̂_i^k the variance of the k-th spectral coefficient of class-i segments in the converted frames, and σ_i^k, for the target frames' class-i segments, the k-th", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Frequency (Hz)", "sec_num": null }, { "text": "Keywords: Noise Robustness, Speech Recognition, Linear Predictive Coding, Temporal Filtering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Frequency (Hz)", "sec_num": null }, { "text": "(1) Extraction of robust speech feature parameters (robust speech feature). 
These methods aim to extract speech feature parameters that are not easily distorted by external environmental interference, or to reduce as much as possible the effect of noise on the original speech features. A common strategy is to study the differing characteristics of the speech signal and the interfering noise, and to separate the two as far as possible by accentuating their differences. Such methods can be applied in different domains of the speech signal, with different effects in each, such as the familiar time domain, frequency domain, log-frequency domain, and cepstral time-series domain. Well-known methods include spectral subtraction (SS) (Boll, 1979) and Wiener filtering (WF) (Plapous et al., 2006) in the frequency domain, logarithmic spectral mean subtraction (LSMS) (Gelbart & Morgan, 2001) in the log-frequency domain and Stereo-based Piecewise Linear Compensation for Environments (SPLICE) (Deng et al., 2003), and cepstral mean subtraction (CMS) (Furui, 1981) (Tu et al., 2009) in the cepstral time-series domain. These methods combine almost additively with many previously proposed techniques, further strengthening the robustness of the speech features.", "cite_spans": [ {
"start": 244, "end": 256, "text": "(Boll, 1979)", "ref_id": "BIBREF79" }, { "start": 288, "end": 310, "text": "(Plapous et al., 2006)", "ref_id": "BIBREF95" }, { "start": 374, "end": 397, "text": "(Gelbart & Morgan, 2001", "ref_id": "BIBREF86" }, { "start": 500, "end": 519, "text": "(Deng et al., 2003)", "ref_id": "BIBREF81" }, { "start": 574, "end": 587, "text": "(Furui, 1981)", "ref_id": "BIBREF83" }, { "start": 588, "end": 605, "text": "(Tu et al., 2009)", "ref_id": "BIBREF97" } ], "ref_spans": [], "eq_spans": [], "section": "\u7dd2\u8ad6 \u672c\u8ad6\u6587\u662f\u63a2\u8a0e\u8207\u767c\u5c55\u964d\u4f4e\u5404\u7a2e\u5916\u5728\u74b0\u5883\u5b58\u5728\u4e4b\u96dc\u8a0a\u5e72\u64fe\u6240\u5c0d\u61c9\u7684\u5f37\u5065\u6027\u6f14\u7b97\u6cd5\u3002\u5728\u8fd1\u5e7e \u5341\u5e74\u4f86\uff0c\u7121\u6578\u7684\u5b78\u8005\u5148\u9032\u5c0d\u65bc\u6b64\u96dc\u8a0a\u5e72\u64fe\u554f\u984c\u63d0\u51fa\u4e86\u8c50\u5bcc\u773e\u591a\u7684\u6f14\u7b97\u6cd5\uff0c\u4e5f\u90fd\u53ef\u5c0d\u96dc\u8a0a \u74b0\u5883\u4e0b\u7684\u8a9e\u97f3\u8fa8\u8b58\u6548\u80fd\u6709\u6240\u6539\u9032\uff0c\u6211\u5011\u628a\u9019\u4e9b\u65b9\u6cd5\u7565\u5206\u6210\u5169\u5927\u7bc4\u7587\uff1a", "sec_num": "1." 
}, { "text": "(2) Speech model adaptation: this class of methods uses a small amount of application-environment speech data or noise to adjust the statistical parameters of the original speech models, reducing the mismatch between the models' training environment and the application environment. One of its features is that no robust processing, such as denoising, is required on the speech to be recognized or on its features. Well-known speech model adaptation techniques include maximum a posteriori adaptation (MAP) (Gauiain & Lee, 1994), parallel model combination (PMC) (Hung et al., 2001), the vector Taylor series transform (VTS) (Moreno et al., 1996), and maximum likelihood linear regression (MLLR) (Leggetter & Woodland, 1995).", "cite_spans": [ { "start": 200, "end": 221, "text": "(Gauiain & Lee, 1994)", "ref_id": "BIBREF85" }, { "start": 265, "end": 283, "text": "(Hung et al., 2001", "ref_id": "BIBREF92" }, { "start": 333, "end": 354, "text": "(Moreno et al., 1996)", "ref_id": "BIBREF94" }, { "start": 415, "end": 443, "text": "(Leggetter & Woodland, 1995)", "ref_id": "BIBREF93" } ], "ref_spans": [], "eq_spans": [], "section": "緒論 本論文是探討與發展降低各種外在環境存在之雜訊干擾所對應的強健性演算法。在近幾
十年來，無數的學者先進對於此雜訊干擾問題提出了豐富眾多的演算法，也都可對雜訊環境下的語音辨識效能有所改進，我們把這些方法略分成兩大範疇：", "sec_num": "1." }, { "text": "This paper concentrates on discussing and developing the first class of methods above. Briefly, we propose a robustness technique that operates in the cepstral time-series domain, termed linear-predictive-coding filtering (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "緒論 本論文是探討與發展降低各種外在環境存在之雜訊干擾所對應的強健性演算法。在近幾十年來，無數的學者先進對於此雜訊干擾問題提出了豐富眾多的演算法，也都可對雜訊環境下的語音辨識效能有所改進，我們把這些方法略分成兩大範疇：", "sec_num": "1."
}, { "text": "x[n] is the signal value at a particular time n; it is approximated by a linear (weighted) combination of the signal values at the P preceding time points n-1, n-2, ..., n-P, where the weight applied to each point (called a linear prediction coefficient) is denoted respectively (Hirsch & Pearce, 2000) speech database, which consists of a series of connected English digit strings read by adult American men and women. In the clean-condition training, multi-condition testing experimental framework we adopt, the acoustic models are trained on 8440 clean utterances, which, however, carry the channel effect of a G.712 channel. The test material comprises three subsets: the utterances of Sets A and B are mixed with additive noise, while Set C contains both additive and convolutional noise; Sets A and B each contain 28028 utterances, and Set C contains 14014 utterances. The additive noise types are subway, babble, car, exhibition, restaurant, street, airport, and train station, mixed at various signal-to-noise ratios (SNR): clean, 20 dB, 15
dB, 10 dB, 5 dB, 0 dB, and -5 dB. The channel effects follow two channel standards, G.712 and MIRS, specified by the International Telecommunication Union (ITU) (Hirsch & Pearce, 2000). We first convert the training and test utterances described above into mel-frequency cepstral coefficients (MFCC), which serve as the baseline features for the various robustness methods that follow; the MFCC features are constructed mainly according to the AURORA 2.0 (Hirsch & Pearce, 2000) ", "cite_spans": [ { "start": 98, "end": 121, "text": "(Hirsch & Pearce, 2000)", "ref_id": "BIBREF88" }, { "start": 690, "end": 712, "text": "(Hirsch & Pearce, 2000", "ref_id": "BIBREF88" }, { "start": 894, "end": 917, "text": "(Hirsch & Pearce, 2000)", "ref_id": "BIBREF88" } ], "ref_spans": [], "eq_spans": [], "section": "緒論 本論文是探討與發展降低各種外在環境存在之雜訊干擾所對應的強健性演算法。在近幾十年來，無數的學者先進對於此雜訊干擾問題提出了豐富眾多的演算法，也都可對雜訊環境下的語音辨識效能有所改進，我們把這些方法略分成兩大範疇：", "sec_num": "1."
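The Aurora-2 test sets described above are built by mixing clean utterances with additive noise at prescribed SNRs. A minimal numpy sketch of such SNR-controlled mixing is shown below; this is our own illustration (the `mix_at_snr` name and the synthetic signals are not from the paper, and the official corpus uses its own filtering and scaling tools):

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so that the clean-to-noise power ratio equals `snr_db`, then add it."""
    clean = np.asarray(clean, dtype=float)
    noise = np.asarray(noise, dtype=float)[:len(clean)]
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    # Choose the scale so that 10*log10(p_clean / p_noise_scaled) == snr_db
    scale = np.sqrt(p_clean / (p_noise * 10.0 ** (snr_db / 10.0)))
    return clean + scale * noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 0.01 * np.arange(8000))   # stand-in for a clean utterance
noise = rng.standard_normal(8000)                     # stand-in for a noise recording
noisy = mix_at_snr(clean, noise, 10.0)
snr = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
print(round(snr, 2))  # ≈ 10.0
```

The same routine covers every SNR level of the test sets (20 dB down to -5 dB) by changing `snr_db`.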
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a_1, a_2, ..., a_P, and e[n] is the error signal of the approximation; in other words, the signal x̂[n] below is the linearly predicted signal: x̂[n] = Σ_{k=1}^{P} a_k x[n-k] (2). P is called the order of the linear prediction. The difference between x[n] and x̂[n] in Eqs. (1) and (2) is the linear prediction error: e[n] = x[n] - x̂[n] (3). From the preceding description it is clear that linear prediction seeks to make the error between the original and the predicted signal as small as possible, and how closely the predicted signal approaches the original is determined precisely by the linear prediction coefficients {a_k}. In standard linear prediction theory, {a_k} is determined by minimizing the mean squared value of the error signal e[n]: E = (1/N) Σ_{n=0}^{N-1} e²[n] = (1/N) Σ_{n=0}^{N-1} (x[n] - Σ_{k=1}^{P} a_k x[n-k])² (4), where N is the number of signal points over which the error is computed. Taking the partial derivative of the above with respect to each linear prediction coefficient a_k and setting the result to 0 yields the optimal value of each a_k, as follows: ∂E/∂a_l = (1/N) Σ_{n=0}^{N-1} 2(x[n] - Σ_{k=1}^{P} a_k x[n-k])·(-x[n-l]) = 0 (5). Rearranging (5) gives: Σ_{k=1}^{P} a_k r_x[l-k] = r_x[l], l = 1, 2, ..., P (6), where r_x[l] is the autocorrelation of x[n], defined as: r_x[l] = Σ_n x[n] x[n-l]", "eq_num": "(7)" } ], "section": "緒論 本論文是探討與發展降低各種外在環境存在之雜訊干擾所對應的強健性演算法。在近幾十年來，無數的學者先進對於此雜訊干擾問題提出了豐富眾多的演算法，也都可對雜訊環境下的語音辨識效能有所改進，我們把這些方法略分成兩大範疇：", "sec_num": "1."
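The normal equations (6), with the autocorrelation (7), can be set up and solved directly with numpy. The sketch below is our own illustration (the function names and the AR(2) test signal are hypothetical, not from the paper); it verifies that solving Eq. (6) recovers the generating coefficients of a synthetic autoregressive signal:

```python
import numpy as np

def autocorr(x, max_lag):
    # r_x[l] = sum_n x[n] * x[n-l], Eq. (7)
    return np.array([np.dot(x[l:], x[:len(x) - l]) for l in range(max_lag + 1)])

def lpc_normal_equations(x, P):
    # Solve sum_k a_k r_x[l-k] = r_x[l] for l = 1..P, Eq. (6)
    r = autocorr(x, P)
    R = np.array([[r[abs(l - k)] for k in range(P)] for l in range(P)])  # Toeplitz matrix
    return np.linalg.solve(R, r[1:P + 1])

# Hypothetical example: AR(2) signal x[n] = 0.75 x[n-1] - 0.5 x[n-2] + w[n]
rng = np.random.default_rng(1)
x = np.zeros(20000)
w = rng.standard_normal(20000)
for n in range(2, len(x)):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + w[n]

a = lpc_normal_equations(x, 2)
print(np.round(a, 2))  # close to [0.75, -0.5]
```

With enough samples, the estimated coefficients converge to the generating ones, which is the sense in which x̂[n] is the best linear predictor of x[n].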
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Equation (6) can be written in matrix form: [r_x[0] r_x[1] ... r_x[P-1]; r_x[1] r_x[0] ... r_x[P-2]; ...; r_x[P-1] r_x[P-2] ... r_x[0]] [a_1; a_2; ...; a_P] = [r_x[1]; r_x[2]; ...; r_x[P]] (8), which can also be expressed as: R_x a = r_x,", "eq_num": "(9)" } ], "section": "緒論 本論文是探討與發展降低各種外在環境存在之雜訊干擾所對應的強健性演算法。在近幾十年來，無數的學者先進對於此雜訊干擾問題提出了豐富眾多的演算法，也都可對雜訊環境下的語音辨識效能有所改進，我們把這些方法略分成兩大範疇：", "sec_num": "1."
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "where R_x is the autocorrelation function matrix: R_x = [r_x[0] r_x[1] ... r_x[P-1]; r_x[1] r_x[0] ... r_x[P-2]; ...; r_x[P-1] r_x[P-2] ... r_x[0]], and a = [a_1 a_2 ... a_P]^T is the vector of the linear prediction coefficients a_k. According to the above, the linear prediction coefficient vector can be obtained directly as: a = R_x^{-1} r_x,", "eq_num": "(10)" } ], "section": "緒論 本論文是探討與發展降低各種外在環境存在之雜訊干擾所對應的強健性演算法。在近幾十年來，無數的學者先進對於此雜訊干擾問題提出了豐富眾多的演算法，也都可對雜訊環境下的語音辨識效能有所改進，我們把這些方法略分成兩大範疇：", "sec_num": "1."
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Because the above equation involves a matrix inverse, its complexity is relatively high; in practice, the special structure of the matrix R_x admits an efficient algorithm, the Levinson-Durbin recursion (王小川, 2004), for obtaining the linear prediction coefficient vector a. In general, the traditional way to obtain R_x^{-1} is Gaussian elimination, whose complexity is O(n³), where n is the number of unknowns; the Levinson-Durbin recursion instead exploits the structure of a Toeplitz matrix, whose diagonals are constant, to compute each linear prediction coefficient with complexity O(n²), which is faster and more efficient than Gaussian elimination. 2.2 Applying LPC in the feature time-series domain. The previous subsection explained the principle of LPC and its derivation in the time domain. In this section we introduce the main new method proposed in this paper: applying LPC analysis to the time series of speech features so as to obtain new speech feature sequences that are more noise-robust than the original ones. In the proposed method, with the original speech feature time series denoted x[n], the new feature time series is the linear prediction sequence x̂[n] obtained from the LPC analysis of x[n]; the concrete steps are as follows. Step 1: Perform order-P linear prediction on the original feature time series x[n], as in Eq. (2), and find the optimal linear prediction coefficients {a_k, 1 ≤ k ≤ P}. Step 2: Obtain the new feature time series via: x̂[n] = Σ_{k=1}^{P} a_k x[n-k]", "eq_num": "(11" } ], "section": "緒論 本論文是探討與發展降低各種外在環境存在之雜訊干擾所對應的強健性演算法。在近幾十年來，無數的學者先進對於此雜訊干擾問題提出了豐富眾多的演算法，也都可對雜訊環境下的語音辨識效能有所改進，我們把這些方法略分成兩大範疇：", "sec_num": "1."
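The two steps above can be sketched in numpy, using the Levinson-Durbin recursion to solve the Toeplitz system in O(P²) and then applying Eq. (11) to produce the predicted feature sequence. This is a minimal illustration under our own naming conventions (not code from the paper), cross-checked against the direct matrix solution of Eq. (10):

```python
import numpy as np

def autocorr(x, max_lag):
    # r_x[l] = sum_n x[n] x[n-l], Eq. (7)
    return np.array([np.dot(x[l:], x[:len(x) - l]) for l in range(max_lag + 1)])

def levinson_durbin(r, P):
    """Solve the Toeplitz system R_x a = r_x of Eq. (9) in O(P^2) operations."""
    a = np.zeros(P + 1)
    e = r[0]
    for i in range(1, P + 1):
        k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / e
        new_a = a.copy()
        new_a[i] = k
        new_a[1:i] = a[1:i] - k * a[i - 1:0:-1]
        a = new_a
        e *= (1.0 - k * k)
    return a[1:]

def lpc_predicted_sequence(x, P):
    # Step 1: LPC coefficients; Step 2: predicted series x_hat[n], Eq. (11)
    a = levinson_durbin(autocorr(x, P), P)
    x_hat = np.zeros(len(x))
    for n in range(P, len(x)):
        x_hat[n] = np.dot(a, x[n - P:n][::-1])
    return x_hat

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)   # stand-in for one cepstral-coefficient time series
P = 4
r = autocorr(x, P)
a_ld = levinson_durbin(r, P)
# Cross-check against the direct O(P^3) solution of Eq. (10)
R = np.array([[r[abs(i - j)] for j in range(P)] for i in range(P)])
a_direct = np.linalg.solve(R, r[1:P + 1])
print(np.allclose(a_ld, a_direct))  # True
```

In the proposed method this filtering is applied per feature dimension, replacing each original cepstral time series x[n] with its predicted counterpart x̂[n].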
}, { "text": "Chinese Dictionary http://www.edu.tw/files/site_content/m0001/pin/biau2.htm?open 2 Chinese Idioms http://dict.idioms.moe.edu.tw/cydic/index.htm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Jian-cheng Wu et al.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Our system DeeD (Don'ts-to-Do's English-English Decoder) was designed to correct preposition-verb serial errors in a given sentence written by language learners. Nevertheless, since large-scale learner corpora annotated with errors are not widely available, we have resorted to Web-scale n-grams to train our system, while using a small annotated learner corpus to evaluate its performance. In this section, we first present the details of training DeeD for the evaluation (Section 4.1). Then, Section 4.2 lists the grammar checking systems that we used in our evaluation and comparison. Section 4.3 introduces the evaluation metrics for the performance of the systems, and details of the sentences evaluated and performance judgments are reported in Section 4.4. 4.1 Training DeeD. We used the Web 1T 5-grams (Brants & Franz, 2006) to train our systems. Web 1T 5-grams is a collection of 1- to 5-grams counted over approximately 1 trillion words of public Web pages, provided by Google through the Linguistic Data Consortium (LDC). 
There are some ten", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Only NNCs within the scope of this paper are listed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "FrameNet sometimes has FEs that we consider the simple type as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In part because of limited space and in part for demonstrative purpose only, we did not list examples of two of the nine productive semantic categories, \"vehicle\" and \"container,\" neither did we exhaust all the instances of the other seven categories.5 The N1s here can be seen as either spatial (1a) or an important attribute of PEOPLE (1c).6 Sometimes called \"Type\" in FrameNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\uf02a \u570b\u7acb\u66a8\u5357\u570b\u969b\u5927\u5b78\u96fb\u6a5f\u5de5\u7a0b\u5b78\u7cfb Department of Electrical Engineering, National Chi Nan University E-mail: { s99323904; s100323553}@mail1.ncnu.edu.tw; jwhung@ncnu.edu.tw", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by the National Science Council under a Center Excellence Grant NSC 99-2221-E-001-014-MY3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null }, { "text": "Appendix A: Examples 4 of mappings of N2-based NNC categories to FrameNet's entity and event frames (To avoid visual cluster, subclasses of simple relations are indicated as numbered in Section 3) Simple_(1c/1a 5 ) Telic/Use 6 + Clothing Computational Linguistics and Chinese Language Processing Vol. 18, No. 4, December 2013, pp. 
81- ", "cite_spans": [ { "start": 186, "end": 196, "text": "Section 3)", "ref_id": null }, { "start": 211, "end": 214, "text": "5 )", "ref_id": "BIBREF121" }, { "start": 225, "end": 334, "text": "6 + Clothing Computational Linguistics and Chinese Language Processing Vol. 18, No. 4, December 2013, pp. 81-", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "This index covers all technical items---papers, correspondence, reviews, etc.---that appeared in this periodical during 2013.The Author Index contains the primary entry for each item, listed under the first author's name. The primary entry includes the coauthors' names, the title of paper or other item, and its location, specified by the publication volume, number, and inclusive pages. The Subject Index contains entries describing the item under all appropriate subject headings, plus the first author's name, the publication volume, number, and inclusive pages. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vol. 18", "sec_num": null }, { "text": "Please send application to:The ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "To Register\uff1a", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A new approach for automatic Chinese spelling correction", "authors": [ { "first": "C.-H", "middle": [], "last": "Chang", "suffix": "" } ], "year": 1995, "venue": "Proceedings of Natural Language Processing Pacific Rim Symposium", "volume": "", "issue": "", "pages": "278--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, C.-H. (1995). A new approach for automatic Chinese spelling correction. 
In Proceedings of Natural Language Processing Pacific Rim Symposium, 278 -283.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Improve the detection of improperly used Chinese characters with noisy channel model and detection template", "authors": [ { "first": "Y.-Z", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Y.-Z. (2010). Improve the detection of improperly used Chinese characters with noisy channel model and detection template. Master thesis, Chaoyang University of Technology.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Segmentation standard for Chinese natural language processing", "authors": [ { "first": "C.-R", "middle": [], "last": "Huang", "suffix": "" }, { "first": "K.-J", "middle": [], "last": "Chen", "suffix": "" }, { "first": "L.-L", "middle": [], "last": "Chang", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 1996 International Conference on Computational Linguistics (COLING 96)", "volume": "2", "issue": "", "pages": "1045--1048", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C.-R., Chen, K.-j., & Chang, L.-L. (1996). Segmentation standard for Chinese natural language processing. 
In Proceedings of the 1996 International Conference on Computational Linguistics (COLING 96), 2, 1045 -1048.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Error detection and correction based on Chinese phonemic alphabet in Chinese text", "authors": [ { "first": "C.-M", "middle": [], "last": "Huang", "suffix": "" }, { "first": "M.-C", "middle": [], "last": "Wu", "suffix": "" }, { "first": "C.-C", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Conference on Modeling Decisions for Artificial Intelligence (MDAI IV)", "volume": "", "issue": "", "pages": "463--476", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C.-M., Wu, M.-C., & Chang C.-C. (2007). Error detection and correction based on Chinese phonemic alphabet in Chinese text. In Proceedings of the 4th International Conference on Modeling Decisions for Artificial Intelligence (MDAI IV), 463 -476.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatic Chinese character error detecting system based on n-gram language model and pragmatics knowledge base", "authors": [ { "first": "T.-H", "middle": [], "last": "Hung", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hung, T.-H. (2009). Automatic Chinese character error detecting system based on n-gram language model and pragmatics knowledge base. Master thesis, Chaoyang University of Technology.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A rule based Chinese spelling and grammar detection system utility", "authors": [ { "first": "Y", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2012, "venue": "International Conference on System Science and Engineering (ICSSE)", "volume": "", "issue": "", "pages": "437--440", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang, Y., et al. (2012). 
A rule based Chinese spelling and grammar detection system utility. 2012 International Conference on System Science and Engineering (ICSSE), 437 -440, 30 June -2 July 2012.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Statistical Machine Translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P. (2010). Statistical Machine Translation. United Kingdom: Cambridge University Press.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Phonological and logographic influences on errors in written Chinese words", "authors": [ { "first": "C.-L", "middle": [], "last": "Liu", "suffix": "" }, { "first": "K.-W", "middle": [], "last": "Tien", "suffix": "" }, { "first": "M.-H", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Y.-H", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "S.-H", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Seventh Workshop on Asian Language Resources (ALR7), the Forty Seventh Annual Meeting of the Association for Computational Linguistics (ACL'09", "volume": "", "issue": "", "pages": "84--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, C.-L., Tien, K.-W., Lai, M.-H., Chuang, Y.-H., & Wu, S.-H. (2009). Phonological and logographic influences on errors in written Chinese words. 
In Proceedings of the Seventh Workshop on Asian Language Resources (ALR7), the Forty Seventh Annual Meeting of the Association for Computational Linguistics (ACL'09), 84 -91.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Visually and phonologically similar characters in incorrect Chinese words: Analyses, identification, and applications", "authors": [ { "first": "C.-L", "middle": [], "last": "Liu", "suffix": "" }, { "first": "M.-H", "middle": [], "last": "Lai", "suffix": "" }, { "first": "K.-W", "middle": [], "last": "Tien", "suffix": "" }, { "first": "Y.-H", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "S.-H", "middle": [], "last": "Wu", "suffix": "" }, { "first": "C.-Y", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2011, "venue": "ACM Trans. Asian Lang. Inform. Process.", "volume": "10", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, C.-L., Lai, M.-H., Tien, K.-W., Chuang, Y.-H., Wu, S.-H., & Lee, C.-Y. (2011). Visually and phonologically similar characters in incorrect Chinese words: Analyses, identification, and applications. ACM Trans. Asian Lang. Inform. Process., 10(2), Article 10, 39 pages.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Introduction to CKIP Chinese word segmentation system for the first international Chinese Word Segmentation Bakeoff", "authors": [ { "first": "W.-Y", "middle": [], "last": "Ma", "suffix": "" }, { "first": "K.-J", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL, Second SIGHAN Workshop on Chinese Language Processing", "volume": "17", "issue": "", "pages": "168--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ma, W.-Y., & Chen, K.-J. (2003). Introduction to CKIP Chinese word segmentation system for the first international Chinese Word Segmentation Bakeoff. 
In Proceedings of ACL, Second SIGHAN Workshop on Chinese Language Processing, 17, 168 -171.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "MOE word frequency table", "authors": [ { "first": "Moe", "middle": [], "last": "", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MOE. (1997). MOE word frequency table, Taiwan: Ministry of Education.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "MOE Dictionary new edition. Taiwan: Ministry of Education", "authors": [ { "first": "Moe", "middle": [], "last": "", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MOE. (2007). MOE Dictionary new edition. Taiwan: Ministry of Education.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Common errors in Chinese writings. Taiwan: Ministry of Education", "authors": [ { "first": "Moe", "middle": [], "last": "", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MOE. (1996). Common errors in Chinese writings. Taiwan: Ministry of Education.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, F. J., & Ney, H. (2003). A systematic comparison of various statistical alignment models. 
Computational Linguistics, 29(1), 19 -51.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A hybrid approach to automatic Chinese text checking and error correction", "authors": [ { "first": "F", "middle": [], "last": "Ren", "suffix": "" }, { "first": "H", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Q", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2001, "venue": "2001 IEEE International Conference on Systems, Man, and Cybernetics", "volume": "3", "issue": "", "pages": "1693--1698", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ren, F., Shi, H., & Zhou, Q. (2001). A hybrid approach to automatic Chinese text checking and error correction. 2001 IEEE International Conference on Systems, Man, and Cybernetics, 3, 1693 -1698, 07 -10 Oct. 2001.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "SRILM at Sixteen: Update and Outlook", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "J", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "W", "middle": [], "last": "Wang", "suffix": "" }, { "first": "V", "middle": [], "last": "Abrash", "suffix": "" } ], "year": 2011, "venue": "Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, A., Zheng, J., Wang, W., & Abrash, V. (2011). SRILM at Sixteen: Update and Outlook. In Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop, Dec. 
2011.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Reducing the false alarm rate of Chinese character error detection and correction", "authors": [ { "first": "S.-H", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Y.-X", "middle": [], "last": "Chen", "suffix": "" }, { "first": "P.-C", "middle": [], "last": "Yang", "suffix": "" }, { "first": "T", "middle": [], "last": "Ku", "suffix": "" }, { "first": "C.-L", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2010, "venue": "Proceedings of CIPS-SIGHAN Joint Conference on Chinese Language Processing", "volume": "", "issue": "", "pages": "28--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, S.-H., Chen, Y.-X., Yang, P.-c., Ku, T., & Liu, C.-L. (2010). Reducing the false alarm rate of Chinese character error detection and correction. In Proceedings of CIPS-SIGHAN Joint Conference on Chinese Language Processing (CLP 2010), 54 -61, 28 -29 Aug. 2010.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A Chinese OCR spelling check approach based on statistical language models", "authors": [ { "first": "L", "middle": [], "last": "Zhuang", "suffix": "" }, { "first": "T", "middle": [], "last": "Bao", "suffix": "" }, { "first": "X", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "C", "middle": [], "last": "Wang", "suffix": "" }, { "first": "S", "middle": [], "last": "Naoi", "suffix": "" } ], "year": 2004, "venue": "IEEE International Conference on Systems, Man and Cybernetics", "volume": "5", "issue": "", "pages": "4727--4732", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhuang, L., Bao, T., Zhu, X., Wang, C., & Naoi, S. (2004). A Chinese OCR spelling check approach based on statistical language models. 2004 IEEE International Conference on Systems, Man and Cybernetics, 5, 4727 -4732, 10 -13 Oct. 
2004.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "How to detect grammatical errors in a text without parsing it", "authors": [ { "first": "E", "middle": [ "S" ], "last": "Atwell", "suffix": "" } ], "year": 1987, "venue": "Proceedings of the Third Conference of the European Association for Computational Linguistics (EACL)", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atwell, E. S. (1987). How to detect grammatical errors in a text without parsing it. In Proceedings of the Third Conference of the European Association for Computational Linguistics (EACL), 38-45, Copenhagen.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Arboretum: Using a precision grammar for grammar checking in CALL", "authors": [ { "first": "E", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "D", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "S", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "T", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Integrating Speech Technology in Learning/Intelligent Computer Assisted Language Learning (inSTIL/ICALL) Symposium: NLP and Speech Technologies in Advanced Language Learning Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bender, E. M., Flickinger, D., Oepen, S., & Baldwin, T. (2004). Arboretum: Using a precision grammar for grammar checking in CALL. In Proceedings of the Integrating Speech Technology in Learning/Intelligent Computer Assisted Language Learning (inSTIL/ICALL) Symposium: NLP and Speech Technologies in Advanced Language Learning Systems, Venice.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The Google Web 1T 5-gram corpus version 1.1. 
LDC2006T13", "authors": [ { "first": "T", "middle": [], "last": "Brants", "suffix": "" }, { "first": "A", "middle": [], "last": "Franz", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brants, T., & Franz, A. (2006). The Google Web 1T 5-gram corpus version 1.1. LDC2006T13.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A new academic word list", "authors": [ { "first": "A", "middle": [], "last": "Coxhead", "suffix": "" } ], "year": 2000, "venue": "TESOL quarterly", "volume": "34", "issue": "2", "pages": "213--238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Coxhead, A. (2000). A new academic word list. TESOL quarterly, 34(2), 213-238.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Automatic detection of preposition errors in learner writing", "authors": [ { "first": "R", "middle": [], "last": "De Felice", "suffix": "" }, { "first": "S", "middle": [ "G" ], "last": "Pulman", "suffix": "" } ], "year": 2009, "venue": "CALICO Journal", "volume": "26", "issue": "3", "pages": "512--528", "other_ids": {}, "num": null, "urls": [], "raw_text": "De Felice, R., & Pulman, S. G. (2009). Automatic detection of preposition errors in learner writing. CALICO Journal, 26(3), 512-528.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Automatic grammar checking for second language learners -the use of prepositions", "authors": [ { "first": "E", "middle": [], "last": "Eeg-Olofsson", "suffix": "" }, { "first": "O", "middle": [], "last": "Knuttson", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 14th Nordic Conference in Computational Linguistics (NoDaLiDa)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eeg-Olofsson, E., & Knuttson, O. (2003). Automatic grammar checking for second language learners -the use of prepositions. 
In Proceedings of the 14th Nordic Conference in Computational Linguistics (NoDaLiDa).", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Using mostly native data to correct errors in learners' writing", "authors": [ { "first": "M", "middle": [], "last": "Gamon", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Eleventh Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gamon, M. (2010). Using mostly native data to correct errors in learners' writing. In Proceedings of the Eleventh Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Los Angeles.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Using contextual speller techniques and language modeling for ESL error correction", "authors": [ { "first": "M", "middle": [], "last": "Gamon", "suffix": "" }, { "first": "J", "middle": [], "last": "Gao", "suffix": "" }, { "first": "C", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "A", "middle": [], "last": "Klementiev", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Dolan", "suffix": "" }, { "first": "D", "middle": [], "last": "Be-Lenko", "suffix": "" }, { "first": "L", "middle": [], "last": "Vanderwende", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the International Joint Conference on Natural Language Processing (IJCNLP)", "volume": "", "issue": "", "pages": "449--456", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gamon, M., Gao, J., Brockett, C., Klementiev, A., Dolan, W. B., Be-lenko, D., & Vanderwende, L. (2008). Using contextual speller techniques and language modeling for ESL error correction. 
In Proceedings of the International Joint Conference on Natural Language Processing (IJCNLP), 449-456, Hyderabad, India.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "English next: Why global English may mean the end of 'English as a Foreign Language", "authors": [ { "first": "D", "middle": [], "last": "Graddol", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graddol, D. (2006). English next: Why global English may mean the end of 'English as a Foreign Language.' UK: British Council.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Using error-annotated ESL data to develop an ESL error correction system", "authors": [ { "first": "N.-R", "middle": [], "last": "Han", "suffix": "" }, { "first": "J", "middle": [], "last": "Tetreault", "suffix": "" }, { "first": "S.-H", "middle": [], "last": "Lee", "suffix": "" }, { "first": "J.-Y", "middle": [], "last": "Ha", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han, N.-R., Tetreault, J., Lee, S.-H., & Ha, J.-Y. (2010). Using error-annotated ESL data to develop an ESL error correction system. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC), Malta.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Intelligent writing assistance", "authors": [ { "first": "G", "middle": [ "E" ], "last": "Heidorn", "suffix": "" } ], "year": 2000, "venue": "Handbook of Natural Language Processing", "volume": "", "issue": "", "pages": "181--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heidorn, G. E. (2000). Intelligent writing assistance. In R. Dale, H. Moisl, and H. Somers, editors, Handbook of Natural Language Processing, 181-207. 
Marcel Dekker, New York.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Automatic error detection in the Japanese learners' English spoken data", "authors": [ { "first": "E", "middle": [], "last": "Izumi", "suffix": "" }, { "first": "K", "middle": [], "last": "Uchimoto", "suffix": "" }, { "first": "T", "middle": [], "last": "Saiga", "suffix": "" }, { "first": "T", "middle": [], "last": "Supnithi", "suffix": "" }, { "first": "H", "middle": [], "last": "Isahara", "suffix": "" } ], "year": 2003, "venue": "Companion Volume to the Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "145--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Izumi, E., Uchimoto, K., Saiga, T., Supnithi, T., & Isahara, H. (2003). Automatic error detection in the Japanese learners' English spoken data. In Companion Volume to the Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL), 145-148.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Automated postediting of documents", "authors": [ { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "I", "middle": [], "last": "Chander", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "779--784", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knight, K., & Chander, I. (1994). Automated postediting of documents. 
In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI), 779-784, Seattle.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Automated grammatical error detection for language learners", "authors": [ { "first": "C", "middle": [], "last": "Leacock", "suffix": "" } ], "year": 2010, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "3", "issue": "1", "pages": "1--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leacock, C., et al. (2010). Automated grammatical error detection for language learners. Synthesis Lectures on Human Language Technologies, 3(1), 1-134.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Automatic grammar correction for second-language learners", "authors": [ { "first": "J", "middle": [], "last": "Lee", "suffix": "" }, { "first": "S", "middle": [], "last": "Seneff", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Ninth International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "1978--1981", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, J., & Seneff, S. (2006). Automatic grammar correction for second-language learners. In Proceedings of the Ninth International Conference on Spoken Language Processing (Interspeech), 1978-1981.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Human evaluation of article and noun number usage: Influences of context and construction variability", "authors": [ { "first": "J", "middle": [], "last": "Lee", "suffix": "" }, { "first": "J", "middle": [], "last": "Tetreault", "suffix": "" }, { "first": "M", "middle": [], "last": "Chodorow", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Third Linguistic Annotation Workshop", "volume": "", "issue": "", "pages": "60--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, J., Tetreault, J., & Chodorow, M. (2009b). 
Human evaluation of article and noun number usage: Influences of context and construction variability. In Proceedings of the Third Linguistic Annotation Workshop (LAW), 60-63, Suntec, Singapore.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Analysis of Chinese morphemes and its application to sense and part-of-speech prediction for Chinese compounds", "authors": [ { "first": "Y", "middle": [ "S" ], "last": "Chung", "suffix": "" }, { "first": "K", "middle": [ "J" ], "last": "Chen", "suffix": "" } ], "year": 2010, "venue": "ICCPOL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chung, Y. S., & Chen, K. J. (2010). Analysis of Chinese morphemes and its application to sense and part-of-speech prediction for Chinese compounds. ICCPOL 2010, California, USA, 2010.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Integrating symbolic and statistical representations: The lexicon pragmatics interface", "authors": [ { "first": "A", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "A", "middle": [], "last": "Lascardies", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 35th Annual Meeting of the ACL and 8th Conference of the EACL (ACL-EACL'97)", "volume": "", "issue": "", "pages": "136--179", "other_ids": {}, "num": null, "urls": [], "raw_text": "Copestake, A., & Lascardies, A. (1997). Integrating symbolic and statistical representations: The lexicon pragmatics interface. In Proceedings of the 35th Annual Meeting of the ACL and 8th Conference of the EACL (ACL-EACL'97), Madrid, 1997, 136-43.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Strategies in comprehending Mandarin Chinese noun-noun compounds with animals, plants, and artifacts as constituents", "authors": [ { "first": "H", "middle": [ "J" ], "last": "Huang", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, H. J. 
(2008). Strategies in comprehending Mandarin Chinese noun-noun compounds with animals, plants, and artifacts as constituents. MA thesis. National Cheng-Kung University, 2008.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "The Interpretation of English Noun Sequences on the Computer", "authors": [ { "first": "R", "middle": [], "last": "Leonard", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leonard, R. (1984). The Interpretation of English Noun Sequences on the Computer, Amsterdam: North-Holland.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "The Syntax and Semantics of Complex Nominals", "authors": [ { "first": "J", "middle": [ "N" ], "last": "Levi", "suffix": "" } ], "year": 1978, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levi, J. N. (1978). The Syntax and Semantics of Complex Nominals. New York: Academic Press.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Xiandai Hanyu Shuxing Fanchou Yianjiu (\u73fe\u4ee3\u6f22\u8a9e\u5c6c\u6027\u7bc4\u7587\u7814\u7a76)", "authors": [ { "first": "C", "middle": [ "H" ], "last": "Liu", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, C. H. (2008). Xiandai Hanyu Shuxing Fanchou Yianjiu (\u73fe\u4ee3\u6f22\u8a9e\u5c6c\u6027\u7bc4\u7587\u7814\u7a76). Chengdu: Bashu Books.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Ordered Chaos: The Interpretation of English Noun-Noun Compounds", "authors": [ { "first": "M", "middle": [ "E" ], "last": "Ryder", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryder, M. E. (1994). 
Ordered Chaos: The Interpretation of English Noun-Noun Compounds, University of California Press, Berkeley, CA.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Where does the meaning of compounds and possessives come from? A contrastive view", "authors": [ { "first": "A", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2005, "venue": "The 3rd International Conference in Contrastive Semantics and Pragmatics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S\u00f8gaard, A. (2005). Where does the meaning of compounds and possessives come from? A contrastive view. The 3rd International Conference in Contrastive Semantics and Pragmatics, Shanghai, China.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Mandarin Singing Voice Synthesis Using an HNM Based Scheme", "authors": [ { "first": "H.-Y", "middle": [], "last": "Gu", "suffix": "" }, { "first": "H.-L", "middle": [], "last": "Liau", "suffix": "" } ], "year": 2008, "venue": "International Congress on Image and Signal Processing", "volume": "", "issue": "", "pages": "347--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gu, H.-Y., & Liau, H.-L. (2008). Mandarin Singing Voice Synthesis Using an HNM Based Scheme. International Congress on Image and Signal Processing (CISP), 347-351.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Exploiting Prosody Hierarchy and Dynamic Features for Pitch Modeling and Generation in HMM-Based Speech Synthesis", "authors": [ { "first": "C.-C", "middle": [], "last": "Hsia", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Wu", "suffix": "" }, { "first": "J.-Y", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2010, "venue": "IEEE Transactions on Audio, Speech, and Language Processing", "volume": "18", "issue": "8", "pages": "1994--2003", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsia, C.-C., Wu, C.-H., & Wu, J.-Y. (2010). 
Exploiting Prosody Hierarchy and Dynamic Features for Pitch Modeling and Generation in HMM-Based Speech Synthesis. IEEE Transactions on Audio, Speech, and Language Processing, 18(8), 1994-2003.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Personalized Spectral and Prosody Conversion using Frame-Based Codeword Distribution and Adaptive CRF", "authors": [ { "first": "Y.-C", "middle": [], "last": "Huang", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Y.-T", "middle": [], "last": "Chao", "suffix": "" } ], "year": 2013, "venue": "Speech, and Language Processing", "volume": "21", "issue": "", "pages": "51--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, Y.-C., Wu, C.-H., & Chao, Y.-T. (2013). Personalized Spectral and Prosody Conversion using Frame-Based Codeword Distribution and Adaptive CRF. IEEE Trans. Audio, Speech, and Language Processing, 21(1), 51-62.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Hierarchical prosodic pattern selection based on Fujisaki model for natural mandarin speech synthesis", "authors": [ { "first": "Y.-C", "middle": [], "last": "Huang", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Wu", "suffix": "" }, { "first": "S.-T", "middle": [], "last": "Weng", "suffix": "" } ], "year": 2012, "venue": "8th International Symposium on Chinese Spoken Language Processing", "volume": "", "issue": "", "pages": "79--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, Y.-C., Wu, C.-H., & Weng, S.-T. (2012). Hierarchical prosodic pattern selection based on Fujisaki model for natural mandarin speech synthesis. 
2012 8th International Symposium on Chinese Spoken Language Processing (ISCSLP), 79-83.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Segmental tonal modeling for phone set design in Mandarin LVCSR", "authors": [ { "first": "C", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Shi", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "M", "middle": [], "last": "Chu", "suffix": "" }, { "first": "T", "middle": [], "last": "Wang", "suffix": "" }, { "first": "E", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ICASSP 04", "volume": "", "issue": "", "pages": "901--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C., Shi, Y., Zhou, J., Chu, M., Wang, T., & Chang, E. (2004). Segmental tonal modeling for phone set design in Mandarin LVCSR. Proceedings of ICASSP 04, 901-904.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "STRAIGHT, exploitation of the other aspect of VOCODER: Perceptually isomorphic decomposition of speech sounds", "authors": [ { "first": "H", "middle": [], "last": "Kawahara", "suffix": "" } ], "year": 2006, "venue": "Acoustical Science and Technology", "volume": "27", "issue": "6", "pages": "349--353", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kawahara, H. (2006). STRAIGHT, exploitation of the other aspect of VOCODER: Perceptually isomorphic decomposition of speech sounds. Acoustical Science and Technology, 27(6), 349-353.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "VOCALOID-Commercial singing synthesizer based on sample concatenation", "authors": [ { "first": "H", "middle": [], "last": "Kenmochi", "suffix": "" }, { "first": "H", "middle": [], "last": "Ohshita", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "4009--4010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenmochi, H., & Ohshita, H. (2007). 
VOCALOID-Commercial singing synthesizer based on sample concatenation. INTERSPEECH 2007, 4009-4010.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Singing Voice Analysis/Synthesis", "authors": [ { "first": "Y", "middle": [ "E" ], "last": "Kim", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim, Y. E. (2003). Singing Voice Analysis/Synthesis. Ph.D. dissertation, Massachusetts Institute of Technology.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "A Lyrics to Singing Voice Synthesis System with Variable Timbre. Applied Informatics and Communication Communications in Computer and Information Science", "authors": [ { "first": "J", "middle": [], "last": "Li", "suffix": "" }, { "first": "H", "middle": [], "last": "Yang", "suffix": "" }, { "first": "W", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "L", "middle": [], "last": "Cai", "suffix": "" } ], "year": 2011, "venue": "", "volume": "225", "issue": "", "pages": "186--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, J., Yang, H., Zhang, W., & Cai, L. (2011). A Lyrics to Singing Voice Synthesis System with Variable Timbre. Applied Informatics and Communication Communications in Computer and Information Science, 225, 186-193.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Phonetic Tutorials", "authors": [ { "first": "T", "middle": [], "last": "Lin", "suffix": "" }, { "first": "L.-J", "middle": [], "last": "Wang", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "103--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, T., & Wang, L.-J. (1992). Phonetic Tutorials. 
Beijing University Press, 103-121.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "The USTC System for Blizzard Challenge", "authors": [ { "first": "Z.-H", "middle": [], "last": "Ling", "suffix": "" }, { "first": "X.-J", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Y", "middle": [], "last": "Song", "suffix": "" }, { "first": "C.-Y", "middle": [], "last": "Yang", "suffix": "" }, { "first": "L.-H", "middle": [], "last": "Chen", "suffix": "" }, { "first": "L.-R", "middle": [], "last": "Dai", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ling, Z.-H., Xia, X.-J., Song, Y., Yang, C.-Y., Chen, L.-H., & Dai, L.-R. (2012). The USTC System for Blizzard Challenge 2012. Blizzard Challenge Workshop.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Recent Development of the HMM-based Singing Voice Synthesis System-Sinsy", "authors": [ { "first": "K", "middle": [], "last": "Oura", "suffix": "" }, { "first": "A", "middle": [], "last": "Mase", "suffix": "" }, { "first": "T", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "S", "middle": [], "last": "Muto", "suffix": "" }, { "first": "Y", "middle": [], "last": "Nankaku", "suffix": "" }, { "first": "K", "middle": [], "last": "Tokuda", "suffix": "" } ], "year": 2010, "venue": "The 7th ISCA Speech Synthesis Workshop", "volume": "", "issue": "", "pages": "211--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oura, K., Mase, A., Yamada, T., Muto, S., Nankaku, Y., & Tokuda, K. (2010). Recent Development of the HMM-based Singing Voice Synthesis System-Sinsy. The 7th ISCA Speech Synthesis Workshop, 211-216.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "An HMM-based Singing Voice Synthesis System. 
International Conference on Spoken Language Processing", "authors": [ { "first": "K", "middle": [], "last": "Saino", "suffix": "" }, { "first": "H", "middle": [], "last": "Zen", "suffix": "" }, { "first": "Y", "middle": [], "last": "Nankaku", "suffix": "" }, { "first": "A", "middle": [], "last": "Lee", "suffix": "" }, { "first": "K", "middle": [], "last": "Tokuda", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "1141--1144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saino, K., Zen, H., Nankaku, Y., Lee, A., & Tokuda, K. (2006). An HMM-based Singing Voice Synthesis System. International Conference on Spoken Language Processing (ICSLP), 1141-1144.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Speech-to-Singing Synthesis: Converting Speaking Voices to Singing Voices by Controlling Acoustic Features Unique to Singing Voices", "authors": [ { "first": "T", "middle": [], "last": "Saitou", "suffix": "" }, { "first": "M", "middle": [], "last": "Goto", "suffix": "" }, { "first": "M", "middle": [], "last": "Unoki", "suffix": "" }, { "first": "M", "middle": [], "last": "Akagi", "suffix": "" } ], "year": 2007, "venue": "Applications of Signal Processing to Audio and Acoustics Workshop", "volume": "", "issue": "", "pages": "215--218", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saitou, T., Goto, M., Unoki, M., & Akagi, M. (2007). Speech-to-Singing Synthesis: Converting Speaking Voices to Singing Voices by Controlling Acoustic Features Unique to Singing Voices. 
Applications of Signal Processing to Audio and Acoustics Workshop, 215-218.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Variable-length unit selection in TTS using structural syntactic cost", "authors": [ { "first": "C.-H", "middle": [], "last": "Wu", "suffix": "" }, { "first": "C.-C", "middle": [], "last": "Hsia", "suffix": "" }, { "first": "J.-F", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J.-F", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2007, "venue": "IEEE Trans. Audio, Speech, Lang. Process", "volume": "15", "issue": "4", "pages": "1227--1235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, C.-H., Hsia, C.-C., Chen, J.-F., & Wang, J.-F. (2007). Variable-length unit selection in TTS using structural syntactic cost. IEEE Trans. Audio, Speech, Lang. Process., 15(4), 1227-1235.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "The HMM-based Speech Synthesis System (HTS) Version 2.0. 
The 6th ISCA Workshop on Speech Synthesis", "authors": [ { "first": "H", "middle": [], "last": "Zen", "suffix": "" }, { "first": "T", "middle": [], "last": "Nose", "suffix": "" }, { "first": "J", "middle": [], "last": "Yamagishi", "suffix": "" }, { "first": "S", "middle": [], "last": "Sako", "suffix": "" }, { "first": "T", "middle": [], "last": "Masuko", "suffix": "" }, { "first": "A", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "K", "middle": [], "last": "Tokuda", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "294--299", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zen, H., Nose, T., Yamagishi, J., Sako, S., Masuko, T., Black, A.W., & Tokuda, K. (2007). The HMM-based Speech Synthesis System (HTS) Version 2.0. The 6th ISCA Workshop on Speech Synthesis, 294-299.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "A Hidden Semi-Markov Model-Based Speech Synthesis System", "authors": [ { "first": "H", "middle": [], "last": "Zen", "suffix": "" }, { "first": "K", "middle": [], "last": "Tokuda", "suffix": "" }, { "first": "T", "middle": [], "last": "Masuko", "suffix": "" }, { "first": "T", "middle": [], "last": "Kobayashi", "suffix": "" }, { "first": "T", "middle": [], "last": "Kitamura", "suffix": "" } ], "year": 2007, "venue": "IEICE Trans. Inf. & Sys", "volume": "90", "issue": "5", "pages": "825--834", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zen, H., Tokuda, K., Masuko, T., Kobayashi, T., & Kitamura, T. (2007). A Hidden Semi-Markov Model-Based Speech Synthesis System. IEICE Trans. Inf. 
& Sys., 90(5), 825-834.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "A Corpus-Based Concatenative Mandarin Singing Voice Synthesis System", "authors": [ { "first": "S.-S", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Q.-C", "middle": [], "last": "Chen", "suffix": "" }, { "first": "D.-D", "middle": [], "last": "Wang", "suffix": "" }, { "first": "X.-H", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2008, "venue": "International Conference on Machine Learning and Cybernetics", "volume": "", "issue": "", "pages": "2695--2699", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou, S.-S., Chen, Q.-C., Wang, D.-D., & Yang, X.-H. (2008). A Corpus-Based Concatenative Mandarin Singing Voice Synthesis System. 2008 International Conference on Machine Learning and Cybernetics, 2695-2699.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "DAFX-Digital Audio Effects", "authors": [ { "first": "U", "middle": [], "last": "Z\u00f6lzer", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "68--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z\u00f6lzer, U. (2002). DAFX-Digital Audio Effects. John Wiley & Sons, Chapter 3, 68-69.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Alleviating the One-to-many Mapping Problem in Voice Conversion with Context-dependent Modeling", "authors": [ { "first": "E", "middle": [], "last": "Godoy", "suffix": "" }, { "first": "O", "middle": [], "last": "Rosec", "suffix": "" }, { "first": "T", "middle": [], "last": "Chonavel", "suffix": "" } ], "year": 2009, "venue": "Proc. INTERSPEECH", "volume": "", "issue": "", "pages": "1627--1630", "other_ids": {}, "num": null, "urls": [], "raw_text": "Godoy, E., Rosec, O., & Chonavel, T. (2009). Alleviating the One-to-many Mapping Problem in Voice Conversion with Context-dependent Modeling. Proc. 
INTERSPEECH 2009, 1627-1630.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Voice Conversion Using Dynamic Frequency Warping with Amplitude Scaling, for Parallel or Nonparallel Corpora", "authors": [ { "first": "E", "middle": [], "last": "Godoy", "suffix": "" }, { "first": "O", "middle": [], "last": "Rosec", "suffix": "" }, { "first": "T", "middle": [], "last": "Chonavel", "suffix": "" } ], "year": 2012, "venue": "IEEE Trans. Audio, Speech, and Language Processing", "volume": "20", "issue": "", "pages": "1313--1323", "other_ids": {}, "num": null, "urls": [], "raw_text": "Godoy, E., Rosec, O., & Chonavel, T. (2012). Voice Conversion Using Dynamic Frequency Warping with Amplitude Scaling, for Parallel or Nonparallel Corpora. IEEE Trans. Audio, Speech, and Language Processing, 20, 1313-1323.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "A Discrete-cepstrum Based Spectrum-envelope Estimation Scheme and Its Example Application of Voice Transformation", "authors": [ { "first": "H", "middle": [ "Y" ], "last": "Gu", "suffix": "" }, { "first": "S", "middle": [ "F" ], "last": "Tsai", "suffix": "" } ], "year": 2009, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "14", "issue": "4", "pages": "363--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gu, H. Y., & Tsai, S. F. (2009). A Discrete-cepstrum Based Spectrum-envelope Estimation Scheme and Its Example Application of Voice Transformation. International Journal of Computational Linguistics and Chinese Language Processing, 14(4), 363-382.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "An Improved Voice Conversion Method Using Segmental GMMs and Automatic GMM Selection", "authors": [ { "first": "H", "middle": [ "Y" ], "last": "Gu", "suffix": "" }, { "first": "S", "middle": [ "F" ], "last": "Tsai", "suffix": "" } ], "year": 2011, "venue": "Int. 
Congress on Image and Signal Processing", "volume": "", "issue": "", "pages": "2395--2399", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gu, H. Y., & Tsai, S. F. (2011). An Improved Voice Conversion Method Using Segmental GMMs and Automatic GMM Selection. Int. Congress on Image and Signal Processing, Shanghai, China, 2395-2399.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "Analysis of a Complex of Statistical Variables into Principal Components", "authors": [ { "first": "H", "middle": [], "last": "Hotelling", "suffix": "" } ], "year": 1933, "venue": "Journal of Educational Psychology", "volume": "24", "issue": "6", "pages": "417--441", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hotelling, H. (1933). Analysis of a Complex of Statistical Variables into Principal Components. Journal of Educational Psychology, 24(6), 417-441.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Principal Component Analysis", "authors": [ { "first": "I", "middle": [ "T" ], "last": "Jolliffe", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jolliffe, I. T. (2002). Principal Component Analysis, second edition, New York: Springer-Verlag, 2002.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Pitch detection with average magnitude difference function using adaptive threshold algorithm for estimating shimmer and jitter. 20th Annual Int", "authors": [ { "first": "H", "middle": [ "Y" ], "last": "Kim", "suffix": "" } ], "year": 1998, "venue": "Conf. of the IEEE Engineering in Medicine and Biology Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim, H. Y. et al. (1998). Pitch detection with average magnitude difference function using adaptive threshold algorithm for estimating shimmer and jitter. 20th Annual Int. Conf. 
of the IEEE Engineering in Medicine and Biology Society, Hong Kong, China, 1998.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "A Comparative Study of Histogram Equalization (HEQ) for Robust Speech Recognition", "authors": [ { "first": "S", "middle": [ "H" ], "last": "Lin", "suffix": "" }, { "first": "Y", "middle": [ "M" ], "last": "Yeh", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2007, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "12", "issue": "2", "pages": "217--238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, S. H., Yeh, Y. M., & Chen, B. (2007). A Comparative Study of Histogram Equalization (HEQ) for Robust Speech Recognition. International Journal of Computational Linguistics and Chinese Language Processing, 12(2), 217-238.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "Harmonic plus noise models for speech, combined with statistical methods, for speech and speaker modification", "authors": [ { "first": "Y", "middle": [], "last": "Stylianou", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stylianou, Y. (1996). Harmonic plus noise models for speech, combined with statistical methods, for speech and speaker modification, Ph.D. thesis, Ecole Nationale Sup\u00e9rieure des T\u00e9l\u00e9communications, Paris, France, 1996.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Continuous Probabilistic Transform for Voice Conversion", "authors": [ { "first": "Y", "middle": [], "last": "Stylianou", "suffix": "" }, { "first": "O", "middle": [], "last": "Capp\u00e9", "suffix": "" }, { "first": "E", "middle": [], "last": "Moulines", "suffix": "" } ], "year": 1998, "venue": "IEEE Trans. 
Speech and Audio Processing", "volume": "6", "issue": "2", "pages": "131--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stylianou, Y., Capp\u00e9, O., & Moulines, E. (1998). Continuous Probabilistic Transform for Voice Conversion. IEEE Trans. Speech and Audio Processing, 6(2), 131-142.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "Voice Conversion Based on Maximum-likelihood Estimation of Spectral Parameter Trajectory", "authors": [ { "first": "T", "middle": [], "last": "Toda", "suffix": "" }, { "first": "A", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "K", "middle": [], "last": "Tokuda", "suffix": "" } ], "year": 2007, "venue": "IEEE Trans. Audio, Speech, and Language Processing", "volume": "15", "issue": "", "pages": "2222--2235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Toda, T., Black, A. W., & Tokuda, K. (2007). Voice Conversion Based on Maximum-likelihood Estimation of Spectral Parameter Trajectory. IEEE Trans. Audio, Speech, and Language Processing, 15, 2222-2235.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "Histogram Equalization of Speech Representation for Robust Speech Recognition", "authors": [ { "first": "A", "middle": [], "last": "Torre", "suffix": "" }, { "first": "A", "middle": [ "M" ], "last": "Peinado", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Segura", "suffix": "" }, { "first": "J", "middle": [ "L" ], "last": "Perez-Cordoba", "suffix": "" }, { "first": "M", "middle": [ "C" ], "last": "Ben\u00edtez", "suffix": "" }, { "first": "A", "middle": [ "J" ], "last": "Rubio", "suffix": "" } ], "year": 2005, "venue": "IEEE Trans. Speech and Audio Processing", "volume": "13", "issue": "3", "pages": "355--366", "other_ids": {}, "num": null, "urls": [], "raw_text": "Torre, A., Peinado, A. M., Segura, J. C., Perez-Cordoba, J. L., Ben\u00edtez, M. C., & Rubio, A. J. (2005). Histogram Equalization of Speech Representation for Robust Speech Recognition. IEEE Trans. 
Speech and Audio Processing, 13(3), 355-366.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Voice Transformation Using PSOLA Technique", "authors": [ { "first": "H", "middle": [], "last": "Valbret", "suffix": "" }, { "first": "E", "middle": [], "last": "Moulines", "suffix": "" }, { "first": "J", "middle": [ "P" ], "last": "Tubach", "suffix": "" } ], "year": 1992, "venue": "Speech Communication", "volume": "11", "issue": "2-3", "pages": "175--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valbret, H., Moulines, E., & Tubach, J. P. (1992). Voice Transformation Using PSOLA Technique. Speech Communication, 11(2-3), 175-187.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "Suppression of acoustic noise in speech using spectral subtraction", "authors": [ { "first": "S", "middle": [ "F" ], "last": "Boll", "suffix": "" } ], "year": 1979, "venue": "IEEE Transactions on Acoustics Speech and Signal Processing", "volume": "27", "issue": "2", "pages": "113--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boll, S. F. (1979). Suppression of acoustic noise in speech using spectral subtraction. IEEE Transactions on Acoustics Speech and Signal Processing, 27(2), 113-120.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "MVA processing of speech features", "authors": [ { "first": "C", "middle": [ "P" ], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [], "last": "Bilmes", "suffix": "" } ], "year": 2007, "venue": "IEEE Transactions on Audio Speech and Language Processing", "volume": "15", "issue": "1", "pages": "257--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, C. P., & Bilmes, J. (2007). MVA processing of speech features. 
IEEE Transactions on Audio Speech and Language Processing, 15(1), 257-270.", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "Recursive estimation of non-stationary noise using iterative stochastic approximation for robust speech recognition", "authors": [ { "first": "L", "middle": [], "last": "Deng", "suffix": "" }, { "first": "J", "middle": [], "last": "Droppo", "suffix": "" }, { "first": "A", "middle": [], "last": "Acero", "suffix": "" } ], "year": 2003, "venue": "IEEE Transactions on Speech Audio Process", "volume": "11", "issue": "6", "pages": "568--580", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deng, L., Droppo, J., & Acero, A. (2003). Recursive estimation of non-stationary noise using iterative stochastic approximation for robust speech recognition. IEEE Transactions on Speech Audio Process, 11(6), 568-580.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "Cepstral shape normalization for robust speech recognition", "authors": [ { "first": "J", "middle": [], "last": "Du", "suffix": "" }, { "first": "R", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2008, "venue": "Proceedings of IEEE International Conference on Acoustics Speech and Signal Processing", "volume": "", "issue": "", "pages": "4389--4392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Du, J., & Wang, R. (2008). Cepstral shape normalization for robust speech recognition. In Proceedings of IEEE International Conference on Acoustics Speech and Signal Processing, 4389-4392.", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "Cepstral analysis technique for automatic speaker verification", "authors": [ { "first": "S", "middle": [], "last": "Furui", "suffix": "" } ], "year": 1981, "venue": "IEEE Transactions on Acoustics Speech and Signal Processing", "volume": "29", "issue": "2", "pages": "254--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Furui, S. (1981). Cepstral analysis technique for automatic speaker verification. 
IEEE Transactions on Acoustics Speech and Signal Processing, 29(2), 254-272.", "links": null }, "BIBREF85": { "ref_id": "b85", "title": "Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Gauvain", "suffix": "" }, { "first": "C", "middle": [ "H" ], "last": "Lee", "suffix": "" } ], "year": 1994, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "2", "issue": "2", "pages": "291--298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gauvain, J. L., & Lee, C. H. (1994). Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains. IEEE Transactions on Speech and Audio Processing, 2(2), 291-298.", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "Evaluating long-term spectral subtraction for reverberant ASR", "authors": [ { "first": "D", "middle": [], "last": "Gelbart", "suffix": "" }, { "first": "N", "middle": [], "last": "Morgan", "suffix": "" } ], "year": 2001, "venue": "Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "103--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gelbart, D., & Morgan, N. (2001). Evaluating long-term spectral subtraction for reverberant ASR. In Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding, 103-106.", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "Quantile based histogram equalization for noise robust large vocabulary speech recognition", "authors": [ { "first": "F", "middle": [], "last": "Hilger", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2006, "venue": "IEEE Transactions on Audio, Speech and Language Processing", "volume": "14", "issue": "3", "pages": "845--854", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hilger, F., & Ney, H. (2006). 
Quantile based histogram equalization for noise robust large vocabulary speech recognition. IEEE Transactions on Audio, Speech and Language Processing, 14(3), 845-854.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "The AURORA experimental framework for the performance evaluations of speech recognition systems under noisy conditions", "authors": [ { "first": "H", "middle": [ "G" ], "last": "Hirsch", "suffix": "" }, { "first": "D", "middle": [], "last": "Pearce", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 2000 Automatic Speech Recognition Challenges for the new Millenium", "volume": "", "issue": "", "pages": "181--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hirsch, H. G., & Pearce, D. (2000). The AURORA experimental framework for the performance evaluations of speech recognition systems under noisy conditions. In Proceedings of the 2000 Automatic Speech Recognition Challenges for the new Millenium, 181-188.", "links": null }, "BIBREF89": { "ref_id": "b89", "title": "The study of q-logarithmic modulation spectral normalization for robust speech recognition", "authors": [ { "first": "C", "middle": [ "H" ], "last": "Hsu", "suffix": "" }, { "first": "H", "middle": [ "T" ], "last": "Fang", "suffix": "" }, { "first": "J", "middle": [ "W" ], "last": "Hung", "suffix": "" } ], "year": 2012, "venue": "Proceedings of International Conference on System Science and Engineering", "volume": "", "issue": "", "pages": "183--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsu, C. H., Fang, H. T., & Hung, J. W. (2012). The study of q-logarithmic modulation spectral normalization for robust speech recognition. 
In Proceedings of International Conference on System Science and Engineering, 183-186.", "links": null }, "BIBREF90": { "ref_id": "b90", "title": "Modulation spectrum exponential weighting for robust speech recognition", "authors": [ { "first": "J", "middle": [ "W" ], "last": "Hung", "suffix": "" }, { "first": "H", "middle": [ "T" ], "last": "Fan", "suffix": "" }, { "first": "Y", "middle": [ "C" ], "last": "Lian", "suffix": "" } ], "year": 2012, "venue": "Proceedings of International Conference on ITS Telecommunications", "volume": "", "issue": "", "pages": "812--816", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hung, J. W., Fan, H. T., & Lian, Y. C. (2012). Modulation spectrum exponential weighting for robust speech recognition. In Proceedings of International Conference on ITS Telecommunications, 812-816.", "links": null }, "BIBREF91": { "ref_id": "b91", "title": "Improved modulation spectrum enhancement methods for robust speech recognition", "authors": [ { "first": "J", "middle": [ "W" ], "last": "Hung", "suffix": "" }, { "first": "W", "middle": [ "H" ], "last": "Tu", "suffix": "" }, { "first": "C", "middle": [ "C" ], "last": "Lai", "suffix": "" } ], "year": 2012, "venue": "Signal Processing", "volume": "92", "issue": "", "pages": "2791--2814", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hung, J. W., Tu, W. H., & Lai, C. C. (2012). Improved modulation spectrum enhancement methods for robust speech recognition. 
Signal Processing, 92(11), 2791-2814.", "links": null }, "BIBREF92": { "ref_id": "b92", "title": "New approaches for domain transformation and parameter combination for improved accuracy in parallel model combination techniques", "authors": [ { "first": "J", "middle": [ "W" ], "last": "Hung", "suffix": "" }, { "first": "J", "middle": [ "L" ], "last": "Shen", "suffix": "" }, { "first": "L", "middle": [ "S" ], "last": "Lee", "suffix": "" } ], "year": 2001, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "9", "issue": "8", "pages": "842--855", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hung, J. W., Shen, J. L., & Lee, L. S. (2001). New approaches for domain transformation and parameter combination for improved accuracy in parallel model combination techniques. IEEE Transactions on Speech and Audio Processing, 9(8), 842-855.", "links": null }, "BIBREF93": { "ref_id": "b93", "title": "Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models", "authors": [ { "first": "C", "middle": [ "J" ], "last": "Leggetter", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Woodland", "suffix": "" } ], "year": 1995, "venue": "Computer Speech and Language", "volume": "9", "issue": "2", "pages": "171--185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leggetter, C. J., & Woodland, P. C. (1995). Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models. 
Computer Speech and Language, 9(2), 171-185.", "links": null }, "BIBREF94": { "ref_id": "b94", "title": "A vector Taylor series approach for environment-independent speech recognition", "authors": [ { "first": "P", "middle": [ "J" ], "last": "Moreno", "suffix": "" }, { "first": "B", "middle": [], "last": "Raj", "suffix": "" }, { "first": "R", "middle": [ "M" ], "last": "Stern", "suffix": "" } ], "year": 1996, "venue": "Proceedings of IEEE International Conference on Acoustics Speech and Signal Processing", "volume": "2", "issue": "", "pages": "733--736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moreno, P. J., Raj, B., & Stern, R. M. (1996). A vector Taylor series approach for environment-independent speech recognition. In Proceedings of IEEE International Conference on Acoustics Speech and Signal Processing, 2, 733-736.", "links": null }, "BIBREF95": { "ref_id": "b95", "title": "Improved signal-to-noise ratio estimation for speech enhancement", "authors": [ { "first": "C", "middle": [], "last": "Plapous", "suffix": "" }, { "first": "C", "middle": [], "last": "Marro", "suffix": "" }, { "first": "P", "middle": [], "last": "Scalart", "suffix": "" } ], "year": 2006, "venue": "IEEE Transactions on Acoustics Speech and Signal Processing", "volume": "14", "issue": "6", "pages": "2098--2108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Plapous, C., Marro, C., & Scalart, P. (2006). Improved signal-to-noise ratio estimation for speech enhancement. 
IEEE Transactions on Acoustics Speech and Signal Processing, 14(6), 2098-2108.", "links": null }, "BIBREF96": { "ref_id": "b96", "title": "Multiband and adaptation approaches to robust speech recognition", "authors": [ { "first": "S", "middle": [], "last": "Tibrewala", "suffix": "" }, { "first": "H", "middle": [], "last": "Hermansky", "suffix": "" } ], "year": 1997, "venue": "Proceedings of European Conference on Speech Communication and Technology", "volume": "25", "issue": "", "pages": "2619--2622", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tibrewala, S., & Hermansky, H. (1997). Multiband and adaptation approaches to robust speech recognition. In Proceedings of European Conference on Speech Communication and Technology, 25(1-3), 2619-2622.", "links": null }, "BIBREF97": { "ref_id": "b97", "title": "Sub-band Modulation Spectrum Compensation for Robust Speech Recognition", "authors": [ { "first": "W", "middle": [ "H" ], "last": "Tu", "suffix": "" }, { "first": "S", "middle": [ "Y" ], "last": "Huang", "suffix": "" }, { "first": "J", "middle": [ "W" ], "last": "Hung", "suffix": "" } ], "year": 2009, "venue": "Proceedings of IEEE Workshop on Automatic Speech Recognition & Understanding", "volume": "", "issue": "", "pages": "261--265", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tu, W. H., Huang, S. Y., & Hung, J. W. (2009). Sub-band Modulation Spectrum Compensation for Robust Speech Recognition. 
In Proceedings of IEEE Workshop on Automatic Speech Recognition & Understanding, 261-265.", "links": null }, "BIBREF98": { "ref_id": "b98", "title": "Cepstral gain normalization for noise robust speech recognition", "authors": [ { "first": "S", "middle": [], "last": "Yoshizawa", "suffix": "" }, { "first": "N", "middle": [], "last": "Hayasaka", "suffix": "" }, { "first": "N", "middle": [], "last": "Wada", "suffix": "" }, { "first": "Y", "middle": [], "last": "Miyanaga", "suffix": "" } ], "year": 2004, "venue": "Proceedings of IEEE International Conference on Acoustics Speech and Signal Processing", "volume": "1", "issue": "", "pages": "209--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshizawa, S., Hayasaka, N., Wada, N., & Miyanaga, Y. (2004). Cepstral gain normalization for noise robust speech recognition. In Proceedings of IEEE International Conference on Acoustics Speech and Signal Processing, 1, I-209-212.", "links": null }, "BIBREF100": { "ref_id": "b100", "title": "The individuals listed below are reviewers of this journal during the year of 2013. The IJCLCLP Editorial Board extends its gratitude to these volunteers for their important contributions to this publication", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "The individuals listed below are reviewers of this journal during the year of 2013. 
The IJCLCLP Editorial Board extends its gratitude to these volunteers for their important contributions to this publication, to our association, and to the profession.", "links": null }, "BIBREF105": { "ref_id": "b105", "title": "Learning to Find Translations and Transliterations on the Web based on Conditional Random Fields", "authors": [ { "first": "Joseph", "middle": [ "Z" ], "last": "Chang", "suffix": "" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "" }, { "first": "Jyh-Shing Roger", "middle": [], "last": "Jang", "suffix": "" } ], "year": null, "venue": "", "volume": "18", "issue": "", "pages": "19--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, Joseph Z., Jason S. Chang, and Jyh-Shing Roger Jang. Learning to Find Translations and Transliterations on the Web based on Conditional Random Fields; 18(1): 19-46", "links": null }, "BIBREF106": { "ref_id": "b106", "title": "A Definition-based Shared-concept Extraction within Groups of Chinese Synonyms: A Study Utilizing the Extended Chinese Synonym Forest", "authors": [ { "first": "F. Y. August", "middle": [], "last": "Chao", "suffix": "" }, { "first": "Siaw-Fong", "middle": [], "last": "Chung", "suffix": "" } ], "year": null, "venue": "", "volume": "18", "issue": "", "pages": "35--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chao, F. Y. August and Siaw-Fong Chung. 
A Definition-based Shared-concept Extraction within Groups of Chinese Synonyms: A Study Utilizing the Extended Chinese Synonym Forest; 18(2): 35-56", "links": null }, "BIBREF112": { "ref_id": "b112", "title": "HMM-based Mandarin Singing Voice Synthesis Using Tailored Synthesis Units and Question Sets", "authors": [ { "first": "Ju-Yun", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Yi-Chin", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Chung-Hsien", "middle": [], "last": "Wu", "suffix": "" } ], "year": null, "venue": "", "volume": "18", "issue": "", "pages": "63--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cheng, Ju-Yun, Yi-Chin Huang, and Chung-Hsien Wu. HMM-based Mandarin Singing Voice Synthesis Using Tailored Synthesis Units and Question Sets; 18(4): 63-80", "links": null }, "BIBREF117": { "ref_id": "b117", "title": "A Semantic-Based Approach to Noun-Noun Compound Interpretation", "authors": [ { "first": "You-Shan", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "" } ], "year": null, "venue": "", "volume": "18", "issue": "", "pages": "45--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chung, You-Shan and Keh-Jiann Chen. A Semantic-Based Approach to Noun-Noun Compound Interpretation; 18(4): 45-62", "links": null }, "BIBREF118": { "ref_id": "b118", "title": "Activities\uff1a 1. Holding the Republic of China Computational Linguistics Conference (ROCLING) annually. 2. Facilitating and promoting academic research, seminars, training, discussions, comparative evaluations and other activities related to computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Activities\uff1a 1. Holding the Republic of China Computational Linguistics Conference (ROCLING) annually. 2. 
Facilitating and promoting academic research, seminars, training, discussions, comparative evaluations and other activities related to computational linguistics.", "links": null }, "BIBREF119": { "ref_id": "b119", "title": "Collecting information and materials on recent developments in the field of computational linguistics, domestically and internationally", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collecting information and materials on recent developments in the field of computational linguistics, domestically and internationally.", "links": null }, "BIBREF120": { "ref_id": "b120", "title": "Publishing pertinent journals, proceedings and newsletters", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Publishing pertinent journals, proceedings and newsletters.", "links": null }, "BIBREF121": { "ref_id": "b121", "title": "Setting of the Chinese-language technical terminology and symbols related to computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Setting of the Chinese-language technical terminology and symbols related to computational linguistics.", "links": null }, "BIBREF122": { "ref_id": "b122", "title": "Maintaining contact with international computational linguistics academic organizations", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maintaining contact with international computational linguistics academic organizations.", "links": null }, "BIBREF123": { "ref_id": "b123", "title": "Dealing with various other matters related to the development of computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, 
"num": null, "urls": [], "raw_text": "Dealing with various other matters related to the development of computational linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Example session of correcting the sentence, \"I have difficulty to understand English.\"Correcting Serial Grammatical Errors based on N-grams and Syntax 33", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "Generate a phrase table for the statistical machine translation modelsfor each group (Section 3.2.3) Outline of the process used to generate TM.", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "Sample phrase translations for a trigram group", "num": null, "uris": null }, "FIGREF3": { "type_str": "figure", "text": "Sample trigram group accused to VERB ||| accused of VERB-ing ||| 0.47 accused of VERB ||| accused of VERB-ing ||| 0.47", "num": null, "uris": null }, "FIGREF4": { "type_str": "figure", "text": "Sample back-off translations", "num": null, "uris": null }, "FIGREF5": { "type_str": "figure", "text": "Outline of the process used to generate TM bo", "num": null, "uris": null }, "FIGREF6": { "type_str": "figure", "text": "Run statistical machine translation", "num": null, "uris": null }, "FIGREF7": { "type_str": "figure", "text": "Outline of the process used to correct the sentence at run-time", "num": null, "uris": null }, "FIGREF8": { "type_str": "figure", "text": ", N1 and N2 assume parallel roles in situations like \u624b\u8173\u770b\u8d77\u4f86\u5f88\u4e7e\u6de8 Shou-jiao kan-qilai hen ganjing 'Hands and feet look tidy,' \u4fee\u7406\u9418\u9336 xiuli zhong-biao 'repair a clock (watch),' \u8b66\u6c11\u5408\u4f5c\u6253\u64ca\u72af\u7f6a Jing-min hezuo daji fanzui 'The police and the people join hands to fight crime.'ASemantic-Based Approach to Noun-Noun Compound Interpretation", "num": null, "uris": null }, "FIGREF9": { "type_str": "figure", "text": "51", "num": null, "uris": null }, "FIGREF10": { 
"type_str": "figure", "text": "Below are some examples of such mappings. (Frame names have all capital letters, while FEs have only the initial letters as capital letters.) Simple relation (subclass: host-attribute-value) FOOD N1-N2=Material-Food e.g. \u7389\u7c73\u9905 yumi-bing 'corn cake,' \u7da0\u8c46\u7cd5 lu-dou gao 'green beans cake,' \u725b \u8089\u6e6f niu-rou tang 'beef soup,' \u5976\u8336 nai-cha 'milk tea,' \u860b\u679c\u6c41 pingguo-zhi 'apple juice,' \u82b1\u751f\u91ac huasheng-jiang 'peanut butter' CLOTHING N1-N2=Material-Clothing e.g. \u8349\u978b cao-xie 'straw shoes,' \u6728\u978b mu-xie 'wooden shoes,' \u76ae\u978b pi-xie 'leather shoes,' \u81a0\u978b jiao-xie 'plastic shoes,' \u8c79\u76ae\u5e3d bao-pi mao 'leopard-skin hat,' \u6bdb\u8863 mao-yi 'sweater,' \u5e03\u886b bu-shan 'cotton shirt' e.g. \u96d9\u5e95\u8239 shuang-di chuan 'double-bottom,' \u9435\u6bbc\u8239 tie-ke chuan 'iron ship' BUILDING_SUBPARTS N1-N2=Whole-Building_part e.g. \u9662\u7246 yuan-qiang 'yard wall,' \u5c4b\u7c37 wu-yian 'roof' e.g. \u5f13 \u7bad \u624b gong-jian-shou 'archer,' \u6a02 \u5e2b yue-shi 'musician,' \u6c34 \u96fb \u5de5 shui-dian-gong 'utilities technician' MONEY N1-N2=Buyer-Money e.g. 
\u5bb6\u9577\u8cbb jiazhang-fei 'parental fee' N1-N2=Goods-Money \u66f8\u6b3e shu-kuan 'money for buying books,' \u7530\u79df tian-zu 'land rent'", "num": null, "uris": null }, "FIGREF11": { "type_str": "figure", "text": "COMMERCE_BUYLUs: buy.v, purchase_(act).n, purchase.v Core FEs: Buyer, Goods, Seller Non-core FEs (not exhaustively listed): Manner, Means, Money, Purpose, Purpose_of_Goods, etc.", "num": null, "uris": null }, "FIGREF12": { "type_str": "figure", "text": "An example of the read speech and singing voice for the sentence \"\u5929\u6696\u82b1\u958b\u4e0d\u4f5c\u5de5,\"which is uttered and sung by the same person.", "num": null, "uris": null }, "FIGREF13": { "type_str": "figure", "text": "Comparison with generated F0 patterns and F0 patterns in the score", "num": null, "uris": null }, "FIGREF14": { "type_str": "figure", "text": "Comparison between the original singing and the synthesized singing pitch contoursHMM-based Mandarin Singing Voice Synthesis 75 Using Tailored Synthesis Units and Question Sets", "num": null, "uris": null }, "FIGREF15": { "type_str": "figure", "text": "Result of preference test with/without vibrato 76Ju-YunCheng et al.", "num": null, "uris": null }, "FIGREF16": { "type_str": "figure", "text": "\u97f3\u7bc0/song/\u7684\u4e00\u500b\u97f3\u6846\u7684\u4e09\u689d\u983b\u8b5c\u5305\u7d61\u66f2\u7dda", "num": null, "uris": null }, "FIGREF17": { "type_str": "figure", "text": "\u53d6\u539f\u59cb MFCC \u4e4b c 1 \u7279\u5fb5\u5e8f\u5217\u53ca\u5176\u505a LPC \u5206\u6790\u6240\u5f97\u7684\u7dda\u6027\u9810\u4f30\u5e8f\u5217\uff0c\u5728\u6642\u57df\u8207\u529f\u7387\u983b\u8b5c \u57df\u7684\u66f2\u7dda\u5206\u5225\u986f\u793a\u65bc\u5716 2(a)\u8207\u5716 2(b)\u3002\u5f9e\u5716 2(a)\u53ef\u770b\u51fa\uff0c\u4e8c\u8005\u66f2\u7dda\u4e26\u6c92\u6709\u592a\u5927\u5dee\u7570\uff0c\u986f \u793a LPCF \u53ef\u80fd\u7121\u6cd5\u6709\u6548\u6291\u5236 MFCC \u4e2d\u96dc\u8a0a\u9020\u6210\u7684\u5931\u771f\uff0c\u4f46\u5f9e\u5716 
2(b)\u53ef\u770b\u5230\uff0cLPCF \u7d04\u7565 \u5448\u73fe\u5f37\u8abf\u4f4e\u983b\u3001\u6291\u5236\u9ad8\u983b\u7684\u6548\u679c\uff0c\u6b64\u73fe\u8c61\u61c9\u8a72\u5c0d\u65bc\u8a9e\u97f3\u7279\u5fb5\u7684\u5f37\u5065\u6027\u6709\u6240\u5e6b\u52a9\u3002 \u8a0a\u96dc\u6bd4 0 dB \u4e4b\u96dc\u8a0a\u74b0\u5883\u4e0b\uff0cMFCC \u548c\u7d93 LPCF \u8655\u7406\u5f8c\u7684 c 1 \u7279\u5fb5\u4e4b(a)\u5e8f\u5217 \u6ce2\u5f62\u5716 (b)\u529f\u7387\u983b\u8b5c\u5bc6\u5ea6(PSD)\u5716 \u985e\u4f3c\u4e4b\u524d\u7684\u7e6a\u5716\uff0c\u6211\u5011\u53e6\u5916\u6c42\u53d6\u4e86\u4ee5\u4e0b\u5e7e\u7a2e\u8a9e\u97f3\u7279\u5fb5\u5728\u6642\u57df\u8207\u529f\u7387\u983b\u8b5c\u57df\u4e2d\u7684\u6ce2\u5f62 \u5716\uff0c\u5206\u8ff0\u5982\u4e0b\uff1a (1) \u4e7e\u6de8\u74b0\u5883\u4e0b\uff0c\u7d93\u904e CMVN \u9810\u8655\u7406\u5f8c\u7684\u7279\u5fb5\u3001\u53ca\u5176\u518d\u7d93\u904e LPCF \u8655\u7406\u5f8c\u7684\u7279\u5fb5\uff0c\u5176\u6642\u9593 \u5e8f\u5217\u8207 PSD \u5716\uff0c\u7e6a\u65bc\u5716 3(a)\u8207\u5716 3", "num": null, "uris": null }, "FIGREF18": { "type_str": "figure", "text": "\u4e7e\u6de8\u74b0\u5883\u4e0b\uff0cCMVN \u548c\u7d93 LPCF \u8655\u7406\u5f8c\u7684 c 1 \u7279\u5fb5\u4e4b(a)\u5e8f\u5217\u6ce2\u5f62\u5716 (b)\u529f\u7387 \u983b\u8b5c\u5bc6\u5ea6(PSD)\u5716 \u8a0a\u96dc\u6bd4\u70ba 0 dB \u4e4b\u96dc\u8a0a\u74b0\u5883\u4e0b\u7d93\u904e CMVN \u9810\u8655\u7406\u5f8c\u7684\u7279\u5fb5\u3001\u53ca\u5176\u518d\u7d93\u904e LPCF \u8655\u7406\u5f8c \u7684\u7279\u5fb5\uff0c\u5176\u6642\u9593\u5e8f\u5217\u8207 PSD \u5716\uff0c\u7e6a\u65bc\u5716 4(a)\u8207\u5716 4\u8a0a\u96dc\u6bd4 0 dB \u4e4b\u96dc\u8a0a\u74b0\u5883\u4e0b\uff0cCMVN \u548c\u7d93 LPCF \u8655\u7406\u5f8c\u7684 c 1 \u7279\u5fb5\u4e4b(a)\u5e8f\u5217 \u6ce2\u5f62\u5716 (b)\u529f\u7387\u983b\u8b5c\u5bc6\u5ea6(PSD)\u5716 \u4e7e\u6de8\u74b0\u5883\u4e0b\uff0c\u7d93\u904e CHN \u9810\u8655\u7406\u5f8c\u7684\u7279\u5fb5\u3001\u53ca\u5176\u518d\u7d93\u904e LPCF 
\u8655\u7406\u5f8c\u7684\u7279\u5fb5\uff0c\u5176\u6642\u9593\u5e8f \u5217\u8207 PSD \u5716\uff0c\u7e6a\u65bc\u5716 5(a)\u8207\u5716 5\u4e7e\u6de8\u74b0\u5883\u4e0b\uff0c CHN \u548c\u7d93 LPCF \u8655\u7406\u5f8c\u7684\u7684 c 1 \u7279\u5fb5\u4e4b(a)\u5e8f\u5217\u6ce2\u5f62\u5716 (b)\u529f\u7387 \u983b\u8b5c\u5bc6\u5ea6(PSD)\u5716 \u8a0a\u96dc\u6bd4\u70ba 0 dB \u4e4b\u96dc\u8a0a\u74b0\u5883\u4e0b\u7d93\u904e CHN \u9810\u8655\u7406\u5f8c\u7684\u7279\u5fb5\u3001\u53ca\u5176\u518d\u7d93\u904e LPCF \u8655\u7406\u5f8c\u7684 \u7279\u5fb5\uff0c\u5176\u6642\u9593\u5e8f\u5217\u8207 PSD \u5716\uff0c\u7e6a\u65bc\u5716 6(a)\u8207\u5716 6(b)\u3002 \u5f9e\u9019\u4e9b\u5716\uff0c\u6211\u5011\u53ef\u4ee5\u770b\u51fa\u4ee5\u4e0b\u7684\u5e7e\u500b\u73fe\u8c61\uff1a (1) \u7121\u8ad6\u662f\u4e7e\u6de8\u6216\u96dc\u8a0a\u5e72\u64fe\u7684\u74b0\u5883\u4e0b\uff0c\u539f\u7279\u5fb5\u5e8f\u5217\u8207 LPCF \u8655\u7406\u5f8c\u7684\u5e8f\u5217\u5728\u6642\u57df\u4e0a\u4e0b\u8d77\u4f0f \u7684\u6ce2\u5f62\u5341\u5206\u985e\u4f3c\uff0c\u9019\u986f\u793a\u4e86 LPCF \u4e0d\u6703\u660e\u986f\u6539\u8b8a\u539f\u59cb\u7279\u5fb5\u5e8f\u5217\u7684\u76f8\u4f4d(phase) \u3002 \u8a0a\u96dc\u6bd4 0 dB \u4e4b\u96dc\u8a0a\u74b0\u5883\u4e0b\uff0cCHN \u548c\u7d93 LPCF \u8655\u7406\u5f8c\u7684 c 1 \u7279\u5fb5\u4e4b(a)\u5e8f\u5217\u6ce2 \u5f62\u5716 (b)\u529f\u7387\u983b\u8b5c\u5bc6\u5ea6(PSD)\u5716 \u6700\u5f8c\uff0c\u6211\u5011\u518d\u5229\u7528 Aurora-2 \u7684 Set A \u5176\u4e0d\u540c\u96dc\u8a0a\u7a0b\u5ea6\u5f71\u97ff\u4e0b\u8a9e\u97f3\u7279\u5fb5 1 c \u5e8f\u5217\u7684 PSD \u4e4b\u5e73\u5747\u7684\u6bd4\u8f03\uff0c\u4f86\u89c0\u5bdf LPCF \u6cd5\u6240\u80fd\u9054\u5230\u7684\u6548\u679c\u3002\u5716 7(a)\u8207\u5716 7(b)\u5206\u5225\u70ba\u300cMFCC \u8655\u7406\u300d \u8207\u300cMFCC \u52a0\u4e0a LPCF \u8655\u7406\u300d\u5f8c\u5728\u4e09\u7a2e\u8a0a\u96dc\u6bd4\u74b0\u5883\u7684\u8a9e\u97f3\u7279\u5fb5 c 1 \u7684 PSD \u5e73\u5747\u4e4b\u66f2\u7dda\uff0c\u5716 8(a)\u8207\u5716 8(b)\u5206\u5225\u70ba\u300cCMVN 
\u8655\u7406\u300d\u8207\u300cCMVN \u52a0\u4e0a LPCF \u8655\u7406\u300d\u5f8c\u5728\u4e09\u7a2e\u8a0a\u96dc\u6bd4\u74b0\u5883 \u7684\u8a9e\u97f3\u7279\u5fb5 c 1 \u7684 PSD \u5e73\u5747\u4e4b\u66f2\u7dda\uff0c\u5716 9(a)\u8207\u5716 9(b)\u5206\u5225\u70ba\u300cCHN \u8655\u7406\u300d\u8207\u300cCHN \u52a0\u4e0a LPCF \u8655\u7406\u300d\u5f8c\u5728\u4e09\u7a2e\u8a0a\u96dc\u6bd4\u74b0\u5883\u7684\u8a9e\u97f3\u7279\u5fb5 c 1 \u7684 PSD \u5e73\u5747\u4e4b\u66f2\u7dda\uff0c\u5f9e\u9019\u4e9b\u5716\u5f62\u53ef\u4ee5\u770b \u51fa\uff0c\u7576\u52a0\u5165 LPCF \u8655\u7406\u5f8c\uff0c\u7279\u5225\u5c0d\u65bc CMVN \u8207 CHN \u6cd5\u9810\u8655\u7406\u7684\u7279\u5fb5\u800c\u8a00\uff0c\u5404\u8a0a\u96dc\u6bd4\u74b0 \u5883\u4e0b\u7684 PSD \u5e73\u5747\u66f2\u7dda\u90fd\u80fd\u8f03\u70ba\u63a5\u8fd1\uff0c\u4ee3\u8868\u4e86 LPCF \u8207 CMVN \u53ca CHN \u6709\u826f\u597d\u7684\u52a0\u6210\u6027\uff0c \u53ef\u4ee5\u9032\u4e00\u6b65\u964d\u4f4e CMVN \u6216 CHN \u9810\u8655\u7406\u5f8c\u5269\u9918\u7684\u5931\u771f\uff0c\u9032\u800c\u4f7f\u4e0d\u540c\u8a0a\u96dc\u6bd4\u4e0b\u7684\u7279\u5fb5\u7279\u6027 \u66f4\u70ba\u5339\u914d\uff0c\u552f\u6709 MFCC \u52a0\u5165 LPCF \u8655\u7406\u5f8c\uff0c\u4e26\u672a\u5341\u5206\u660e\u986f\u7684\u4f7f\u7279\u5fb5\u66f4\u70ba\u5339\u914d\u3002\u986f\u793a LPCF \u6cd5\u4e0d\u592a\u9069\u5408\u4f5c\u5728\u539f\u59cb MFCC \u7279\u5fb5\u4e0a\u3002 \u4e0d\u540c\u8a0a\u96dc\u6bd4\u4e0b\uff0cSet A \u4e4b 1001 \u53e5\u7684 MFCC \u5176(a)\u539f\u59cb c 1 \u7279\u5fb5\u5e8f\u5217\u8207(b)\u7d93 LPCF \u8655\u7406\u5f8c\u4e4b c 1 \u7279\u5fb5\u5e8f\u5217 \u7684\u5e73\u5747\u529f\u7387\u983b\u8b5c\u5bc6\u5ea6 (a) (b) \u5716 8. \u4e0d\u540c\u8a0a\u96dc\u6bd4\u4e0b\uff0cSet A \u4e4b 1001 \u53e5\u4e4b(a)\u7d93 CMVN \u8655\u7406\u5f8c\u4e4b c 1 \u7279\u5fb5\u5e8f\u5217 (b) \u7d93 CMVN+LPCF \u8655\u7406\u5f8c\u4e4b c 1 \u7279\u5fb5\u5e8f\u5217 \u7684\u5e73\u5747\u529f\u7387\u983b\u8b5c\u5bc6\u5ea6 (a) (b) \u5716 9. 
\u4e0d\u540c\u8a0a\u96dc\u6bd4\u4e0b\uff0cSet A \u4e4b 1001 \u53e5\u4e4b(a)\u7d93 CHN \u8655\u7406\u5f8c\u4e4b c 1 \u7279\u5fb5\u5e8f\u5217 (b) \u7d93 CHN+LPCF \u8655\u7406\u5f8c\u4e4b c 1 \u7279\u5fb5\u5e8f\u5217 \u7684\u5e73\u5747\u529f\u7387\u983b\u8b5c\u5bc6\u5ea6", "num": null, "uris": null }, "TABREF0": { "num": null, "content": "
\u860a\u6db5\u53e5\u578b\u5206\u6790\u65bc\u6539\u9032\u4e2d\u6587\u6587\u5b57\u860a\u6db5\u8b58\u5225\u7cfb\u7d71 \u694a\u5584\u9806 \u7b49
\u601d\u4e92\u76f8\u885d\u7a81\uff0c\u9019\u6a23\u7684\u60c5\u6cc1\u6211\u5011\u5c31\u7a31\u4e4b\u70ba\u77db\u76fe\u860a\u6db5\u3002\u6216\u662f\u5169\u500b\u53e5\u5b50\u672c\u8eab\u5305\u6db5\u7684\u8cc7\u8a0a\u6beb\u7121\u95dc
\u4fc2\u9019\u6a23\u7684\u60c5\u6cc1\u6211\u5011\u5c31\u7a31\u4e4b\u70ba\u7368\u7acb\u860a\u6db5\uff0c\u85c9\u7531\u4e0a\u8ff0\u7684\u56db\u7a2e\u860a\u6db5\u95dc\u4fc2\u5c07\u53e5\u5b50\u4e4b\u9593\u7684\u860a\u6db5\u95dc\u4fc2
\u7d30\u5206\uff0c\u4f7f\u5f97\u6587\u5b57\u860a\u6db5\u7cfb\u8b58\u5225\u7684\u7814\u7a76\u66f4\u6709\u5176\u610f\u7fa9\u3002
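The BC/MC split just described uses the four MC codes of \u8868 1 (F forward, B bidirectional, C contradiction, I independent). Those labels collapse deterministically onto the binary BC task; a minimal Python sketch of that mapping, assuming only that F and B count as entailment ("Y") while C and I do not ("N"):

```python
# RITE-2 multi-class (MC) labels, codes as used in Table 1:
# F = forward entailment, B = bidirectional, C = contradiction, I = independent.
MC_TO_BC = {"F": "Y", "B": "Y", "C": "N", "I": "N"}

def bc_label(mc_label):
    # The BC task only asks whether T1 entails T2, so F and B both map to "Y".
    return MC_TO_BC[mc_label]
```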
\u672c\u7bc7\u91cd\u9ede\u5728\u8655\u7406\u7c21\u9ad4\u4e2d\u6587\u8207\u7e41\u9ad4\u4e2d\u6587\u65b9\u9762\u7684\u6587\u5b57\u860a\u6db5\uff0c\u4f7f\u7528 NTCIR-10 RITE-2 \u6240\u63d0
\u4f9b\u7684\u8a13\u7df4\u8cc7\u6599\uff0c\u57fa\u65bc\u6a5f\u5668\u5b78\u7fd2\u65b9\u6cd5 SVM \u5efa\u7acb\u4e00\u500b\u4e2d\u6587\u6587\u5b57\u860a\u6db5\u7cfb\u7d71\u3002\u7cfb\u7d71\u4e00\u958b\u59cb\u5148
\u5c07\u8f38\u5165\u7684\u6587\u53e5\u5c0d\u8cc7\u6599\u9032\u884c\u9810\u8655\u7406\uff0c\u7531\u65bc\u8655\u7406\u7684\u662f\u4e2d\u6587\u8cc7\u6599\uff0c\u5fc5\u9808\u5148\u9032\u884c\u65b7\u8a5e\u4ee5\u4fbf\u63a5\u4e0b\u4f86\u7684\u5de5
\u4f5c\uff0c\u4f7f\u7528\u554f\u984c\u985e\u578b\u5206\u985e\u5c07\u53ef\u4ee5\u500b\u5225\u8655\u7406\u7684\u985e\u578b\u62bd\u51fa\u7279\u5225\u8655\u7406\uff0c\u63a5\u8457\u53e5\u5b50\u5c0d\u7d93\u7531\u7279\u5fb5\u64f7\u53d6
\u7684\u5b50\u7cfb\u7d71\u53d6\u5f97\u5404\u9805\u7279\u5fb5\u503c\uff0c\u6700\u5f8c\u5c07\u6240\u53d6\u5f97\u7684\u7279\u5fb5\u503c\u4f7f\u7528 SVM \u5206\u985e\u8655\u7406\u3002
\u672c\u7bc7\u63a5\u4e0b\u4f86\u7ae0\u7bc0\u5982\u4e0b\uff0c\u5728\u7b2c\u4e8c\u6bb5\u4ecb\u7d39\u904e\u53bb\u7814\u7a76\u65b9\u6cd5\uff0c\u7b2c\u4e09\u6bb5\u5c07\u4ecb\u7d39\u7cfb\u7d71\u67b6\u69cb\u8207\u9810\u8655
\u7406\u90e8\u5206\u4ee5\u53ca\u7cfb\u7d71\u4f7f\u7528\u5230\u7684\u7279\u5fb5\u503c\u8ddf\u6211\u5011\u6240\u89c0\u5bdf\u5230\u7279\u6b8a\u985e\u578b\u554f\u984c\u5206\u6790\uff0c\u7b2c\u56db\u6bb5\u662f\u5be6\u9a57\u7d50\u679c
\u8207\u8a0e\u8ad6\uff0c\u6700\u5f8c\u662f\u7d50\u8ad6\u8207\u672a\u4f86\u5de5\u4f5c\u3002
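The processing flow outlined above (preprocessing and word segmentation, question-type routing, per-subsystem feature extraction, SVM classification) can be sketched as follows. This is only an illustrative stand-in, not the authors' implementation: character-level splitting replaces a real Chinese word segmenter, and a simple overlap threshold replaces the trained SVM.

```python
def segment(sentence):
    # Assumption: character-level split stands in for a real Chinese word segmenter.
    return list(sentence)

def extract_features(t1, t2):
    # Placeholder features: one lexical-overlap ratio and one length difference.
    s1, s2 = set(segment(t1)), set(segment(t2))
    overlap = len(s1 & s2) / (len(s1 | s2) or 1)
    return [overlap, len(t1) - len(t2)]

def classify(features, threshold=0.5):
    # Placeholder decision rule standing in for the trained SVM:
    # high lexical overlap -> entailment ("Y"), otherwise "N".
    return "Y" if features[0] >= threshold else "N"

def recognize(t1, t2):
    # End-to-end flow: segment -> extract features -> classify.
    return classify(extract_features(t1, t2))
```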
\u8868 1. \u5404\u7a2e\u860a\u6db5\u95dc\u4fc2\u4f8b\u53e5
\u985e\u578b\u4f8b\u53e5
t1\uff1a\u7570\u4f4d\u6027\u76ae\u819a\u708e\u597d\u767c\u65bc\u5177\u904e\u654f\u9ad4\u8cea\u7684\u5b30\u5e7c\u5152\u3001\u5152\u7ae5\u53ca\u9752\u5c11\u5e74\uff0c\u5e38\u898b\u75c7\u72c0
\u6b63\u5411\u860a\u6db5\u662f\u81c9\u3001\u9838\u3001\u624b\u8098\u7aa9\u3001\u819d\u7aa9\u3001\u6216\u56db\u80a2\u80cc\u5074\u7b49\u90e8\u4f4d\u51fa\u73fe\u6414\u7662\u7d05\u75b9\u3001\u76ae\u819a\u8b8a \u539a\u3001\u8b8a\u7c97\u7cd9\u3002
\u5920\u6e96\u78ba\u7684\u63a8\u65b7\u9019\u5169\u53e5\u5b50\u4e4b\u9593\u7684\u860a\u6db5\u95dc\u4fc2\u3002\u56e0\u6b64\u6587\u5b57\u860a\u6db5\u53ef\u4ee5\u61c9\u7528\u5728\u81ea\u7136\u8a9e\u8a00\u8655\u7406\u5176\u4ed6\u9818 (F) t2\uff1a\u7570\u4f4d\u6027\u76ae\u819a\u708e\u7522\u751f\u7684\u767c\u7d05\u76ae\u75b9\u597d\u767c\u65bc\u81c9\u9830\u9838\u5074\u3001\u624b\u8098\u7aa9\u6216\u819d\u84cb\u7b49\u5f4e\u66f2 \u57df\u7814\u7a76\u4e2d\uff0c\u4f8b\u5982\u554f\u7b54\u7cfb\u7d71\u3001\u8cc7\u8a0a\u62bd\u53d6\u3001\u8cc7\u8a0a\u6aa2\u7d22\u3001\u6a5f\u5668\u7ffb\u8b6f(Dagan & Glickman, 2004 ; Ou & Yao, 2010)\u7b49\u7b49\u3002\u6587\u5b57\u860a\u6db5\u6700\u57fa\u672c\u7684\u65b9\u6cd5\u5c31\u85c9\u7531\u53e5\u5b50\u5b57\u9762\u4e0a\u7684\u8cc7\u8a0a\u4f8b\u5982\u8a9e\u610f\u3001\u53e5\u6cd5(Hua \u90e8\u4f4d\u3002
& Dinga, 2011)\u7b49\u7b49\u5b57\u9762\u4e0a\u7684\u76f8\u4f3c\u6027\u9032\u800c\u63a8\u65b7\u53e5\u5b50\u662f\u5426\u6709\u8457\u860a\u6db5\u95dc\u4fc2\u3002\u56e0\u6b64\u5229\u7528\u9019\u500b\u7279\u6027\uff0c \u96d9\u5411\u860a\u6db5 t1\uff1a\u6d0b\u57fa\u7403\u5718\u4fdd\u8b77\u9078\u624b\u7684\u7acb\u610f\u751a\u7be4\uff0c\u8981\u6c42\u300c\u53ea\u6295\u4e00\u5834\u4e14\u4e0d\u8d85\u904e\u4e00\u767e\u7403\u300d\u3002
\u6587\u5b57\u860a\u6db5\u6709\u52a9\u65bc\u554f\u7b54\u7cfb\u7d71\u627e\u5230\u8cc7\u6599\u5eab\u4e2d\u8207\u8f38\u5165\u554f\u53e5\u6700\u76f8\u8fd1\u7684\u554f\u53e5\u9032\u800c\u56de\u61c9\u6700\u9069\u7576\u56de\u7b54\u3002 (B) t2\uff1a\u6d0b\u57fa\u7403\u968a\u958b\u51fa\u300c\u53ea\u6295\u4e00\u5834\u4e14\u4e0d\u8d85\u904e\u4e00\u767e\u7403\u300d\u7684\u4fdd\u8b77\u689d\u4ef6\u3002
\u4ee5\u8cc7\u8a0a\u6aa2\u7d22\u4f86\u8aaa\u6aa2\u7d22\u8a5e\u7684\u597d\u58de\u5c0d\u8cc7\u8a0a\u6aa2\u7d22\u6709\u8457\u5f88\u5927\u5f71\u97ff\uff0c\u85c9\u7531\u6587\u5b57\u860a\u6db5\u627e\u5230\u8207\u6aa2\u7d22\u8a5e\u76f8 \u95dc\u7684\u5b57\u8a5e(\u4f8b\u5982\u540c\u7fa9\u8a5e)\u52a0\u5165\u6aa2\u7d22\u689d\u4ef6\u9019\u6a23\u53ef\u4ee5\u8b93\u4f7f\u7528\u8005\u66f4\u5bb9\u6613\u627e\u5230\u4f7f\u7528\u8005\u6240\u9700\u8981\u7684\u8cc7 t1\uff1a\u5370\u5c3c\u8607\u9580\u7b54\u81d8\u897f\u5cb8\u5916\u6d77\u767c\u751f\u82ae\u6c0f\u898f\u6a21\u4e5d\u7684\u5f37\u9707\uff0c\u4e3b\u9707\u8207\u9918\u9707\u6240\u5f15\u767c\u7684 \u77db\u76fe\u860a\u6db5 \u6d77\u562f\u5e2d\u6372\u5357\u4e9e\u8af8\u570b\u7684\u6d77\u5cb8\u3002
\u8a0a\u3002(C)t2\uff1a\u5370\u5c3c\u8607\u9580\u7b54\u81d8\u5317\u90e8\u5916\u6d77\u5eff\u516b\u65e5\u6df1\u591c\u767c\u751f\u82ae\u6c0f\u898f\u6a21\u516b\u9ede\u4e03\u7684\u5f37\u9707\u3002
\u7368\u7acb\u860a\u6db5 (I) 2. \u904e\u53bb\u7814\u7a76 t1\uff1a\u4e2d\u7814\u9662\u57fa\u56e0\u9ad4\u4e2d\u5fc3\u6b63\u548c\u4f55\u5927\u4e00\u5408\u4f5c\uff0c\u7814\u767c\u65b0\u4e00\u4ee3\u7684\u79bd\u6d41\u611f\u57fa\u56e0\u75ab\u82d7\u3002 t2\uff1a\u4e2d\u7814\u9662\u9662\u58eb\u4f55\u5927\u4e00\u6700\u8fd1\u6b63\u5728\u7814\u767c\u79bd\u6d41\u611f\u57fa\u56e0\u75ab\u82d7\u3002 \u4e4b\u524d\u7684\u7814\u7a76\u6587\u737b\u4e2d\u6709\u8a31\u591a\u4e0d\u540c\u7684\u65b9\u6cd5\u61c9\u7528\u5728\u82f1\u6587\u6587\u5b57\u860a\u6db5\u8b58\u5225\uff0c\u4f8b\u5982\u5b9a\u7406\u8b49\u660e\u6216\u4f7f\u7528 WordNet \u7b49\u7b49\u4e0d\u540c\u7684\u8a5e\u610f\u8a9e\u6599\u8cc7\u6e90\u3002\u5728\u4e2d\u6587\u6587\u5b57\u860a\u6db5\u65b9\u9762(Wu et al., 2011)\u7b49\u4eba\u53c3\u8003\u5176\u4ed6 \u8a9e\u8a00\u7684\u65b9\u6cd5\u63d0\u51fa\u4e00\u500b\u57fa\u790e\u6a5f\u5668\u5b78\u7fd2\u5229\u7528\u6a5f\u5668\u7ffb\u8b6f\u6548\u80fd\u8a55\u4f30\u7684 BLEU \u5206\u6578\u53ca\u53e5\u5b50\u9577\u5ea6\u505a\u70ba \u7279\u5fb5\u4ee5\u53ca\u53e5\u5b50\u9577\u5ea6\u505a\u70ba\u7279\u5fb5\u4f86\u8a13\u7df4\u5206\u985e\u5668\uff0c\u5efa\u7acb\u57fa\u790e\u4e2d\u6587\u6587\u5b57\u860a\u6db5\u8b58\u5225\u7cfb\u7d71\uff0c(Zhang et \u76ee\u524d\u6587\u5b57\u860a\u6db5\u7684\u7814\u7a76\u5206\u6210\u5169\u7a2e\u5c64\u9762\uff0c\u9996\u5148\u5169\u985e(\u6211\u5011\u5c31\u7a31\u70ba\u96d9\u5411\u860a\u6db5\u95dc\u4fc2\u3002\u5047\u8a2d\u53e5\u5b50\u5c0d\u4e4b\u9593\u6c92\u6709\u860a\u6db5\u95dc\u4fc2\uff0c\u6211\u5011\u53ef\u4ee5\u5f88\u5408\u7406\u8a8d\u70ba\u5169\u500b\u53e5 al., 2011)\u7b49\u4eba\u63d0\u51fa\u52a0\u5165\u8a9e\u610f\u76f8\u95dc\u8cc7\u8a0a\u4f5c\u70ba\u7279\u5fb5\u8655\u7406\uff0c\u85c9\u7531\u4e0a\u4e0b\u4f4d\u8a5e\u3001\u540c\u7fa9\u8a5e\u8207\u53cd\u7fa9\u8a5e\u7b49
\u5b50\u6240\u8868\u9054\u7684\u610f\u601d\u4e0d\u76f8\u540c\uff0c\u4f46\u9019\u4e26\u4e0d\u5b8c\u5168\u6b63\u78ba\u7684\u60f3\u6cd5\u3002\u5982\u540c\u8868 1 \u77db\u76fe\u4f8b\u53e5\u4e00\u6a23\u53ef\u80fd\u5169\u500b\u53e5 \u8cc7\u8a0a\u4f86\u9032\u884c\u7684\u63a8\u8ad6\u4ee5\u53ca\u4f7f\u7528\u591a\u500b\u6a5f\u5668\u5b78\u7fd2\u7684\u65b9\u6cd5\uff0c\u6700\u5f8c\u4f7f\u7528\u6295\u7968\u6a5f\u5236\u9078\u51fa\u6700\u5408\u9069\u860a\u6db5\u95dc
\u5b50\u6240\u5305\u6db5\u7684\u8cc7\u8a0a\u5927\u81f4\u76f8\u540c\u53ea\u662f\u5c11\u90e8\u4efd\u8cc7\u8a0a\u4f8b\u5982:\u662f\u8207\u4e0d\u662f\u6216\u662f\u6642\u9593\u9ede\u4e0d\u540c\u9020\u6210\u53e5\u5b50\u7684\u610f \u4fc2\u63d0\u9ad8\u7cfb\u7d71\u6e96\u78ba\u7387\u3002
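The surface features that such baseline classifiers rely on (unigram recall/precision/F-measure over the two sentences, character- and term-level length differences, and a word-alignment log-probability score) might be computed as below. The alignment table `align_prob` is a hypothetical `{(w1, w2): prob}` dictionary standing in for probabilities read from GIZA++ output.

```python
from math import log

def unigram_stats(t1_terms, t2_terms):
    # Unigram recall / precision / F-measure over word overlap.
    s1, s2 = set(t1_terms), set(t2_terms)
    overlap = len(s1 & s2)
    recall = overlap / len(s2) if s2 else 0.0      # assumption: recall measured against T2
    precision = overlap / len(s1) if s1 else 0.0
    denom = precision + recall
    f = 2 * precision * recall / denom if denom else 0.0
    return recall, precision, f

def length_features(t1, t2, t1_terms, t2_terms):
    # Signed and absolute length differences, in characters and in terms.
    d_char = len(t1) - len(t2)
    d_term = len(t1_terms) - len(t2_terms)
    return d_char, abs(d_char), d_term, abs(d_term)

def alignment_feature(t1_terms, t2_terms, align_prob):
    # p_n = (1/n) * log( prod_i max_j p(t1_i | t2_j) ): for each word of T1 take its
    # best alignment probability against T2, multiply, take the log, normalize by n.
    if not t1_terms or not t2_terms:
        return 0.0
    total = 0.0
    for w1 in t1_terms:
        best = max(align_prob.get((w1, w2), 1e-9) for w2 in t2_terms)
        total += log(best)
    return total / len(t1_terms)
```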
", "html": null, "type_str": "table", "text": "\u7dd2\u8ad6 \u6587\u5b57\u860a\u6db5(Textual Entailment, TE)(Dagan et al., 2006)\u662f\u81ea\u7136\u8a9e\u8a00\u8655\u7406(Natural Language Processing, NLP)\u6700\u8fd1\u8208\u8d77\u7814\u7a76\u8b70\u984c\uff0c\u6587\u5b57\u860a\u6db5\u8b58\u5225\u76ee\u6a19\u70ba\u7d66\u5b9a\u4e00\u500b\u53e5\u5b50\u5c0d(T1,T2)\u7cfb\u7d71\u80fd Binary Class, BC)\u7684\u4efb\u52d9\u7684\u76ee\u6a19\u662f\u55ae \u7d14\u5224\u5225 T1 \u8207 T2 \u4e4b\u9593\u662f\u5426\u6709\u860a\u6db5\u95dc\u4fc2\uff0c\u4f46\u53e5\u5b50\u4e4b\u9593\u860a\u6db5\u95dc\u4fc2\u4e26\u4e0d\u80fd\u55ae\u7d14\u4ee5\u6709\u6216\u6c92\u6709\u9019\u9ebc \u7c21\u55ae\u5c31\u5340\u5206\u958b\uff0c\u56e0\u6b64\u70ba\u4e86\u8868\u793a\u4e0d\u540c\u60c5\u6cc1\u4e0b\u7684\u53e5\u5b50\u4e4b\u9593\u860a\u6db5\u95dc\u4fc2\uff0cNTCIR RITE \u53e6\u5916\u5b9a\u7fa9 \u591a\u985e(Multi Class, MC)\u9019\u9805\u4efb\u52d9\u5c07\u53e5\u5b50\u4e4b\u9593\u7684\u860a\u6db5\u4f5c\u66f4\u70ba\u660e\u78ba\u7684\u5206\u985e\u3002\u5047\u8a2d\u9019\u500b\u53e5\u5b50\u5c0d\u5177 \u6709\u860a\u6db5\u95dc\u4fc2\uff0c\u6211\u5011\u53ef\u4ee5\u5f88\u5408\u7406\u8a8d\u70ba\u9019\u5169\u500b\u53e5\u5b50\u6240\u8868\u9054\u662f\u76f8\u540c\u7684\u610f\u601d\uff0c\u4f46\u6709\u53ef\u80fd\u5169\u500b\u53e5\u5b50 \u5982\u8868 1 \u4e2d\u6b63\u5411\u860a\u6db5\u7684\u4f8b\u53e5\u4e00\u6a23\u5169\u500b\u53e5\u5b50\u6240\u5305\u6db5\u7684\u8cc7\u8a0a\u6578\u91cf\u4e0d\u540c\uff0c\u9020\u6210\u6211\u5011\u53ef\u4ee5\u5f9e T1 \u53e5 \u5b50\u53ef\u4ee5\u63a8\u8ad6\u51fa T2 \u53e5\u5b50\u7684\u5b8c\u6574\u7684\u610f\u601d\uff0c\u4f46\u662f\u4e0d\u80fd\u5f9e T2 \u63a8\u8ad6\u51fa T1 \u53e5\u5b50\u5b8c\u6574\u7684\u610f\u601d\uff0c\u9019\u6a23 \u60c5\u6cc1\u6211\u5011\u5c31\u7a31\u6b63\u5411\u860a\u6db5\u3002\u53cd\u4e4b\u5982\u8868 1 \u4e2d\u96d9\u5411\u860a\u6db5\u7684\u4f8b\u53e5\u4e00\u6a23 T1 \u53e5\u5b50\u53ef\u4ee5\u63a8\u8ad6\u51fa T2 \u53e5\u5b50 \u7684\u542b\u610f\uff0cT2 
\u4e5f\u53ef\u4ee5\u63a8\u8ad6\u51fa T1 \u53e5\u5b50\u5b8c\u6574\u7684\u610f\u601d\uff0c\u5169\u500b\u53e5\u5b50\u4e4b\u9593\u53ef\u4ee5\u76f8\u4e92\u63a8\u8ad6\u9019\u6a23\u7684\u60c5\u6cc1 Yang et al., 2012)\u7b49\u4eba\u63d0\u51fa\u4f7f\u7528\u55ae\u8a9e\u8a00\u6a5f\u5668\u7ffb\u8b6f\u7684\u65b9\u6cd5\uff0c\u85c9\u7531 GIZA++\u4e2d\u5b57\u8a5e\u5c0d \u9f4a\u7684\u5339\u914d\u503c\u9032\u884c\u8a08\u7b97\u51fa\u53e5\u5b50\u4e4b\u9593\u76f8\u4f3c\u5ea6\u4f5c\u70ba\u65b0\u7279\u5fb5\u4f7f\u7528\u4ee5\u6709\u6548\u63d0\u5347\u6b63\u78ba\u7387\uff0c\u4f46\u9019\u500b\u65b9\u6cd5 \u5728\u8655\u7406\u5169\u985e\u4e2d\u6548\u679c\u8f03\u597d\u518d\u8655\u7406\u591a\u985e\u860a\u6db5\u95dc\u4fc2\u6642\u6548\u679c\u4e0d\u5982\u5169\u985e\uff0c\u4e4b\u5f8c" }, "TABREF2": { "num": null, "content": "
\u8f38\u5165\u53e5\u5b50\u4e4b\u9593\u7684\u5339\u914d\u5f97\u5206\uff0c\u5728\u4f7f\u7528\u4e0b\u5217\u516c\u5f0f\u8a08\u7b97\u6700\u5f8c\u61c9\u7528\u5728 SVM \u4e0a\u7684\u7279\u5fb5\u503c\uff1a \u8868 4. \u5bb9\u6613\u8aa4\u5224\u554f\u984c\u985e\u578b\u4f8b\u5b50 4.3 \u6578\u5b57\u8cc7\u8a0a\u4e0d\u4e00\u81f4 \u6790\u53d6\u5f97\u53e5\u5b50\u7684\u5256\u6790\u6a39\u61c9\u7528\u7de8\u8f2f\u8ddd\u96e2\u7684\u65b9\u6cd5\u4f86\u53d6\u5f97\u53e5\u5b50\u91cd\u8981\u8cc7\u8a0a\u662f\u5426\u6709\u6240\u6539\u8b8a\u6216\u76f8\u540c\u4f86\u8a8d \u5169\u53e5\u7531\u65bc\u53e5\u5b50\u8abf\u63db\u9020\u6210\u4e3b\u8a5e\u8207\u53d7\u8a5e\u95dc\u4fc2\u6539\u8b8a\u8868\u9054\u7684\u610f\u601d\u4e5f\u6709\u6240\u6539\u8b8a\u3002 5.2.1 \u6539\u9032\u5be6\u9a57\u4e00
\u4f8b\u53e5 \u8a13\u7df4\u8cc7\u6599\u4e2d\u6709\u4e9b\u53e5\u5b50\u4e2d\u542b\u6709\u6578\u5b57\u7684\u8cc7\u8a0a\uff0c\u7136\u800c\u9019\u4e9b\u6578\u5b57\u8cc7\u8a0a\u5c0d\u53e5\u5b50\u7684\u542b\u610f\u6709\u5f88\u5927\u7684\u5f71\u97ff\uff0c \u7de8\u865f \u5b9a\u53e5\u5b50\u662f\u5426\u70ba\u7e2e\u6e1b\u95dc\u4fc2\u3002 T1 \u99ae\u5c0f\u525b\u8207\u548c\u9673\u51f1\u6b4c\uff0c\u5435\u7684\u6c92\u5b8c\u4e86\u6c92\u4e86 \u6211\u5011\u7cfb\u7d71\u7c21\u9ad4\u4e2d\u6587\u8207\u7e41\u9ad4\u4e2d\u6587\u4f7f\u7528\u76f8\u540c\u7684\u5be6\u9a57\u6d41\u7a0b\u4f46\u662f\u5982\u8868 7 \u548c\u8868 8 \u6240\u793a\uff0c\u5be6\u9a57\u7d50\u679c\u6709 p_n = (1/n) log \u03a0_i max_j p(t1_i | t2_j) (1) \u516c\u5f0f\u4e2d t1_i \u8207 t2_j \u5206\u5225\u4ee3\u8868 T1 \u8207 T2 \u53e5\u5b50\u4e2d\u5404\u5b57\u8a5e\uff0c p(t1_i | t2_j) \u4ee3\u8868 t1_i \u8207 t2_j \u5b57\u8a5e\u5c0d\u9f4a\u7684\u6a5f Case1 T1: \u6d41\u611f\u75c5\u6bd2\u53ef\u5728\u4eba\u4f53\u5916\u5b58\u6d3b\u4e09\u5230\u516d\u5c0f\u65f6 T2: \u51a0\u72b6\u75c5\u6bd2\u901a\u5e38\u53ef\u5728\u4eba\u4f53\u5916\u5b58\u6d3b\u4e8c\u5230\u4e09\u5c0f\u65f6 \u56e0\u6b64\u6211\u5011\u8a8d\u70ba\u61c9\u8a72\u5c07\u9019\u4e9b\u53e5\u5b50\u6311\u9078\u51fa\u4f86\u91dd\u5c0d\u53e5\u5b50\u4e2d\u7684\u6578\u5b57\u90e8\u4efd\u6bd4\u8f03\uff0c\u5c0d\u9019\u7a2e\u985e\u578b\u7684\u53e5\u5b50 \u5728\u4e4b\u524d\u7cfb\u7d71\u524d\u8655\u7406\u90e8\u5206\u6703\u5c0d\u53e5\u5b50\u4e2d\u7684\u6578\u5b57\u683c\u5f0f\u505a\u6b63\u898f\u5316\uff0c\u91dd\u5c0d\u6578\u5b57\u90e8\u5206\u9032\u884c\u6bd4\u5c0d\u5c31\u53ef\u4ee5 \u8655\u7406\u9019\u985e\u5927\u90e8\u5206\u7684\u53e5\u5b50\u3002 4.10 \u908f\u8f2f\u63a8\u7406 \u6211\u5011\u89c0\u5bdf\u8a13\u7df4\u8cc7\u6599\u6642\u767c\u73fe\u6709\u4e9b\u53e5\u5b50\u53ef\u4ee5\u5f9e\u53e5\u5b50\u7684\u8cc7\u8a0a\u5408\u7406\u63a8\u7406\u51fa\u8207\u53e6\u4e00\u500b\u53e5\u5b50\u7684\u542b
\u4e00\u8a5e\u591a\u7fa9\u8207 \u8457\u986f\u8457\u7684\u4e0d\u540c\uff0c\u96d6\u7136\u7c21\u9ad4\u4e2d\u6587\u8207\u7e41\u9ad4\u4e2d\u6587\u4f7f\u7528\u7684\u6e2c\u8a66\u8cc7\u6599\u4e0d\u540c\uff0c\u53ef\u80fd\u6703\u9020\u6210\u5169\u500b\u5be6\u9a57\u7d50 T2\u300a\u6c92\u5b8c\u6c92\u4e86\u300b\u662f\u4e00\u90e8\u7531\u99ae\u5c0f\u525b\u5c0e\u6f14 \u547d\u540d\u5be6\u9ad4 \u679c\u6709\u6240\u4e0d\u540c\uff0c\u4f46\u6211\u5011\u8a8d\u70ba\u4e0d\u53ea\u9019\u500b\u56e0\u7d20\u53ef\u4ee5\u9020\u6210\u5982\u6b64\u5927\u7684\u5dee\u7570\u3002\u6bd4\u8f03\u5169\u7a2e\u4e2d\u6587\u7684\u5be6\u9a57\u6d41 T1 \u8207 T2 \u4e2d\u90fd\u51fa\u73fe\"\u6c92\u5b8c\u6c92\u4e86\"\u9019\u500b\u8a5e\uff0c\u7136\u800c\u6240\u5177\u6709\u7684\u6db5\u7fa9\u5927\u5927\u4e0d\u540c\u3002 \u7a0b\uff0c\u6211\u5011\u7684\u7cfb\u7d71\u8655\u7406\u7e41\u9ad4\u4e2d\u6587\u76f8\u5c0d\u65bc\u7c21\u9ad4\u4e2d\u6587\u591a\u4e86\u5c07\u7e41\u9ad4\u4e2d\u6587\u7ffb\u8b6f\u6210\u7c21\u9ad4\u4e2d\u6587\u7684\u6b65\u9a5f\uff0c \u7387\uff0c\u4f7f\u7528 GIZA++\u8a08\u7b97\u53e5\u5b50\u4e2d\u5b57\u8a5e\u5c0d\u9f4a\u6a5f\u7387\u5f8c\u9023\u4e58\u53d6 log \u5728\u9664\u4ee5\u9023\u4e58\u7684\u6b21\u6578 n \u5f8c\u5c31\u662f\u4f7f\u7528 \u5728 SVM \u7684\u7279\u5fb5\u503c n p \u3002 \u8868 3. 
\u5206\u985e\u5668\u4f7f\u7528\u7684\u7279\u5fb5 \u7de8\u865f \u7279\u5fb5 1 Unigram recall 2 Unigram precision 3 Unigram F-measure 4 Log bleu recall 5 Log bleu precision 6 Log bleu F-measure 7 difference in sentence length (character) 8 absolute difference in sentence length (character) 9 difference in sentence length (term) 10 absolute difference in sentence length (term) 11 GIZA++ Case2 T1: \u5927\u9646\u5df2\u6709\u56db\u767e\u4e07\u4eba\u611f\u67d3\u7231\u6ecb\u75c5 T2: \u5927\u9646\u6709\u516b\u5341\u4e94\u4e07\u4eba\u611f\u67d3\u827e\u6ecb\u75c5 Case3 T1: \u7f8e\u56fd\u5949\u884c\u4e00\u4e2d\u653f\u7b56\u548c\u9075\u5b88\u4e09\u516c\u62a5\u7684\u7acb\u573a\u5e76\u672a\u6539\u53d8\uff1b\u5207\u5c3c\u5219\u8fdb\u4e00\u6b65\u8868\u793a\u4e0d\u652f \u6301\u53f0\u6e7e\u72ec\u7acb T2: \u7f8e\u56fd\u4e0d\u652f\u6301\u53f0\u6e7e\u8d70\u5411\u72ec\u7acb Case4 4.4 \u5dee\u7570\u4e00\u8a5e \u6211\u5011\u89c0\u5bdf\u8a13\u7df4\u8cc7\u6599\u6642\u767c\u73fe\u6709\u4e9b\u53e5\u5b50\u5b57\u9762\u4e0a\u76f8\u7576\u76f8\u4f3c\uff0c\u4f8b\u5982\u4e3b\u8a5e\u6216\u53d7\u8a5e\u6216\u662f\u90e8\u4efd\u5b57\u8a5e\u88ab\u66ff \u63db\u5c31\u6703\u5c0e\u81f4\u53e5\u5b50\u7684\u610f\u601d\u90fd\u6539\u8b8a\u4e86\uff0c\u91dd\u5c0d\u9019\u985e\u578b\u7684\u53e5\u5b50\u53ef\u4ee5\u5c0d\u53e5\u5b50\u5b57\u8a5e\u9032\u884c\u6bd4\u5c0d\u8655\u7406\u3002 \u7576\u7136\u7279\u6b8a\u985e\u578b\u7684\u554f\u984c\u4e0d\u6b62\u4e0a\u8ff0\u7684\u5e7e\u7a2e\uff0c\u6211\u5011\u4e5f\u6b78\u7d0d\u51fa\u66f4\u591a\u7279\u6b8a\u985e\u578b\u6709\u5f85\u672a\u4f86\u5b8c\u6210\u3002 \u610f\uff0c\u9019\u6a23\u7684\u53e5\u5b50\u7684\u6211\u5011\u521d\u6b65\u5c07\u5b83\u5206\u985e\u5728\u53e5\u5b50\u63a8\u7406\u9019\u4e00\u985e\u3002 T1 \u5728 1964 \u5e74 10 \u6708\u4e2d\u5171\u6210\u529f\u8a66\u7206\u7b2c\u4e00\u9846\u539f\u5b50\u5f48\u5f8c\uff0c\u52a0\u5feb\u4e86\u4e2d\u570b\u672c\u8eab\u7684\u6838 
\u6df1\u5165\u7814\u7a76\u7ffb\u8b6f\u904e\u5f8c\u7e41\u9ad4\u4e2d\u6587\u6e2c\u8a66\u8cc7\u6599\u53ef\u4ee5\u767c\u73fe\u7ffb\u8b6f\u6548\u679c\u4e0d\u4f73\u9020\u6210\u4e4b\u5f8c\u62bd\u53d6\u7279\u5fb5\u6642\u5f97\u5230\u932f \u5b50\u5de5\u696d\u767c\u5c55 \u8aa4\u7684\u6578\u503c\u9020\u6210\u4e4b\u5f8c SVM \u7684\u8aa4\u5224\uff0c\u56e0\u6b64\u6539\u4f7f\u7528\u6211\u5011\u81ea\u884c\u958b\u767c\u7684\u6a5f\u5668\u7ffb\u8b6f\u7cfb\u7d71\uff0c\u89e3\u6c7a\u4e4b\u524d \u8868 5. \u7279\u6b8a\u985e\u578b\u4f8b\u5b50 \u985e\u578b \u53e5\u5b50\u7e2e\u6e1b T2 1964 \u5e74\u4e2d\u5171\u7b2c\u4e00\u6b21\u8a66\u7206\u539f\u5b50\u5f48 \u7ffb\u8b6f\u932f\u8aa4\u7522\u751f\u7684\u7a7a\u683c\u8207\u8853\u8a9e\u932f\u8aa4\u7684\u554f\u984c\u63d0\u9ad8\u7cfb\u7d71\u6548\u80fd\uff0c\u66ff\u63db\u5f8c\u7684\u5be6\u9a57\u7d50\u679c\u5982\u8868 9 \u6240\u793a\uff0c \u4f8b\u53e5 \u80af\u5b9a/\u5426\u5b9a\u53e5 T1 \u7684\u53e5\u5b50\u90e8\u4efd\u8cc7\u8a0a\u88ab\u522a\u6389\u4f46\u662f\u4e0d\u5f71\u97ff T1 \u4e3b\u8981\u60f3\u8868\u9054\u7684\u542b\u610f\u4f9d\u7136\u8207 T2 \u5728\u5169\u985e(BC)\u4efb\u52d9\u63d0\u9ad8 6.02 \u6b63\u78ba\u7387\u4ee5\u53ca\u591a\u985e(MC)\u4efb\u52d9\u5247\u662f\u63d0\u9ad8 9.49 \u6b63\u78ba\u7387\u3002 T1 319 \u69cd\u64ca\u6848\u4e2d\uff0c\u9673\u7e3d\u7d71\u53d7\u69cd\u64ca\u6642\u6c92\u5fd8\u4e86\u7a7f\u9632\u5f48\u8863\u3002 \u542b\u610f\u76f8\u540c\uff0c\u56e0\u6b64\u9019\u662f\u6b63\u5411\u860a\u6db5\u95dc\u4fc2\u3002 \u8868 9. 
3.3 Special-Type Processing

Examining the training data provided for NTCIR-10 RITE-2, we found that many sentence pairs could not be handled correctly by earlier systems and were easily misjudged. As shown in Table 4, we inspected our system's formal-run results on NTCIR-10 RITE-2 and catalogued the problem types that past systems misjudge most readily.

In Table 4, Case 1 is an independent pair misjudged as bidirectional entailment: the two sentences are very similar on the surface, and only the boldfaced parts make them independent, so comparing the sentences word by word would remove this source of error. Case 2 is a contradiction misjudged as bidirectional entailment: the sentences are again nearly identical on the surface, but their numerical information differs, so they contradict each other; this shows that our system's comparison of numerical information is insufficient and is a direction for future improvement. Among the misjudged items we also found many pairs like Case 3, in which completely identical wording causes the pair to be misjudged as forward entailment. In Case 4, our system lacks the background knowledge needed to infer that 倫敦 'London' refers to 英國 'the UK', so a forward-entailment pair is misjudged as independent:

Case 4  T1: 申奧成功的倫敦當局，在爆炸案後立即宣布取消慶祝活動。('After the bombing, the London authorities, who had won the Olympic bid, immediately announced the cancellation of the celebrations.')
        T2: 英國已停止所有慶祝申奧成功的活動。('The UK has stopped all activities celebrating the successful Olympic bid.')

Besides the synonym problem mentioned above, Case 5 shows that antonyms are also among the problems to be solved in the future:

Case 5  T1: 我國生物技術可以與美國等先進國家相提並論，解決「異種核轉殖」的問題並不難。
        T2: 我國生物技術可能造成異種核轉殖等問題。

The two sentences of Case 5 are largely identical in content, but an antonymous expression in T1 ('solving the problem is not hard' vs. 'may cause problems') makes the pair contradictory.

4. Analysis of Entailment Sentence Types

To resolve these easily misjudged cases, we believe sentence pairs should first be classified by problem type, and each type should then be handled with the most suitable method. After preprocessing, the system picks out the sentences belonging to special types and processes them with the sub-systems we developed; the results are then integrated with the machine-learning method used previously to obtain the final output. At present these new special types are selected manually; in the future we will develop an automatic classifier. Some of them, moreover, cannot be programmed as easily as the types already completed: logical inference, background knowledge, and synonyms, for instance, can only be distinguished correctly with the help of corpus resources. The special types we implemented are described below; Table 5 gives the corresponding example sentences.

4.1 Affirmative/Negative Sentences

Some pairs in the training set carry almost identical information, yet a negation word alone puts the two sentences into affirmative/negative opposition, yielding a contradiction or an independent relation. For such pairs we use a system of our own to detect whether each sentence contains an affirmative/negative marker word; if only one sentence of the pair contains such a marker, the pair can be judged to have no entailment relation. Example (T2): 319槍擊案中，陳總統受槍擊時沒穿防彈衣。('In the 319 shooting, President Chen was not wearing a bulletproof vest when he was shot.')

4.2 Inconsistent Temporal Information

Some training sentences contain temporal information that is essential to their meaning, so we pick these sentences out and compare their temporal parts specifically. The earlier system's preprocessing already normalizes the time formats in a sentence; comparing the normalized times then handles sentences that refer to specific points in time. Example:
T1: 茱莉安德魯絲 1930 年出生 ('Julie Andrews was born in 1930')
T2: 1935 年出生於英國的茱莉·安德魯絲 ('Julie Andrews, born in the UK in 1935')
Because the times differ, the meanings of the two sentences contradict each other.

4.3 Inconsistent Numerical Information

T1: 娜拉提諾娃一共獲得 18 座大滿貫金盃。('Navratilova won 18 Grand Slam cups in all.')
T2: 娜拉提洛娃一共取得了 58 個大滿貫的金盃。('Navratilova took 58 Grand Slam cups in all.')
Because the numbers differ, the information in the two sentences is mutually contradictory.

4.4 One-Word Difference

Two sentences may differ by only a single word and yet change their entailment relation, as in:
T1: 一九七一年，印度協助東巴基斯坦脫離巴基斯坦成為孟加拉，結果印巴戰事再起。('In 1971, India helped East Pakistan secede from Pakistan to become Bangladesh ...')
T2: 一九七一年，印度協助西巴基斯坦脫離巴基斯坦成為孟加拉，結果印巴戰事再起。('... West Pakistan ...')
The differing subject/object makes the two sentences mutually independent. Conversely:
T1: 金泳三在一九九二年當選韓國第十四屆大統領。
T2: 金泳三在一九九二年當選韓國第十四屆總統。
Although 大統領 and 總統 differ on the surface, they both mean 'president', so the pair is a bidirectional entailment.

4.5 Synonyms

Observing the training data, we found pairs in which some words stand in a synonym relation. Such pairs should be preprocessed first: using resources such as E-HowNet to replace synonyms with a single common word makes the two sentences more alike, so that judging the entailment relation becomes easier.

4.6 Background Knowledge

Some sentences provide information that even a human cannot judge correctly without certain background knowledge, e.g. that New Delhi is the capital of India; without this knowledge, one easily takes 新德里 'New Delhi' and 印度 'India' to be different places. For this type, external resources such as Wikipedia can supply the needed background knowledge. Example:
T1: 2005 年全球恐怖攻擊活動不斷是第三世界向歐美霸權宣戰。('... the Third World declaring war on Euro-American hegemony.')
T2: 2005 年全球恐怖攻擊活動不斷是回教世界向歐美霸權宣戰。('... the Islamic world declaring war on Euro-American hegemony.')
The wording differs, but to a reader with the relevant background knowledge the two sentences express the same meaning, so this is a bidirectional entailment.

4.7 Syntactic Rearrangement

Some sentences change their meaning merely because their word order is rearranged: when the subject and object are exchanged, the meaning of the whole sentence changes. For this type, a parser such as the Stanford parser can be used to obtain semantic-role relations and thereby detect the subject–object relation; connectives such as 與 'and' and 一起 'together', for which swapping the order does not change the meaning, are also detected. Example (the subject and object are exchanged):
T1: 松花江汙染事件導致俄羅斯對大陸民眾反感日增
T2: 松花江汙染事件導致大陸民眾對俄羅斯反感日增

4.8 Polysemy and Named Entities

Some sentence pairs have very similar wording but very different meanings. For example, 沒完沒了 is itself an idiom, but the same expression is elsewhere used as a song title or a film title; such a word changes the whole meaning of the sentence, and the entailment relation becomes independence.

4.9 Sentence Reduction

Some sentences are formed by removing part of the information of another sentence, so that one sentence contains all the information of the other; the pair then has a directional entailment relation, which can be detected by parsing the sentences.

Logical inference example:
T1: 張學良 1900 年 6 月 3 日出生，1920 年官拜少將。('Zhang Xueliang was born on June 3, 1900, and was made a major general in 1920.')
T2: 張學良廿歲官拜少將。('Zhang Xueliang was made a major general at age twenty.')
From keywords in T1 the content of T2 can be inferred, so this is a forward entailment; the temporal information in the two sentences is consistent.

5.2.2 Improvement Experiment 2

As noted in the preceding sections, the special-type problems are ones that past systems could not handle adequately, so developing dedicated individual sub-systems for them is necessary. We extracted the special-type sentence pairs of the binary-class (BC) task and ran individual experiments; the results, shown in Table 10, indicate that processing the special types with the specially developed sub-systems raises their accuracy. Table 6 lists the counts of all special-type sentences we classified in the RITE-2 training and test sets. These statistics are not fully exact, however, because one sentence pair may exhibit several special-type properties at the same time.

Table 6. Counts of the special types in the training and test sets

Special type                        Training set    Test set
Affirmative/negative sentences      15 (1.82%)      42 (5.37%)
Inconsistent temporal information   43 (5.28%)      60 (7.68%)
Inconsistent numerical information  42 (5.15%)      83 (10.62%)
One-word difference                 73 (8.9%)       82 (10.4%)
Synonyms                            53 (6.5%)       43 (5.5%)
Background knowledge                5 (0.6%)        11 (1.4%)
Syntactic rearrangement             67 (8.2%)       52 (6.6%)
Polysemy and named entities         115 (14.1%)     91 (11.6%)
Sentence reduction                  290 (35.6%)     159 (20.3%)
Logical inference                   111 (13.6%)     86 (11%)

Table 10. BC results of the individual special-type sub-systems

                                Case1    Case2    Case3    Case4
Formal evaluation               52.38%   58.33%   52.25%   47.59%
Individual special processing   71.42%   70%      71.25%   60.97%

Since Table 10 shows that the special-type sub-systems help the binary-class (BC) system, we expected that they could also raise performance on the multi-class (MC) task. We therefore added the special-type processing to the system and tested it on CYUT-03, the best-performing run in the formal evaluation; the results are shown in Table 11.

Table 11. Simplified-Chinese results after adding the special-type methods

Item                            BC (%)   MC (%)
Cyut                            67.86    40.37
Cyut+case1                      68.88    42.08
Cyut+case1+case2                69.78    43.45
Cyut+case1+case2+case3          71.63    45.13
Cyut+case1+case2+case3+case4    72.92    45.92

(Rough coverage: 99.75% / 90.47%.)

Results of replacing the simplified–traditional conversion system

Conversion system       BC (%)          MC (%)
Google Translate        53.64           26.26
CYUT-developed system   59.54 (+6.02)   35.75 (+9.49)
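The affirmative/negative check described in Section 4.1 can be sketched as follows. This is a hypothetical minimal implementation, not the authors' system; the marker list below is illustrative only, not their actual lexicon.

```python
# Sketch of the affirmative/negative check: if exactly one sentence of a
# pair contains a negation marker, the pair is flagged as non-entailing
# (contradiction or independent).
NEGATION_MARKERS = ["沒有", "沒", "不", "未", "非"]  # illustrative list only

def has_negation(sentence):
    return any(m in sentence for m in NEGATION_MARKERS)

def negation_mismatch(t1, t2):
    """True when exactly one sentence of the pair is negated."""
    return has_negation(t1) != has_negation(t2)

t1 = "陳總統受槍擊時穿防彈衣。"
t2 = "陳總統受槍擊時沒穿防彈衣。"
print(negation_mismatch(t1, t2))  # → True
```

A real system would also need to handle double negation and negation scope, which a bag-of-markers check cannot capture.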
", "html": null, "type_str": "table", "text": "\u7576\u521d\u662f\u88ab\u8a2d\u8a08 \u4f86\u6e2c\u91cf\u6a5f\u5668\u7ffb\u8b6f(machine translation)\u7684\u54c1\u8cea\u3002\u4e00\u500b\u826f\u597d\u7684\u6a5f\u5668\u7ffb\u8b6f\u9700\u8981\u5305\u542b\u9069\u7576\u3001\u6e96\u78ba\u4ee5 \u53ca\u6d41\u66a2\u7684\u7ffb\u8b6f\uff0c\u6211\u5011\u7684\u7cfb\u7d71\u6703\u5c07\u5176\u7ffb\u8b6f\u70ba\u539f\u4f86\u7684\u6587\u5b57 T1 \u548c T2 \u5f97\u5230 log Bleu recall\u3001log Bleu precision \u548c log Bleu F measure values\u3002 \u7b2c\u4e03\u5230\u7b2c\u5341\u9019\u56db\u500b\u7279\u5fb5\u662f T1 \u548c T2 \u7684\u53e5\u5b50\u9577\u5ea6\u3002\u6211\u5011\u7684\u7cfb\u7d71\u6839\u64da\u5b57\u5143\u548c\u5b57\u8a5e\u8a08\u7b97 T1 \u548c T2 \u7684\u53e5\u5b50\u4e2d\u9577\u5ea6\u7684\u5dee\u7570\uff0c\u4e26\u4f7f\u7528\u4e86\u9019\u5169\u500b\u7279\u5fb5\u7684\u7d55\u5c0d\u503c\u5728\u6211\u5011\u7684\u7cfb\u7d71\u4e2d\u3002 \u6700\u5f8c\u7279\u5fb5\u662f\u7531 GIZA++(Och & Ney, 2003 )\u5b57\u8a5e\u5c0d\u9f4a\u5206\u6578\uff0c\u9019\u662f T1 \u53e5\u5b50\u4ee5\u55ae\u4e00\u8a9e\u8a00\u6a5f \u5668\u7ffb\u8b6f\u5230 T2 \u53e5\u5b50\u7684\u6a5f\u7387\u3002\u6a5f\u5668\u7ffb\u8b6f\u53ef\u8a72\u529f\u80fd\u6709\u52a9\u65bc RTE (Quang et al., 2012)\uff0c\u6211\u5011\u7684\u7cfb \u7d71\u63a1\u7528\u7684\u55ae\u4e00\u8a9e\u8a00\u6a5f\u5668\u7ffb\u8b6f\u4f5c\u70ba\u4e00\u500b\u7279\u5fb5\u3002\u5728\u6211\u5011\u7684\u7cfb\u7d71\u4e2d\uff0c\u6211\u5011\u4f7f\u7528 GIZA ++\u505a\u70ba\u55ae \u4e00\u8a9e\u8a00\u6a5f\u5668\u7ffb\u8b6f\u5de5\u5177\uff0cGIZA++\u6839\u64da IBM \u6a21\u578b\u88fd\u4f5c\u800c\u6210\uff0c\u6211\u5011\u7d93\u7531 GIZA++ \u8a08\u7b97\u51fa\u5169\u500b \u860a\u6db5\u53e5\u578b\u5206\u6790\u65bc\u6539\u9032\u4e2d\u6587\u6587\u5b57\u860a\u6db5\u8b58\u5225\u7cfb\u7d71 7" }, "TABREF3": { "num": null, "content": "
Replaced character    氣份
Candidate translations    汽份 泣份 氣分 氣忿 器份 契份 氣憤 氣糞 企份 憩份 氣奮 氣吩 訖份 氮份 氣扮 氣汾 迄份 粥份
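A table like the one above suggests generating correction candidates by substituting confusable characters. The sketch below is hypothetical, assuming a small hand-built confusion set; the real resource would be far larger.

```python
# Hypothetical candidate generation from a confusion set: each character of
# the input word is replaced in turn by its visually/phonetically similar
# characters to form correction candidates.
CONFUSION = {  # toy confusion set drawn from the table above
    "氣": ["汽", "泣", "器", "契", "企", "憩", "訖", "氮", "迄"],
    "份": ["分", "忿", "憤", "糞", "奮", "吩", "扮", "汾"],
}

def candidates(word):
    out = []
    for i, ch in enumerate(word):
        for alt in CONFUSION.get(ch, []):
            out.append(word[:i] + alt + word[i + 1:])
    return out

print(candidates("氣份")[:3])  # → ['汽份', '泣份', '器份']
```

Each candidate would subsequently be scored (e.g. by corpus frequency) to pick the most plausible correction.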
", "html": null, "type_str": "table", "text": "" }, "TABREF6": { "num": null, "content": "
N-gramSinica Corpus TypesTWWaC Types
2-gram66,7782,
3-gram45,382
4-gram12,294
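A common use of n-gram inventories such as these is to validate a correction candidate by checking whether it forms attested corpus n-grams. The sketch below uses toy counts, not the actual Sinica Corpus or TWWaC data.

```python
# Minimal sketch (assumed toy data) of n-gram validation: a candidate that
# forms attested corpus character bigrams scores higher and is preferred.
NGRAM_COUNTS = {  # toy counts standing in for the corpus n-gram tables
    ("天", "氣"): 120, ("氣", "分"): 1, ("氣", "氛"): 85,
}

def bigram_score(chars):
    """Sum the corpus counts of every adjacent character bigram."""
    return sum(NGRAM_COUNTS.get((a, b), 0)
               for a, b in zip(chars, chars[1:]))

print(bigram_score("天氣氛"))  # 120 + 85 → 205
```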
", "html": null, "type_str": "table", "text": "" }, "TABREF7": { "num": null, "content": "
SystemPrecisionRecallF-score
DICT.91.52.66
CORPUS.90.46.61
WEB.93.47.63
WEB+DICT.95.56.71
SystemPrecisionRecallF-score
FULL+WT.53.51.52
SND+WT.74.57.65
SND+SHP.90.55.68
SND+SHP+WT.95.56
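The F-scores in these tables are the harmonic mean of precision and recall; for instance, the DICT row (P = .91, R = .52) rounds to the reported .66.

```python
# F1 as the harmonic mean of precision and recall.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.91, 0.52), 2))  # → 0.66, matching the DICT row
```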
", "html": null, "type_str": "table", "text": "" }, "TABREF11": { "num": null, "content": "
F-ScorePrecisionRecall
LRN0.430.710.31
LRN-BO0.450.680.33
Table 2. Average error rate of BNC and BNC-BO
Error Rate
BNC0.10
BNC-BO0.13
", "html": null, "type_str": "table", "text": "" }, "TABREF12": { "num": null, "content": "
For
example:
", "html": null, "type_str": "table", "text": "\u9eb5\u5305\u5200 mianbao-dao 'bread knife' \u885b\u661f\u57ce\u5e02 weixin-chengshi 'satellite city' \u91d1\u878d\u80a1 jinrong-gu 'stocks in the financial sector' \u79cb\u87f9 qiu-xie 'autumn crab' Institute of Information Science, Academia Sinica, Taipei, Taiwan E-mail: {yschung, kchen}@iis.sinica.edu.tw \u8173\u8e0f\u8eca\u8f2a\u80ce jiao-ta-che luntai 'bicycle tire' \u5375\u77f3\u5730\u677f luan-shi diban 'pebble floor' \u9418\u9336 zhong-biao 'clock and watch' \u9435\u684c tie-zhuo 'iron table/desk' \u8eca\u901f che-su 'car speed'" }, "TABREF13": { "num": null, "content": "
She regards all N1s and N2s as the subjects and objects of nine linking predicates, with one component entity doing something to the other. Below are her examples and their Chinese equivalents:

CAUSE: e.g. malarial mosquitoes (瘧蚊 nue-wen)
HAVE: e.g. picture book (圖畫書 tuhua-shu), apple cake (蘋果蛋糕 pingguo dangao), gunboat (炮艇 pao-ting), industrial area (工業區 gongye-qu), imperial bearing (貴族氣質 guizu-qizhi)
MAKE: e.g. honeybee (蜜蜂 mi-fong), daisy chains (雛菊鍊 chuju-lian)
USE: e.g. steam iron (蒸氣電熨斗 zhengqi dian-yundou), solar generator (太陽能發電機 taiyang-neng fadian-ji)
(a) Temporal N1
N1 denotes time:
e.g. \u6668\u9727 chen-wu 'morning mist' (value+host), \u79cb\u87f9 qiu-xie 'autumn crab'
(value+host), \u5348\u591c\u5217\u8eca wuyie-lieche 'midnight train' (value+host)
N1 denotes frequency:
e.g. \u6708\u8cbb yue-fei 'monthly fee' (value+host)
(b) Locational N1
che-su 'car speed' (host+attribute)
(2)Meronymy (i.e. part-whole relation)
N1 denotes part; N2 denotes whole:
e.g. \u96d9\u5e95\u8239 shuang-di chuan 'double-bottom,'
N1 denotes whole; N2 denotes part:
e.g. \u8173\u8e0f\u8eca\u8f2a\u80ce jiao-ta-che luntai 'bicycle tire,' \u8178\u9053 chang-dao 'intestine canal'
(3)Conjunction
e.g. 手腳 shou-jiao 'hands and feet,' 鐘錶 zhong-biao 'clock and watch,' 警民 jing-min 'the police and the people'
The relevant FE(s) of N1
- As realized by VEHICLE (entity frame): Part
- (Not realized in event frames)
Most frequent semantic relation: simple_host-attribute-value
The relevant FE(s) of N1
- As realized in CLOTHING (entity frame): Material
- (Not realized in event frames)
Complex
The relevant FE(s) of N1
- As realized in VEHICLE (entity frame): Means-of-propulsion
- (Not realized in event frames)
e.g. Simple_meronymy
(The frame names have all capital letters, while FEs only have initial capital letters.)
CLOTHING (衣 yi 'clothes,' 服 fu 'clothes,' 裝 zhuang 'clothes,' 帽 mao 'hat,' 鞋 xie 'shoes,' etc.)
e.g. 電車 dian-che 'trolley bus,' 人力車 ren-li-che 'rickshaw'
CategoryCoverage
Road40/40 (100%)
Text121/121 (100%)
People241/243 (99.2%)
People of Different Vocations46/48 (95.8%)
Wealth72/72 (100%)
Container411/427 (96.3%)
Food60/86 (69.8%)
Clothing42/47(89.3%)
Vehicle53/69 (76.8%)
Mean1086/1153 (94.2%)
", "html": null, "type_str": "table", "text": "" }, "TABREF17": { "num": null, "content": "
Category    Top3 Coverage    Top5 Coverage
Road67.5%92.5%
Text100%100%
People86.8%94.2%
People of Different Vocations83.3%89.5%
Wealth100%100%
Container94.7%96.3%
Food69.8%69.8%
Clothing72.2%89.2%
Vehicle49.2%65.1%
Mean80.4%88.5%
", "html": null, "type_str": "table", "text": "" }, "TABREF19": { "num": null, "content": "
StateState 2State 3State 4State 5State 6
Number of nodes11465093666261304
", "html": null, "type_str": "table", "text": "" }, "TABREF20": { "num": null, "content": "
StateState 2State 3State 4State 5State 6
Number of nodes244849938604209
", "html": null, "type_str": "table", "text": "" }, "TABREF21": { "num": null, "content": "
SongsNursery rhymes (children's songs) Total 148 songs
SingerOne female
Pitch rangeC4~B4
Version2
Total timeAbout 102 minutes
Sample rate48 kHz
Resolution16 bits
ChannelsMono
", "html": null, "type_str": "table", "text": "" }, "TABREF22": { "num": null, "content": "
ModelDescription
BaselineAll question set
QMQuestion set modification
PSPitch shift pseudo data
QM+PSQuestion set modification and pitch shift pseudo data
", "html": null, "type_str": "table", "text": "" }, "TABREF29": { "num": null, "content": "
Improvement of the Voice Conversion Method Based on Segmental LMR Mapping

... and the method of Erro et al. (Erro et al., 2010) are all designed for GMM mapping, while the method of Godoy et al. (Godoy et al., 2012) is designed for neither GMM mapping nor LMR mapping. We therefore considered improving the flow of Figure 1 from another direction. After studying the paper of Dutoit et al. (Dutoit et al., 2007), one approach we arrived at is to insert a "target-frame selection" block after the "LMR mapping" block of Figure 1. Since the spectral envelope obtained through GMM or LMR mapping exhibits over-smoothing, the spectral coefficients produced by the LMR mapping should not be taken directly for speech re-synthesis. Instead, according to the segment class of the source frame and the mapped spectral feature coefficients (e.g. DCC), the group of target frames (frames of the target speaker) of the same segment class is searched to find a target frame whose spectral features are very similar (i.e. whose distance is very small); the spectral coefficients of the selected target frame then replace the mapped spectral coefficients, which avoids the over-smoothing of the spectral envelope. Because the selected target frame is not obtained through spectral mapping, we also call it a real frame (a frame of real speech). Furthermore, the segment classification and collection of the target frames are done in the training stage, so the conversion stage can directly perform the search and selection. After the "target-frame selection" block is inserted into Figure 1, the improved voice-conversion processing flow based on LMR mapping and target-frame selection is as shown in Figure 3.

Figure 3. Voice-conversion flow based on LMR mapping and target-frame selection (source speech → DCC estimation, segment detection, and pitch detection; LMR mapping followed by target-frame selection; pitch conversion and HNM re-synthesis of the converted speech).

Besides adding the histogram-equalization and target-frame-selection steps separately, we also considered another processing flow, namely adding both steps to the flow of Figure 1 at the same time; whether the speech converted in this way attains the best timbre similarity and speech quality will be examined experimentally in Section 4. In addition, the "DCC estimation" block appearing in Figures 1, 2, and 3 indicates that we adopt discrete cepstral coefficients (DCC) (Cappé & Moulines, 1996; Gu & Tsai, 2009) as the spectral feature parameters, with the order set to 40; that is, for each frame the 41 coefficients c0, c1, c2, ..., c40 are computed, but only c1, c2, ..., c40 are used in the spectral-conversion processing. After the DCC coefficients of each frame have been converted, the spectral envelope can be computed from them (Cappé & Moulines, 1996; Gu & Tsai, 2009), and then, according to the spectral envelope and the converted pitch value, the harmonic and noise parameters of the frame's HNM model are set.
古鴻炎、張家維

2. PCA Coefficient Conversion and Histogram Equalization

If voice conversion is carried out according to the processing flow of Figure 2, then after the DCC coefficients of each frame have been computed, the PCA coefficient conversion and the CDF coefficient conversion must be applied; after the LMR mapping, the CDF inverse conversion and the PCA inverse conversion must then be applied to restore the spectral features to DCC coefficients. This section therefore explains the details of the PCA coefficient conversion and the CDF coefficient conversion.

(Figure 2: the voice-conversion processing flow that adds the PCA coefficient conversion, the CDF coefficient conversion, and their inverse conversions around the LMR mapping.)

2.1 PCA Coefficient Conversion

To be able to convert the DCC coefficients of a source frame into PCA coefficients, PCA analysis must first be performed in the training stage on the DCC vectors collected for each segment class of the source speaker, to obtain the principal-component vectors of each source-speaker segment class. Correspondingly, to be able to inversely convert the PCA coefficients of an LMR-mapped frame back into DCC coefficients, PCA analysis must also be performed in the training stage on the DCC vectors collected for each segment class of the target speaker, to obtain the principal-component vectors of each target-speaker segment class. Concerning this PCA analysis, one question we pondered is the following: intuitively, the source frames and the target frames should be collected separately and analyzed separately to obtain their own principal-component vectors; but why should the source and target frames of the same segment class not be pooled for a single PCA analysis, and why not let them share one set of principal-component vectors? We will investigate this question by means of experimental evaluation.

2.1.1 Principal Component Analysis

PCA was proposed by K. Pearson in 1901 and further developed by H. Hotelling in 1933 (Hotelling, 1933). The PCA transform is an orthogonal transform: it converts original data whose dimensions are correlated into new data whose dimensions are independent, and the total variance of the transformed data equals that of the original data set; that is, the PCA transform preserves the information of the original data. For all training utterances of a given segment class, frame segmentation is performed and DCC coefficients are extracted, building a data set of 40-dimensional DCC vectors; PCA analysis of this data set then yields the principal-component vectors of that segment class. The detailed analysis procedure is as follows:
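The per-class PCA step can be sketched as below: center the collected DCC vectors, eigendecompose their covariance, and keep the principal-component basis. This is a minimal numpy sketch, not the authors' implementation.

```python
import numpy as np

# Sketch of per-segment-class PCA: an orthogonal basis sorted by variance.
def pca_basis(X):
    """X: (M, L) matrix of DCC vectors collected for one segment class."""
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]        # descending variance
    return mean, eigvecs[:, order], eigvals[order]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                # toy stand-in for DCC vectors
mean, basis, var = pca_basis(X)
print(basis.shape)  # → (4, 4)
```

The orthogonality of the basis is what makes the transform variance-preserving, as stated in the text above.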
2.2 Histogram Equalization

Histogram equalization (HEQ) refers to the processing of the "CDF coefficient conversion" and "CDF inverse conversion" blocks in the flow of Figure 2. To convert the PCA coefficients of a source frame into CDF coefficients, HEQ analysis must first be performed in the training stage on the PCA vectors collected for each segment class of the source speaker, in order to build an HEQ table for each source-speaker segment class. Correspondingly, to inversely convert the CDF coefficients of an LMR-mapped frame into PCA coefficients, HEQ analysis must likewise be performed in the training stage on the PCA vectors collected for each segment class of the target speaker, in order to build the target speaker's HEQ tables. By "HEQ table" we mean that we adopt a basic tabular method to establish the histogram-equalization relation between PCA coefficients and CDF coefficients.

2.2.1 HEQ Table Construction

Select a segment class of the source (or target) speaker, and let the total number of frames collected for that class be M. Taking the M PCA coefficient vectors of dimension L as input, the HEQ table is built by the following steps:
\u8868 1.\u4e00\u500b\u7c21\u5316\u7684 HEQ \u8868\u683c\u4f8b\u5b50
\u5340\u9593 j012345
1 j Fp1(min)38131820(max)
1 j Fc0.050.150.40.650.91
2.2.2 CDF Coefficient Transform

Suppose the PCA coefficient vector P = [P_1, P_2, ..., P_L] of a frame is to be transformed, and the segment class that this frame belongs to has already been decided by the "segment detection" block of Figure 2. We can therefore take the HEQ table trained from the source frames of that segment class, and then compute the CDF coefficient vector of the frame by linear interpolation.
\u53e4\u9d3b\u708e\u3001\u5f35\u5bb6\u7dad
4.1 Training of the Voice Conversion System

First, we use the HTK (HMM toolkit) software to label the speech automatically via forced alignment, marking the boundary of each initial and each final in an utterance; we then use the WaveSurfer software to check whether any automatically marked boundary is wrong, and correct it manually if so. Next, according to the pinyin-symbol label and the boundary positions of each initial and final, segment cutting and classification can be carried out; we divide the segments into 57 classes in total, i.e., 21 initial classes and 36 final classes.
For each speech frame, we first compute the zero-crossing rate (ZCR), so that unvoiced frames, whose ZCR is very high, can be detected; we then apply a pitch detection method based on the autocorrelation function and the AMDF. As for each converted DCC vector, the conversion can be performed directly through the "LMR mapping" block of Figure 3, or by LMR mapping followed by the CDF inverse transform and the PCA inverse transform (Figure 2).
基於音段式LMR對映之語音轉換方法的改進 (Improvements to the Segment-Based LMR-Mapping Voice Conversion Method)
4.2 Test of Sharing the Principal Component Vectors

In the processing flow of Figure 2, would it be better to let the two blocks "PCA coefficient transform" and "PCA inverse transform" share one set of principal component vectors? The original, non-sharing situation means that the principal components used by the "PCA coefficient transform" block are obtained by PCA analysis of the source frames after segment classification, while those used by the "PCA inverse transform" block are obtained by PCA analysis of the target frames after segment classification. Sharing the principal component vectors means that the source frames and the target frames of the same segment class are pooled together for PCA analysis, in order to derive one shared set of principal component vectors. (In the LMR mapping, X denotes the DCC or CDF coefficient vector of a source-speaker frame, and Y denotes the coefficient vector mapped out by LMR.)

We compare sharing against not sharing by measuring the average conversion error of voice conversion. Here, only the last 25 sentences of the parallel corpus are taken for the external test of voice conversion. After a source frame is converted into a DCC vector, we can measure the geometric distance between this DCC vector and the DCC vector of the corresponding target frame; such a distance is also called the conversion error. Summing the conversion errors of all frames and taking the average gives the average conversion error. In addition, the histogram equalization in the flow of Figure 2 (i.e., the CDF coefficient transform and inverse transform) is run under three conditions, namely with the number of intervals N set to 32, 64, and 128. After the experimental measurements, we obtained the average conversion errors shown in Table 2.

Table 2. Average conversion errors with non-shared and shared principal component vectors

  pairing     not shared: 32, 64, 128 intervals    shared: 32, 64, 128 intervals
  MA => MB    0.5442  0.5438  0.5442               0.5389  0.5389  0.5389
  MA => FA    0.5159  0.5158  0.5156               0.5155  0.5154  0.5154
  FA => MA    0.5387  0.5386  0.5384               0.5369  0.5344  0.5344
  FA => FB    0.5807  0.5806  0.5805               0.5773  0.5768  0.5768
  average     0.5449  0.5447  0.5447               0.5422  0.5414  0.5414

From the average conversion errors of Table 2 it can be seen that, if the "PCA coefficient transform" and "PCA inverse transform" blocks of Figure 2 use shared PCA principal component vectors, the average conversion error drops from 0.5447 to 0.5414. This indicates that using shared PCA principal component vectors can slightly raise the correlation between the PCA coefficients of source and target frames, and thus slightly reduce the error of LMR mapping. Moreover, regarding the setting of the number of histogram-equalization intervals, the average conversion errors of Table 2 show that there is no difference between 64 and 128 intervals.

4.3 Testing the Necessity of the PCA Transforms

For the flow of Figure 2, is adding the "PCA coefficient transform" and "PCA inverse transform" blocks necessary? Here we compare including against excluding the PCA coefficient transform, again by measuring the average conversion error of voice conversion. The test corpus and the error measurement are the same as described in Section 4.2; that is, the last 25 sentences of the parallel corpus are used for the external test, the geometric distance between each converted DCC vector and the DCC vector of the corresponding target frame is measured, and the average error over all frames is computed. Histogram equalization is again run with three interval settings, i.e., 32, 64, and 128 intervals. After the experimental measurements, we obtained the average conversion errors shown in Table 3.

Table 3. Average conversion errors with and without the PCA coefficient transform

  pairing     without PCA transform: 32, 64, 128    with PCA transform: 32, 64, 128
  MA => MB    0.5454  0.5450  0.5446                0.5389  0.5389  0.5389
  MA => FA    0.5177  0.5172  0.5171                0.5155  0.5154  0.5154
  FA => MA    0.5410  0.5402  0.5399                0.5369  0.5344  0.5344
  FA => FB    0.5826  0.5825  0.5823                0.5773  0.5768  0.5768
  average     0.5467  0.5462  0.5460                0.5422  0.5414  0.5414

The numbers in Table 3 show that applying the PCA coefficient transform does lower the average conversion error of voice conversion: under 64-interval histogram equalization, the average conversion error drops from 0.5462 to 0.5414. This indicates that applying the PCA coefficient transform before histogram equalization is useful and necessary.

4.4 Conversion Error under Target Frame Selection

Target frame selection can be used to avoid the problem of over-smoothed spectra; its detailed procedure was explained in Section 3. Here, following the processing flow of Figure 3, we test whether target frame selection can reduce the average error of voice conversion, and whether it can do better than the flow of Figure 2. The voice conversion method of the Figure 3 flow is called the basic-type target frame selection method. In addition, we also tested another voice conversion method, called the composite-type target frame selection method, which inserts a "target frame selection" block between the "PCA inverse transform" and "HNM speech re-synthesis" blocks of the Figure 2 flow; the number of intervals used by the histogram equalization (CDF transform and inverse transform) is set here to 64.

For the basic-type and composite-type target frame selection methods, the test corpus and the error measurement are the same as described in Section 4.2; that is, the last 25 sentences of the parallel corpus are used for the external test, the geometric distance between each converted DCC vector and the DCC vector of the corresponding target frame is measured, and the average error over all frames is computed. After the experimental measurements, we obtained the average conversion errors shown in Table 4, in which the numbers of the three right-hand columns are taken from the three right-hand columns of Table 2.

Table 4. Average conversion errors under target frame selection

  pairing     basic type    composite type    shared PCA: 32, 64, 128 intervals
  MA => MB    0.5990        0.6087            0.5389  0.5389  0.5389
  MA => FA    0.5706        0.5791            0.5155  0.5154  0.5154
  FA => MA    0.5925        0.6032            0.5369  0.5344  0.5344
  FA => FB    0.6493        0.6574            0.5773  0.5768  0.5768
  average     0.6029        0.6121            0.5422  0.5414  0.5414

From Table 4 it can be seen that the average conversion error of the basic-type target frame selection grows to 0.6029, clearly much larger than the 0.5414 of Table 3; furthermore, the average conversion error of the composite-type target frame selection becomes even larger, 0.6121. Judging from these two much-enlarged average errors, one would intuitively expect the speech converted by the basic-type and composite-type target frame selection methods to degrade considerably in timbre similarity and speech quality. However, when we actually listened to the converted speech, we found that the speech converted via basic-type or composite-type target frame selection in fact becomes clearer in quality (presumably because the DCCs of real frames are used), and the timbre similarity does not degrade either. Therefore, the average conversion error based on measuring the geometric distance between two DCC vectors seems not to be proportional to speech quality.

What causes the aforementioned inconsistency, i.e., that a larger error distance instead yields better speech quality? To understand the reason, we picked some target frames and observed their spectral envelope curves. For each such target frame, we took the DCC vector mapped out by LMR, the DCC vector obtained by target frame selection, and the DCC vector of the target frame itself, computed the spectral envelope curves of the three, and plotted them for comparison. As a result, we found a phenomenon that can explain the aforementioned inconsistency. An example is shown in Figure 4, where the dashed line represents the spectral envelope of a target frame of the syllable /song/, the light-gray solid line represents the spectral envelope computed from the DCC vector obtained by LMR mapping, and the dark-black solid line represents the spectral envelope computed from the DCC vector obtained by target frame selection. Comparing these three envelopes, we can find that within the frequency range 2,500 Hz to 4,500 Hz, the shape of the dark-black solid line follows the formant ups and downs of the dashed curve more closely than the light-gray solid line does, which can explain why target frame selection is able to improve the quality of the converted speech. In addition, within the frequency range 5,500 Hz to 11,000 Hz, the light-gray solid line is closer to the dashed curve than the dark-black solid line is, which can explain why the conversion error introduced by LMR mapping is smaller than that introduced by target frame selection.
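The average conversion error used throughout Tables 2-4 is the mean Euclidean (geometric) distance between each converted DCC vector and the DCC vector of the corresponding target frame. A direct sketch (the function name is illustrative):

```python
import numpy as np

def average_conversion_error(converted, target):
    """Mean Euclidean distance between corresponding rows of two
    (num_frames x dim) arrays of DCC vectors, i.e. the average
    conversion error measured in the external test."""
    converted = np.asarray(converted, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.mean(np.linalg.norm(converted - target, axis=1)))

# toy frames: each converted vector is off by one unit along one axis
conv = np.array([[1.0, 0.0], [0.0, 2.0]])
targ = np.array([[0.0, 0.0], [0.0, 1.0]])
err = average_conversion_error(conv, targ)   # (1 + 1) / 2 = 1.0
```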
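Target frame selection replaces a converted (hence smoothed) DCC vector by a real target-speaker frame's DCC vector, so that re-synthesis uses an unsmoothed, real spectrum. The paper's exact selection rule is given in Section 3, which is not part of this excerpt; a plain nearest-neighbour search over the target frame pool is assumed here purely for illustration.

```python
import numpy as np

def select_target_frame(converted_dcc, target_frames):
    """Return the real target-speaker DCC vector closest (in Euclidean
    distance) to the converted DCC vector.  Assumption: nearest-neighbour
    search stands in for the selection rule of Section 3."""
    target_frames = np.asarray(target_frames, dtype=float)
    d = np.linalg.norm(target_frames - np.asarray(converted_dcc, dtype=float),
                       axis=1)
    return target_frames[int(np.argmin(d))]

# toy pool of real target-frame DCC vectors
pool = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
picked = select_target_frame([1.2, 0.9], pool)
```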
As an objective measure, we also computed the variance of each dimension of the spectral coefficients. For the four processing flows mentioned earlier, i.e., the four combinations of applying or not applying histogram equalization and applying or not applying target frame selection, we measured the variance ratio (VR) between converted frames and target frames according to Equation (13), and obtained the VR values shown in Table 5.

Table 5. Comparison of variance ratios

              no target frame selection    with target frame selection
  pairing     DCC+LMR     HEQ+LMR          DCC+LMR     HEQ+LMR
  MA => MB    0.2463      0.1671           0.5893      0.5245
  MA => FA    0.1994      0.1290           0.5182      0.4485
  FA => MA    0.2367      0.1775           0.5814      0.5383
  FA => FB    0.2063      0.1375           0.5648      0.5303
  average     0.2222      0.1528           0.5634      0.5104

From the VR values of Table 5 it can be found that, without target frame selection, the average VR value is only about 0.2, but once target frame selection is added, the average VR value rises above 0.5. Objectively speaking, therefore, the operation of target frame selection should let the speech quality improve markedly. As for histogram equalization, this processing instead lowers the VR values somewhat; whether such a small VR decrease would be perceived subjectively as a degradation of speech quality still needs listening experiments to verify.

4.5 Subjective Listening Tests of Speech Quality

We used source sentences that did not participate in model training to prepare four groups of audio files for speech-quality listening tests. The four groups are labeled VD, VH, WD, and WH, and each group contains two files, produced by voice conversion with the speaker pairings MA=>MB and MA=>FA, distinguished here by the suffixes _1 and _2. The V of VD and VH means that no target frame selection is applied, while the W of WD and WH means that target frame selection is applied; in addition, the D of VD and WD means that the DCC vectors are fed directly to LMR mapping, as in the processing flow of Figure 1, while the H of VH and WH means that the DCC vectors first undergo the PCA coefficient transform and the CDF coefficient transform before LMR mapping, as in the processing flow of Figure 2. These four groups of audio files can be downloaded and auditioned at: http://guhy.csie.ntust.edu.tw/vcHeqLmr/.

With these four groups of files, we first arranged two listening experiments. In the first, each subject played the two file pairs (VD_1, VH_1) and (VD_2, VH_2) in turn; in the second, the pairs (WD_1, WH_1) and (WD_2, WH_2). The subject then gave each file pair a score indicating whether the speech quality of the left file is better or worse than that of the right file. The subjects of both experiments were the same 12 students, most of whom were unfamiliar with the research field of voice conversion. The scoring standard is: 2 (-2) points means the quality of the right (left) file is clearly better than that of the left (right) file; 1 (-1) point means the right (left) file is slightly better; 0 points means the quality of the two files cannot be distinguished. After the two listening experiments, we collected the subjects' scores and obtained the average scores shown in Table 6.

Table 6. Speech-quality listening test: DCC vs. HEQ

                            DCC vs. HEQ                    DCC vs. HEQ
                            (no target frame selection)    (with target frame selection)
  average score AVG (STD)   0.583 (0.776)                  0.375 (0.824)

From the two average scores of Table 6 (i.e., 0.583 and 0.375), both positive, it can be seen that performing histogram equalization before LMR mapping gives somewhat better speech quality than LMR-mapping the DCC vectors directly. In addition, the average score of the second listening experiment (0.375) is slightly lower than that of the first (0.583), indicating that after target frame selection has been applied, the speech-quality improvement brought by histogram equalization becomes less noticeable.

Next, we re-arranged the four groups of files into another two listening experiments. In the third, each subject played the two file pairs (VD_1, WD_1) and (VD_2, WD_2) in turn; in the fourth, the pairs (VH_1, WH_1) and (VH_2, WH_2). Each subject again gave each file pair a score indicating whether the left file's quality is better or worse than the right file's. The third and fourth experiments also had 12 student subjects, most of them unfamiliar with voice conversion research; the scoring standard and score range are the same as described above. After these two listening experiments, we collected the scores and obtained the average scores shown in Table 7. From the two average scores, 0.917 and 1.125, it can be seen that merely adding the target frame selection processing already improves the quality of the converted speech noticeably, and this improvement is much more evident than those in Table 6; the outcome of these two listening experiments thus agrees with the VR values measured in Table 5.

Table 7. Speech-quality listening test: with vs. without target frame selection (TFS)

                            TFS_no vs. TFS_yes    TFS_no vs. TFS_yes
                            (DCC + LMR)           (HEQ + LMR)
  average score AVG (STD)   0.917 (0.584)         1.125 (0.680)

5. Conclusion

We have studied improving the voice conversion method based on linear multivariate regression (LMR) spectral mapping, adding the processing steps of histogram equalization and target frame selection to the processing flow in order to raise the quality of the converted speech. When we insert the "histogram equalization" processing (comprising the PCA coefficient transform and the CDF coefficient transform) between the DCC estimation and the LMR mapping of the Figure 1 flow, the average error distance of voice conversion grows from 0.5382 (Gu et al., 2012) to 0.5414; the results of the subjective listening tests, however, show that the quality of the converted speech is better than without histogram equalization, so the histogram-equalization processing can be used to relieve the spectral over-smoothing problem caused by LMR mapping. In addition, on the question of whether the source and target speakers should share the principal component vectors, the experimental results show that letting the two speakers share the principal component vectors is the better practice, reducing the average error of voice conversion from 0.5447 to 0.5414.
雜訊環境下應用線性估測編碼於特徵時序列之強健性語音辨識 (Robust Speech Recognition by Applying Linear Predictive Coding to Feature Time Sequences in Noisy Environments)
spectrum replacement, cepstral gain normalization (CGN) (Yoshizawa et al., 2004), cepstral mean and variance normalization (CMVN) (Tiberewala & Hermansky, 1997), cepstral histogram normalization (CHN) (Hilger & Ney, 2006), cepstral shape normalization (CSN) (Du & Wang, 2008), cepstral mean and variance normalization plus auto-regressive-moving-average filtering (MVA) (Chen & Bilmes, 2007), and so on. It is worth mentioning that, in recent years, our speech laboratory has developed many robustness methods of this kind for the temporal-sequence domain of cepstral features, including: generalized-log magnitude spectrum mean normalization (GLMSMN) (Hsu et al., 2012), modulation spectrum exponential weighting (MSEW) (Hung et al., 2012a), modulation spectrum replacement (MSR) (Hung et al., 2012b), modulation spectrum filtering (MSF) (Hung et al., 2012b), and sub-band modulation spectrum compensation.
4.1 LPCF of Order 2 Applied to MFCC-Based Features

Table 1. Average recognition rates (%) over five SNRs (20 dB, 15 dB, 10 dB, 5 dB, and 0 dB) for the MFCC features with and without LPCF (order 2) processing, under the different test sets

            Set A    Set B    Set C    Avg
  MFCC      59.24    56.37    67.53    59.75
  LPCF      63.90    61.96    66.44    63.63

4.2 LPCF of Order 2 Applied to MFCC Features Preprocessed by CMVN, CHN, or MVA

Here, LPCF is combined with three well-known time-sequence processing techniques: CMVN, CHN, and MVA.

Table 2. Average recognition rates (%) over five SNRs (20 dB, 15 dB, 10 dB, 5 dB, and 0 dB) for CMVN and CMVN cascaded with LPCF (order 2), under the different test sets

              Set A    Set B    Set C    Avg
  CMVN        73.83    75.01    75.09    74.55
  MVN+LPCF    77.14    78.84    77.67    77.93

Table 3. Average recognition rates (%) over five SNRs (20 dB, 15 dB, 10 dB, 5 dB, and 0 dB) for CHN and CHN cascaded with LPCF (order 2), under the different test sets

              Set A    Set B    Set C    Avg
  CHN         81.42    83.34    81.51    82.21
  CHN+LPCF    77.14    78.84    77.67    77.93

Table 4. Average recognition rates (%) over five SNRs (20 dB, 15 dB, 10 dB, 5 dB, and 0 dB) for MVA and MVA cascaded with LPCF, under the different test sets

              Set A    Set B    Set C    Avg
  MVA         78.15    79.17    79.12    78.75
  MVA+LPCF    78.15    79.17    79.12    78.75

4.3 LPCF Order

5. Conclusion

In this paper, we proposed a new method based on linear predictive coding, LPCF, applied to cepstral sequences. Although this new method looks simple, there are many reasonable grounds for showing that the new sequences contain less distortion, i.e., are more robust against noise. Many improved cepstral-sequence features, such as CMVN, CHN, and MVA, can clearly alleviate the mismatch brought by noise; however, when these methods process speech-feature sequences whose noise component is small, they can distort the speech component. Therefore, besides investigating the change in recognition performance when LPCF is applied to the original MFCC features, we also passed the noise-robust features produced by the above methods through our proposed LPCF. From the experimental data, we observe that the recognition rate can be raised by LPCF; and from the power spectral density plots, we can see that the linear-predictive-coding filtering method indeed suppresses the high-frequency components of a feature sequence and emphasizes the low-frequency components.

In the past, linear prediction algorithms were mostly used for spectrum estimation, and such usage usually requires the linear prediction order to be tuned to 8-12 to perform well; in this paper, however, where linear prediction is applied to cepstral sequences, a very small order already yields features with higher noise robustness.
no.95-02/98-04 \u4e2d\u592e\u7814\u7a76\u9662\u5e73\u8861\u8a9e\u6599\u5eab\u7684\u5167\u5bb9\u8207\u8aaa\u660e 75 _____ _____ \u5728\u5be6\u969b\u61c9\u7528\u4e0a\uff0c\u904e\u53bb\u5f88\u591a\u6587\u737b\u88e1\uff0c\u7dda\u6027\u4f30\u6e2c\u7de8\u78bc\u5df2\u662f\u73fe\u4eca\u666e\u53ca\u7684\u6578\u4f4d\u97f3\u8a0a\u8655\u7406\u6280\u8853\uff0c \u5176\u4e3b\u8981\u512a\u9ede\u662f\u4f4e\u4f4d\u5143\u7387\u8207\u9ad8\u58d3\u7e2e\u7387\uff0c\u4f46\u5e38\u53ea\u662f\u4fb7\u9650\u65bc\u50b3\u9001\u8a9e\u97f3\u8a0a\u865f\uff0c\u800c\u6211\u5011\u6240\u63d0\u51fa\u7684 LPCF \u65b9\u6cd5\u53ef\u4ee5\u904b\u7528\u65bc\u8a9e\u97f3\u7279\u5fb5\u7684\u50b3\u8f38\u4e0a\uff0c\u85c9\u7531\u9069\u7576\u4e4b LPC \u968e\u6578\u7684\u9078\u64c7\uff0c\u4f7f\u5176\u8a9e\u97f3\u7279\u5fb5 D Das, Dipankar 11. no.95-02/98-04 \u4e2d\u592e\u7814\u7a76\u9662\u5e73\u8861\u8a9e\u6599\u5eab\u7684\u5167\u5bb9\u8207\u8aaa\u660e 3 8 6 _____ 12. no.95-03 \u8a0a\u606f\u70ba\u672c\u7684\u683c\u4f4d\u8a9e\u6cd5\u8207\u5176\u5256\u6790\u65b9\u6cd5 75 _____ _____ 12. no.95-03 \u8a0a\u606f\u70ba\u672c\u7684\u683c\u4f4d\u8a9e\u6cd5\u8207\u5176\u5256\u6790\u65b9\u6cd5 3 8 6 _____ 13. no.96-01 \u300c\u641c\u300d\u6587\u89e3\u5b57-\u4e2d\u6587\u8a5e\u754c\u7814\u7a76\u8207\u8cc7\u8a0a\u7528\u5206\u8a5e\u6a19\u6e96 110 _____ _____ and Sivaji Bandyopadhyay. Emotion Co-referencing -Emotional Expression, 13. no.96-01 \u300c\u641c\u300d\u6587\u89e3\u5b57-\u4e2d\u6587\u8a5e\u754c\u7814\u7a76\u8207\u8cc7\u8a0a\u7528\u5206\u8a5e\u6a19\u6e96 8 13 11 _____ 14. no.97-01 \u53e4\u6f22\u8a9e\u8a5e\u983b\u8868 (\u7532) 400 _____ _____ \u50b3\u905e\u6642\uff0c\u4e0d\u6703\u9020\u6210\u8fa8\u8b58\u6548\u679c\u7684\u964d\u4f4e\uff0c\u751a\u81f3\u53ef\u4ee5\u63d0\u5347\u8a9e\u97f3\u7279\u5fb5\u7684\u6297\u566a\u80fd\u529b\u3001\u4f7f\u50b3\u8f38\u7684\u8a9e\u97f3 Holder, and Topic; 18(1): 79-98 14. no.97-01 \u53e4\u6f22\u8a9e\u8a5e\u983b\u8868 (\u7532) 19 31 25 _____ 15. 
no.97-02 \u8ad6\u8a9e\u8a5e\u983b\u8868 90 _____ _____ \u7279\u5fb5\uff0c\u540c\u6642\u5177\u5099\u50b3\u8f38\u6548\u7387\u8207\u5f37\u5065\u6548\u80fd\u7684\u512a\u9ede\u3002 \u5728\u672a\u4f86\u5c55\u671b\u4e2d\uff0c\u7531\u65bc\u6211\u5011\u6240\u63d0\u51fa\u7684 LPCF \u6cd5\u7684\u7f3a\u9ede\u4e4b\u4e00\uff0c\u5728\u65bc\u9700\u8981\u6574\u53e5\u8a9e\u97f3\u7684\u7279\u5fb5 E Esposito, Richard 15. no.97-02 \u8ad6\u8a9e\u8a5e\u983b\u8868 9 14 12 _____ 16. no.98-01 \u8a5e\u983b\u8a5e\u5178 18 30 26 _____ 16 no.98-01 \u8a5e\u983b\u8a5e\u5178 395 _____ _____ 17. no.98-02 Accumulated Word Frequency in CKIP Corpus 15 25 21 _____ 17. no.98-02 Accumulated Word Frequency in CKIP Corpus 340 _____ _____ \u7686\u5df2\u63a5\u6536\u5230\u5f8c\uff0c\u624d\u80fd\u7cbe\u78ba\u5730\u4f30\u6e2c LPCF \u7684\u53c3\u6578\uff0c\u672a\u4f86\u6211\u5011\u5e0c\u671b\u91dd\u5c0d\u9019\u7f3a\u9ede\u52a0\u4ee5\u6539\u5584\uff0c\u53e6 see Yang, Li-chiung, 18(3): 21-44 18. no.98-03 \u81ea\u7136\u8a9e\u8a00\u8655\u7406\u53ca\u8a08\u7b97\u8a9e\u8a00\u5b78\u76f8\u95dc\u8853\u8a9e\u4e2d\u82f1\u5c0d\u8b6f\u8868 4 9 7 _____ 18. no.98-03 \u81ea\u7136\u8a9e\u8a00\u8655\u7406\u53ca\u8a08\u7b97\u8a9e\u8a00\u5b78\u76f8\u95dc\u8853\u8a9e\u4e2d\u82f1\u5c0d\u8b6f\u8868 90 _____ _____ \u5916\u6211\u5011\u5e0c\u671b\u66f4\u9032\u4e00\u6b65\u7684\u7814\u7a76\u6211\u5011\u6240\u63d0\u4e4b LPCF \u6cd5\u76f8\u95dc\u7684\u7406\u8ad6\u57fa\u790e\uff0c\u4e26\u4e14\u53ef\u4ee5\u5229\u7528\u52d5\u614b\u8abf \u9069\u7684\u65b9\u6cd5\u4f86\u6c42\u53d6 LPCF \u6cd5\u4e2d\u7684\u968e\u6578\uff0c\u7136\u800c\u63d0\u5347\u6b64\u6cd5\u7684\u6548\u80fd\uff0c\u6b64\u5916\uff0c\u6211\u5011\u4e5f\u5c07\u5ee3\u6cdb\u5730\u6e2c\u8a66 LPCF \u6cd5\uff0c\u4f7f\u5176\u80fd\u66f4\u9032\u4e00\u6b65\u904b\u7528\u65bc\u5176\u5b83\u5e72\u64fe\u8207\u5931\u771f\u74b0\u5883\u7684\u7279\u5fb5\u5f37\u5065\u6027\u4e4b\u6539\u5584\u4e0a\u3002 19. 
no.02-01 \u73fe\u4ee3\u6f22\u8a9e\u53e3\u8a9e\u5c0d\u8a71\u8a9e\u6599\u5eab\u6a19\u8a3b\u7cfb\u7d71\u8aaa\u660e 8 13 11 _____ 19. no.02-01 \u73fe\u4ee3\u6f22\u8a9e\u53e3\u8a9e\u5c0d\u8a71\u8a9e\u6599\u5eab\u6a19\u8a3b\u7cfb\u7d71\u8aaa\u660e 75 _____ _____ F Fan, Hao-teng Wen-yu Tseng, and Jeih-weih Hung. Employing 20. Computational Linguistics & Chinese Languages Processing (One year) (Back issues of IJCLCLP: US$ 20 per copy) ---100 100 _____ 20 \u8ad6\u6587\u96c6 COLING 2002 \u7d19\u672c 100 _____ _____ 21. Readings in Chinese Language Processing 25 25 21 _____ 21. \u8ad6\u6587\u96c6 COLING 2002 \u5149\u789f\u7247 300 _____ _____ Linear Prediction Coding in Feature Time Sequences for Robust Speech Recognition in 22. \u8ad6\u6587\u96c6 COLING 2002 Workshop \u5149\u789f\u7247 300 _____ _____AMOUNT _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____
LPCF \u968e\u6578 23. \u8ad6\u6587\u96c6 ISCSLP 2002 \u5149\u789f\u7247 2 \u8fa8\u8b58\u7387(%) \u4ea4\u8ac7\u7cfb\u7d71\u66a8\u8a9e\u5883\u5206\u6790\u7814\u8a0e\u6703\u8b1b\u7fa9 24. (\u4e2d\u83ef\u6c11\u570b\u8a08\u7b97\u8a9e\u8a00\u5b78\u5b78\u67031997\u7b2c\u56db\u5b63\u5b78\u8853\u6d3b\u52d5) 3 4 10% member discount: ___________Total Due:__________ 5 6 7 8 9 Noisy Environments; 18(4): 115-132 TOTAL _____ 300 _____ _____ 10 G 130 _____ __________
4. \u5be6\u9a57\u6578\u64da\u8207\u8a0e\u8ad6 \u672c\u7bc0\u5c07\u7531\u4e09\u90e8\u5206\u6240\u7d44\u6210\uff0c\u5728\u7b2c\u4e00\u8207\u7b2c\u4e8c\u90e8\u5206\uff0c\u6211\u5011\u56fa\u5b9a\u65b0\u63d0\u51fa\u7684 LPCF \u6cd5\u6240\u7528\u7684\u7dda\u6027\u4f30 LPCF \u968e\u6578 2 3 4 5 6 7 8 9 10 \u8fa8\u8b58\u7387(%) Gu, Hung-Yan and Jia-Wei Chang. Improving of Segmental LMR-Mapping Based Voice Conversion Method; 18(4): 94-114 H Hong, Jia-Fei and Chu-Ren Huang. Cross-Strait Lexical Differences: A Comparative Study based on \u2027 OVERSEAS USE ONLY \u4e2d\u6587\u8a08\u7b97\u8a9e\u8a00\u5b78\u671f\u520a (\u4e00\u5e74\u56db\u671f) \u5e74\u4efd\uff1a______ 25. (\u904e\u671f\u671f\u520a\u6bcf\u672c\u552e\u50f9500\u5143) ---2,500 _____ _____ \u2027 PAYMENT\uff1a \u25a1 Credit Card ( Preferred ) 26. Readings of Chinese Language Processing 675 _____ _____ 27. \u5256\u6790\u7b56\u7565\u8207\u6a5f\u5668\u7ffb\u8b6f 1990 150 _____ _____ \u5408 \u8a08 _____ _____ \u203b \u6b64\u50f9\u683c\u8868\u50c5\u9650\u570b\u5167 (\u53f0\u7063\u5730\u5340) \u4f7f\u7528 \u25a1 Name (please print): Signature: \u5283\u64a5\u5e33\u6236\uff1a\u4e2d\u83ef\u6c11\u570b\u8a08\u7b97\u8a9e\u8a00\u5b78\u5b78\u6703 \u5283\u64a5\u5e33\u865f\uff1a19166251
\u6e2c\u968e\u6578\u70ba 2\uff0c\u5206\u5225\u4f5c\u7528\u65bc MFCC \u57fa\u790e\u7279\u5fb5\u3001\u8207\u7d93\u904e\u5404\u7a2e\u5f37\u5065\u6027\u6f14\u7b97\u6cd5\u9810\u8655\u7406\u5f8c\u7684\u7279\u5fb5\u4e0a\uff0c \u63a2\u8a0e\u5176\u5c0d\u8fa8\u8b58\u7387\u7684\u6539\u9032\u7a0b\u5ea6\u3002\u7b2c\u4e09\u90e8\u5206\u5247\u662f\u8b8a\u5316 LPCF \u6cd5\u4e2d\u7684\u7dda\u6027\u4f30\u6e2c\u4fc2\u6578\uff0c\u89c0\u5bdf\u5176\u5c0d \u65bc\u8fa8\u8b58\u7387\u7684\u5f71\u97ff\u3002 LPCF \u968e\u6578 2 3 4 5 6 7 8 9 Chinese Gigaword Corpus; 18(2): 19-34 Hsieh, Shu-Kai Fax: \uf997\u7d61\u96fb\u8a71\uff1a(02) 2788-3799 \u8f491502 E-mail: \uf997\u7d61\u4eba\uff1a \u9ec3\u742a \u5c0f\u59d0\u3001\u4f55\u5a49\u5982 \u5c0f\u59d0 E-mail:aclclp@hp.iis.sinica.edu.tw 10 \u8fa8\u8b58\u7387(%) see Hsu, Chan-Chia, 18(2): 57-84 Hsu, Chan-Chia \u8a02\u8cfc\u8005\uff1a \u6536\u64da\u62ac\u982d\uff1a Address\uff1a \u5730 \u5740\uff1a and Shu-Kai Hsieh. Back to the Basic: Exploring Base Concepts from the Wordnet Glosses; \u96fb \u8a71\uff1a E-mail:
18(2): 57-84
", "html": null, "type_str": "table", "text": "\u8cc7\u6599\u5eab\u4e2d\u7684\u8a2d\u5b9a\uff0c\u6700\u7d42 MFCC \u7279\u5fb5\u5305\u542b\u4e86 13 \u7dad\u7684\u975c\u614b\u7279\u5fb5(static features)\u9644\u52a0 \u4e0a\u5176\u4e00\u968e\u5dee\u5206\u8207\u4e8c\u968e\u5dee\u5206\u7684\u52d5\u614b\u7279\u5fb5 (dynamic features) \uff0c\u5171 39 \u7dad\u7279\u5fb5\u3002\u503c\u5f97\u4e00\u63d0\u7684\u662f\uff0c \u672c\u8ad6\u6587\u4e4b\u5f8c\u6240\u63d0\u7684\u5f37\u5065\u6027\u6280\u8853\uff0c\u7686\u662f\u4f5c\u7528\u65bc 13 \u7dad\u7684\u975c\u614b\u7279\u5fb5\u4e0a\uff0c\u518d\u7531\u66f4\u65b0\u5f8c\u7684\u975c\u614b\u7279\u5fb5 \u6c42\u53d6 26 \u7dad\u7684\u52d5\u614b\u7279\u5fb5\u3002\u65b0\u63d0\u51fa\u7684 LPCF \u7684\u6f14\u7b97\u6cd5\uff0c\u85c9\u6b64\u5f97\u5230\u66f4\u4f73\u7684\u8fa8\u8b58\u7cbe\u78ba\u5ea6\u3002 \u5728\u8072\u5b78\u6a21\u578b\u4e0a\uff0c\u6211\u5011\u63a1\u53d6\u9023\u7e8c\u8a9e\u97f3\u8fa8\u8b58\u4e2d\u5e38\u898b\u7684\u96b1\u85cf\u5f0f\u99ac\u53ef\u592b\u6a21\u578b(hidden Markov model, HMM)\uff0c\u4e26\u63a1\u7528\u7531\u5de6\u5230\u53f3(left-to-right)\u5f62\u5f0f\u7684 HMM\uff0c\u610f\u5373\u4e0b\u4e00\u500b\u6642\u9593\u9ede\u6240\u5728 \u7684\u72c0\u614b\u53ea\u80fd\u505c\u7559\u5728\u7576\u4e0b\u7684\u72c0\u614b\u6216\u4e0b\u4e00\u500b\u9130\u8fd1\u7684\u72c0\u614b\uff0c\u72c0\u614b\u7684\u8b8a\u9077\u96a8\u8457\u6642\u9593\u7531\u5de6\u81f3\u53f3\u4f9d\u5e8f \u524d\u9032\u3002\u6b64\u5916\uff0c\u6a21\u578b\u4e2d\u7684\u72c0\u614b\u89c0\u6e2c\u6a5f\u7387\u51fd\u6578\u70ba\u9023\u7e8c\u5f0f\u9ad8\u65af\u6df7\u5408\u6a5f\u7387\u51fd\u6578 (Gaussian mixtures) \uff0c \u6240\u4ee5\u6b64\u6a21\u578b\u53c8\u7a31\u70ba\u9023\u7e8c\u5bc6\u5ea6\u96b1\u85cf\u5f0f\u99ac\u53ef\u592b\u6a21\u578b(continuous-density hidden Markov model, CDHMM)\u3002\u6211\u5011\u63a1\u7528\u4e86 HTK(HTK, n.d.)\u8edf\u9ad4\u4f86\u8a13\u7df4\u4e0a\u8ff0\u7684 HMM\uff0c\u5728\u6a21\u578b\u55ae\u4f4d\u7684\u9078 
\u53d6\u4e0a\uff0c\u63a1\u7528\u524d\u5f8c\u6587\u7368\u7acb(context independent)\u7684\u6a21\u578b\u6a23\u5f0f\uff0c\u6240\u5f97\u4e4b\u8072\u5b78\u6a21\u578b\u5305\u542b\u4e86 11 \u500b\u6578\u5b57(oh, zero, one, \u2026, nine)\u8207\u975c\u97f3\u7684\u96b1\u85cf\u5f0f\u99ac\u53ef\u592b\u6a21\u578b\uff0c\u6bcf\u500b\u6578\u5b57\u7684 HMM \u7686\u5305\u542b \u4e86 16 \u500b\u72c0\u614b\uff0c\u800c\u6bcf\u500b\u72c0\u614b\u7531 3 \u500b\u9ad8\u65af\u6df7\u5408\u51fd\u6578\u7d44\u6210\u3002 \u5217\u51fa\u4e86\u7dda\u6027\u4f30\u6e2c\u968e\u6578\u70ba 2 \u4e4b LPCF \u6cd5\u4f5c\u7528\u65bc MFCC \u57fa\u790e\u7279\u5fb5\u6240\u5f97\u4e4b\u8fa8\u8b58\u7387\u3002\u5c07\u6b64\u8868 \u7684\u6578\u64da\u8207 MFCC \u57fa\u790e\u7279\u5fb5\u6240\u5f97\u7684\u8fa8\u8b58\u7387\u76f8\u6bd4\u8f03\uff0c\u6211\u5011\u770b\u5230\u5728 Set A \u8207 Set B \u9019\u5169\u7d44\u96dc\u8a0a \u74b0\u5883\u4e0b\uff0cLPCF \u6cd5\u53ef\u4ee5\u4f7f MFCC \u9054\u5230\u66f4\u4f73\u7684\u8fa8\u8b58\u7d50\u679c\uff0c\u5e73\u5747\u9032\u6b65\u7387\u7d04\u5728 4\uff05\uff0c\u53ef\u898b LPCF \u6cd5\u53ef\u4ee5\u63d0\u5347 MFCC \u7279\u5fb5\u5728\u52a0\u6210\u6027\u96dc\u8a0a\u5e72\u64fe\u4e0b\u7684\u5f37\u5065\u6027\uff0c\u7136\u800c\uff0c\u5728 Set C \u6b64\u540c\u6642\u5305\u542b\u901a\u9053 \u5931\u771f\u8207\u52a0\u6210\u6027\u96dc\u8a0a\u7684\u74b0\u5883\u4e0b\uff0cLPCF \u6cd5\u4e26\u672a\u5e36\u4f86\u5be6\u8cea\u7684\u9032\u6b65\uff0c\u51f8\u986f\u4e86\u6b64\u65b9\u6cd5\u8f03\u4e0d\u9069\u7528\u65bc \u901a\u9053\u5e72\u64fe\u4e0b\u7684\u8a9e\u97f3\u8fa8\u8b58\u3002 \u8868 1. 
\u539f\u59cb MFCC \u57fa\u790e\u7279\u5fb5\u8207\u5176\u7d93 LPCF \u6cd5(\u968e\u6578\u70ba MVA \u7686\u80fd\u660e\u986f\u63d0\u5347 MFCC \u4e4b\u96dc\u8a0a\u5f37\u5065 \u6027\u3001\u5f97\u5230\u8f03\u9ad8\u7684\u8fa8\u8b58\u7387\uff0c\u56e0\u6b64\uff0c\u9019\u88e1\u6211\u5011\u5c07 LPCF \u6cd5\u4f5c\u7528\u65bc\u7d93 CMVN\u3001CHN \u6216 MVA \u6cd5 \u9810\u8655\u7406\u5f8c\u7684 MFCC \u7279\u5fb5\u4e0a\uff0c\u89c0\u5bdf LPCF \u6cd5\u662f\u5426\u80fd\u5920\u4f7f\u5b83\u5011\u7684\u8fa8\u8b58\u7387\u9032\u4e00\u6b65\u63d0\u5347\uff0c\u503c\u5f97\u6ce8 \u610f\u7684\u662f\uff0c\u6b64\u6642 LPCF \u6cd5\u6240\u4f7f\u7528\u7684\u7dda\u6027\u4f30\u6e2c\u4fc2\u6578\u5fc5\u9808\u7531\u9810\u8655\u7406\u5f8c\u7684\u65b0\u7279\u5fb5\u6c42\u5f97\uff0c\u800c\u975e\u76f4\u63a5 \u63a1\u53d6\u539f\u59cb MFCC \u4e4b LPCF \u6cd5\u6240\u904b\u7528\u7684\u7dda\u6027\u4f30\u6e2c\u4fc2\u6578\u3002\u8ddf\u7b2c\u4e00\u90e8\u5206\u76f8\u540c\u7684\u662f\uff0c\u6b64\u6642 LPCF \u4e4b\u7dda\u6027\u4f30\u6e2c\u968e\u6578\u4ecd\u7136\u56fa\u5b9a\u70ba 2\u3002\u5176\u8fa8\u8b58\u7387\u5206\u5225\u5217\u65bc\u8868 2\u30013 \u8207 4\u3002\u5f9e\u9019\u4e09\u500b\u8868\u7684\u6578\u64da\uff0c\u6211 \u5011\u5f97\u5230\u4ee5\u4e0b\u7684\u89c0\u5bdf\u7d50\u679c\uff1a 1. 
\u7121\u8ad6\u662f\u4f5c\u7528\u54ea\u4e00\u7a2e\u65b9\u6cd5\u9810\u8655\u7406\u5f8c\u7684\u7279\u5fb5\uff0cLPCF \u6cd5\u5728\u4e09\u7d44\u96dc\u8a0a\u74b0\u5883(Sets A, B, C)\u4e0b\u7686 \u80fd\u660e\u986f\u63d0\u5347\u5176\u5e73\u5747\u8fa8\u8b58\u7387\uff0c\u4f8b\u5982\u5c31\u6574\u9ad4\u5e73\u5747\u8fa8\u8b58\u7387\u800c\u8a00\uff0cLPCF \u6cd5\u80fd\u4f7f CMVN\u3001 CHN \u8207 MVA \u9810\u8655\u7406\u4e4b\u7279\u5fb5\u5206\u5225\u63d0\u5347\u4e86 3.38\uff05\u30012.2\uff05\u8207 0.87\uff05\uff0c\u6b64\u4ee3\u8868\u4e86 LPCF \u80fd\u8207\u9019\u4e9b\u8457 \u540d\u7684\u6642\u5e8f\u57df\u5f37\u5065\u6027\u6280\u8853\u6709\u826f\u597d\u7684\u52a0\u6210\u6027\uff0c\u63db\u8a00\u4e4b\uff0cLPCF \u53ef\u9032\u4e00\u6b65\u964d\u4f4e\u9019\u4e9b\u6280\u8853\u8655\u7406\u5f8c \u6b98\u9918\u7684\u96dc\u8a0a\u4e0d\u5339\u914d\u6210\u5206\uff0c\u9032\u800c\u5f97\u5230\u66f4\u4f73\u7684\u8fa8\u8b58\u7387\u3002 2. \u4e0d\u540c\u65bc\u4f5c\u7528\u65bc\u539f\u59cb MFCC \u4e4b\u7d50\u679c\uff0cLPCF \u4f5c\u7528\u65bc\u4e0a\u8ff0\u4e09\u7a2e\u9810\u8655\u7406\u6280\u8853\u5f8c\u7684\u7279\u5fb5\u6642\uff0c\u4e5f\u80fd \u4f7f Set C \u6b64\u7d44\u5305\u542b\u4e86\u901a\u9053\u5931\u771f\u8207\u52a0\u6210\u6027\u96dc\u8a0a\u7684\u8a9e\u97f3\uff0c\u8fa8\u8b58\u7387\u6709\u6240\u63d0\u5347\uff0c\u5176\u4e2d\u53ef\u80fd\u539f\u56e0\u5728 \u65bc\uff0c\u4e09\u7a2e\u9810\u8655\u7406\u6280\u8853\u6709\u6548\u964d\u4f4e\u901a\u9053\u6548\u61c9\u5f8c\uff0cLPCF \u6cd5\u63a5\u8457\u628a\u52a0\u6210\u6027\u96dc\u8a0a\u9020\u6210\u7684\u5931\u771f\u4f5c\u6709 \u6548\u7684\u6291\u5236\uff0c\u9032\u800c\u5728\u8fa8\u8b58\u4e0a\u6709\u8f03\u597d\u7684\u6548\u679c\u3002 3. 
\u5728\u4e09\u7a2e\u9810\u8655\u7406\u6280\u8853\u7684\u6bd4\u8f03\u4e0a\uff0cLPCF \u5c0d\u65bc MVA \u7279\u5fb5\u7684\u8fa8\u8b58\u7387\u63d0\u5347\uff0c\u660e\u986f\u8f03\u5176\u5c0d CMVN \u8207 CHN \u7279\u5fb5\u4f86\u7684\u5c0f\uff0c\u5176\u4e2d\u53ef\u80fd\u539f\u56e0\u662f\uff0cMVA \u6cd5\u4e2d\u5df2\u7d93\u7d50\u5408\u4e86\u4e00\u500b\u5f62\u5f0f\u70ba ARMA \u7684\u4f4e \u901a\u6ffe\u6ce2\u5668\uff0c\u4e4b\u5f8c\u518d\u7d50\u5408 LPCF \u4e4b\u985e\u4f3c\u7684\u6ffe\u6ce2\u8655\u7406\uff0c\u6539\u9032\u7684\u6548\u61c9\u8f03\u4e0d\u660e\u986f\u3002 \u8868 2. CMVN \u6cd5\u8207 MVN \u4e32\u806f LPCF \u6cd5(\u968e\u6578\u70ba 2) \uff0c\u5728\u4e0d\u540c\u7d44\u5225\u4e4b\u4e0b\u3001\u53d6 \u8b8a\u5316LPCF\u6cd5\u4e2d\u7dda\u6027\u4f30\u6e2c\u4e4b\u968e\u6578(order)\u7522\u751f\u7684\u6548\u61c9 \u5728\u7b2c\u4e00\u90e8\u5206\u8207\u7b2c\u4e8c\u90e8\u5206\u4e2d\uff0c\u6211\u5011\u4f7f\u7528\u7684 LPCF \u6cd5\uff0c\u5176\u4e2d\u7dda\u6027\u4f30\u6e2c\u7684\u968e\u6578\u56fa\u5b9a\u70ba 2\uff0c\u5f9e\u5176 \u5be6\u9a57\u7d50\u679c\uff0c\u6211\u5011\u89c0\u5bdf\u5230\u4f7f\u7528\u5f88\u4f4e\u7684\u968e\u6578\u5c31\u80fd\u4f7f LPCF \u6cd5\u767c\u63ee\u4e0d\u932f\u7684\u6548\u80fd\u3001\u6709\u6548\u6539\u5584 CMVN\u3001 CHN \u8207 MVA \u9810\u8655\u7406\u5f8c\u7684 MFCC \u8a9e\u97f3\u7279\u5fb5\u4e4b\u5f37\u5065\u6027\u3002\u5728\u9019\u4e00\u90e8\u5206\uff0c\u6211\u5011\u9032\u4e00\u6b65\u5c07 LPCF \u6cd5\u7684\u7dda\u6027\u4f30\u6e2c\u7684\u968e\u6578\u52a0\u4ee5\u8b8a\u5316\uff0c\u63a2\u8a0e\u6b64\u8b8a\u5316\u5c0d\u65bc\u8fa8\u8b58\u7387\u7522\u751f\u7684\u5f71\u97ff\u3002\u985e\u4f3c\u4e4b\u524d\u7684\u5169\u90e8\u5206\uff0c \u8b8a\u5316\u7dda\u6027\u4f30\u6e2c\u968e\u6578\u7684 LPCF \u6cd5\u6703\u5206\u5225\u4f5c\u7528\u65bc\u539f\u59cb MFCC \u7279\u5fb5\u3001\u4ee5\u53ca\u7d93 CMVN\u3001CHN \u6216 MVA \u9810\u8655\u7406\u5f8c\u7684\u7279\u5fb5\u3002 \u9996\u5148\uff0c\u6211\u5011\u89c0\u5bdf\u4e0d\u540c\u968e\u6578\u4e4b LPCF \u6cd5\u4f5c\u7528\u65bc MFCC 
\u539f\u59cb\u7279\u5fb5\u7684\u6548\u61c9\uff0c\u6211\u5011\u5c07\u4f5c\u7528\u65bc MFCC \u539f\u59cb\u7279\u5fb5\u7684 LPCF \u6cd5\u5176\u968e\u6578\u5206\u5225\u8a2d\u70ba 2, 3, 4, \u2026, 10\uff0c\u9032\u800c\u57f7\u884c\u5c0d\u61c9\u7684\u8fa8\u8b58\u5be6\u9a57\u3002 \u8868 5 \u5217\u51fa\u4e86\u4e0d\u540c\u968e\u6578\u4e4b LPCF \u6cd5\u6240\u5f97\u7684\u8fa8\u8b58\u7387\u7d50\u679c\uff0c\u5f9e\u6b64\u8868\u7684\u6578\u64da\u4f86\u770b\uff0c\u7576\u968e\u6578\u70ba 3 \u6642\uff0c \u53ef\u5f97\u5230\u6700\u4f73\u7684\u8fa8\u8b58\u7387\uff0c\u4f46\u8207\u6211\u5011\u5148\u524d\u6240\u8a2d\u5b9a\u7684\u968e\u6578\u70ba 2 \u76f8\u8f03\uff0c\u8fa8\u8b58\u7387\u7684\u63d0\u5347\u4e0a\u4e26\u672a\u5341\u5206 \u660e\u986f\uff0c\u6211\u5011\u4e5f\u770b\u5230\uff0c\u589e\u52a0\u968e\u6578\u53cd\u800c\u4f7f\u8fa8\u8b58\u7387\u9010\u6f38\u4e0b\u964d\uff0c\u6b64\u73fe\u8c61\u7684\u53ef\u80fd\u539f\u56e0\u5728\u65bc\uff0c\u589e\u52a0\u968e \u6578\u4f7f LPCF \u6cd5\u5c0d\u61c9\u4e4b\u6ffe\u6ce2\u5668\u7684\u9577\u5ea6\u589e\u52a0\uff0c\u9032\u800c\u4f7f\u6ffe\u6ce2\u5668\u8f38\u51fa\u8a0a\u865f\u7684\u66ab\u614b\u97ff\u61c9(transient response)\u8b8a\u9577\uff0c\u76f8\u5c0d\u65bc\u8f38\u5165\u8a0a\u865f\u6709\u66f4\u9577\u7684\u5ef6\u9072(delay)\uff0c\u5728\u8981\u6c42\u8f38\u5165\u8a0a\u865f(\u5373\u539f\u59cb\u7279 \u5fb5\u6642\u9593\u5e8f\u5217)\u8207\u8f38\u51fa\u8a0a\u865f(\u5373\u85c9\u7531 LPCF \u66f4\u65b0\u5f8c\u7684\u7279\u5fb5\u5e8f\u5217)\u4e8c\u8005\u9577\u5ea6\u4e00\u81f4\u7684\u524d\u63d0\u4e0b\uff0c \u9054\u5230\u7a69\u614b (steady state) \u7684\u8f38\u51fa\u8a0a\u865f\u7684\u5340\u57df\u660e\u986f\u8b8a\u77ed\uff0c\u800c\u7121\u6cd5\u767c\u63ee LPCF \u6cd5\u6240\u9810\u671f\u7684\u6548\u679c\u3002 \u56e0\u6b64\u7e3d\u7d50\u800c\u8ad6\uff0c\u7576\u4f5c\u7528\u65bc\u539f\u59cb MFCC \u6642\uff0c\u8b8a\u5316 LPCF \u4e4b\u968e\u6578\u4e26\u672a\u80fd\u5c0d\u65bc\u9810\u671f\u7684\u5f37\u5065\u529f\u80fd \u6709\u660e\u986f\u7684\u6539\u5584\u3002 \u8868 5. 
\u4e0d\u540c\u968e\u6578\u4e4b LPCF \u6cd5\u4f5c\u7528\u65bc MFCC \u57fa\u790e\u7279\u5fb5\uff0c\u5728 10 \u985e\u96dc\u8a0a\u74b0\u5883\u8207 5 \u7a2e\u8a0a\u96dc \u6bd4\u4e4b\u7e3d\u5e73\u5747\u8fa8\u8b58\u7387(%)\u6bd4\u8f03 63.63 63.74 63.47 63.08 62.81 62.68 62.48 62.02 61.94 \u63a5\u8457\uff0c\u6211\u5011\u628a\u4e0d\u540c\u968e\u6578\u4e4b LPCF \u6cd5\u4f5c\u7528\u65bc\u7d93 MVN\u3001CHN \u6216 MVA \u9810\u8655\u7406\u5f8c\u7684\u7279\u5fb5\uff0c \u9032\u800c\u57f7\u884c\u5c0d\u61c9\u7684\u8fa8\u8b58\u5be6\u9a57\u3002\u8868 6\u30017 \u8207 8 \u5217\u51fa\u4e86\u6240\u5f97\u7684\u8fa8\u8b58\u7387\u7d50\u679c\uff0c\u5f9e\u9019\u4e09\u500b\u8868\u7684\u6578\u64da\uff0c \u4e26\u8207\u8868 2 \u81f3 4 \u7684\u6578\u64da\u6bd4\u8f03\uff0c\u6211\u5011\u6709\u4ee5\u4e0b\u7684\u767c\u73fe\u53ca\u8a0e\u8ad6\uff1a 1. \u5c31 CMVN \u9810\u8655\u7406\u4e4b\u7279\u5fb5\u800c\u8a00\uff0c\u6700\u4f73\u7684 LPCF \u4e4b\u968e\u6578\u70ba 3\uff0c\u76f8\u8f03\u65bc\u4e0d\u4f5c LPCF \u7684\u7d50\u679c\uff0c \u7e3d\u8fa8\u8b58\u7387\u53ef\u63d0\u5347 4.19%\uff0c\u800c\u8ddf\u539f\u59cb\u968e\u6578\u70ba 2 \u7684\u6578\u64da\u76f8\u8f03\uff0c\u968e\u6578\u8a2d\u70ba 3 \u53ef\u4f7f\u5e73\u5747\u8fa8\u8b58\u7387\u63d0 \u5347\u7d04 1%\u3002\u800c LPCF \u6cd5\u4f7f\u7528\u8d85\u904e 3 \u7684\u968e\u6578\u6642\uff0c\u5c0d\u61c9\u4e4b\u8fa8\u8b58\u7387\u4e26\u7121\u986f\u8457\u7684\u4e0b\u964d\uff0c\u5728\u6240\u9078\u5b9a \u4e4b\u968e\u6578\u7bc4\u570d 2 \u81f3 10 \u4e4b\u4e2d\uff0cCMVN \u6cd5\u8207 LPCF \u6cd5\u7684\u7d44\u5408\u7686\u6bd4\u55ae\u4e00 CMVN \u6cd5\u5f97\u5230\u66f4\u4f73\u7684 \u8fa8\u8b58\u6548\u80fd\u3002 2. 
\u5c31 CHN \u9810\u8655\u7406\u4e4b\u7279\u5fb5\u800c\u8a00\uff0c\u6700\u4f73\u7684 LPCF \u4e4b\u968e\u6578\u70ba 4\uff0c\u76f8\u8f03\u65bc\u4e0d\u4f5c LPCF \u7684\u7d50\u679c\uff0c\u7e3d \u8fa8\u8b58\u7387\u53ef\u63d0\u5347 2.65%\uff0c\u800c\u8ddf\u539f\u59cb\u968e\u6578\u70ba 2 \u7684\u6578\u64da\u76f8\u8f03\uff0c\u968e\u6578\u8a2d\u70ba 4 \u53ef\u4f7f\u5e73\u5747\u8fa8\u8b58\u7387\u63d0\u5347 \u7d04 0.5%\u3002\u5176\u4ed6\u7d50\u679c\u8207\u4e0a\u4e00\u9ede\u95dc\u65bc CMVN \u8207 LPCF \u4e4b\u7d44\u5408\u7684\u7d50\u679c\u5f88\u985e\u4f3c\u3002\u8db3\u898b\u7c21\u6613\u7684 LPCF \u6cd5(\u968e\u6578\u8f03\u5c11)\u5728\u8207 CHN \u6cd5\u7d50\u5408\u6642\uff0c\u5c31\u6709\u8fd1\u4e4e\u6700\u4f73\u7684\u6548\u80fd\u8868\u73fe\u3002 3. \u5c31 MVA \u9810\u8655\u7406\u4e4b\u7279\u5fb5\u800c\u8a00\uff0c\u6700\u4f73\u7684 LPCF \u4e4b\u968e\u6578\u70ba 10\uff0c\u6b64\u73fe\u8c61\u8207\u524d\u5169\u9ede\u6240\u8ff0\u4e4b CMVN \u53ca CHN \u6cd5\u8f03\u4e0d\u540c\uff0c\u4f46\u4ed4\u7d30\u89c0\u5bdf\uff0c\u53ef\u770b\u51fa\u7576\u8207 MVA \u6cd5\u7d50\u5408\u6642\uff0c\u4e0d\u540c\u968e\u6578\u4e4b LPCF \u6cd5\u6240 \u5f97\u5230\u7684\u8fa8\u8b58\u7387\u5341\u5206\u63a5\u8fd1\uff0c\u6700\u4f73\u5e73\u5747\u503c\u8207\u8ddf\u539f\u59cb\u968e\u6578\u70ba 2 \u7684\u6578\u64da\u76f8\u8f03\uff0c\u4e5f\u53ea\u63d0\u5347\u4e86 0.03% \u81f3 0.13%\uff0c\u8db3\u898b\u6b64\u6642 LPCF \u7684\u968e\u6578\u5c0d\u5176\u6548\u80fd\u5f71\u97ff\u751a\u5fae\u3002\u5982\u540c\u524d\u9762\u7684\u8a0e\u8ad6\uff0cLPCF \u6cd5\u8207 MVA \u6cd5\u7684\u52a0\u6210\u6027\u8f03\u4f4e\uff0c\u4f46\u662f\u4e8c\u8005\u7d50\u5408\u4ecd\u6bd4\u55ae\u4e00 MVA \u6cd5\u5728\u63d0\u5347 MFCC \u4e4b\u8fa8\u8b58\u7387\u7684\u8868\u73fe \u4e0a\u4f86\u7684\u597d\u3002 \u8868 6. 
\u4e0d\u540c\u968e\u6578\u4e4b LPCF \u6cd5\u4f5c\u7528\u65bc CMVN \u9810\u8655\u7406\u7279\u5fb5\uff0c\u5728 10 \u985e\u96dc\u8a0a\u74b0\u5883\u8207 5 \u7a2e\u8a0a \u96dc\u6bd4\u4e4b\u7e3d\u5e73\u5747\u8fa8\u8b58\u7387(%)\u6bd4\u8f03 77.93 78.74 78.28 78.11 77.86 77.80 77.80 77.75 77.82 \u8868 7. \u4e0d\u540c\u968e\u6578\u4e4b LPCF \u6cd5\u4f5c\u7528\u65bc CHN \u9810\u8655\u7406\u7279\u5fb5\uff0c\u5728 10 \u985e\u96dc\u8a0a\u74b0\u5883\u8207 5 \u7a2e\u8a0a\u96dc \u6bd4\u4e4b\u7e3d\u5e73\u5747\u8fa8\u8b58\u7387(%)\u6bd4\u8f03 84.41 84.82 84.86 84.81 84.71 84.66 84.58 84.59 84.63 \u8868 8. \u4e0d\u540c\u968e\u6578\u4e4b LPCF \u6cd5\u4f5c\u7528\u65bc MVA \u9810\u8655\u7406\u7279\u5fb5\uff0c\u5728 10 \u985e\u96dc\u8a0a\u74b0\u5883\u8207 5 \u7a2e\u8a0a\u96dc \u6bd4\u4e4b\u7e3d\u5e73\u5747\u8fa8\u8b58\u7387(%)\u6bd4\u8f03 79.62 79.25 79.51 79.59 79.69 79.58 79.72 79.71 79.73 Money Order or Check payable to \"The Association for Computation Linguistics and Chinese Language Processing \" or \"\u4e2d\u83ef\u6c11\u570b\u8a08\u7b97\u8a9e\u8a00\u5b78\u5b78\u6703\" \u2027 E-mail\uff1aaclclp@hp.iis.sinica.edu.tw" } } } }