{ "paper_id": "O15-1013", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:10:03.146169Z" }, "title": "", "authors": [ { "first": "Hao-Chun", "middle": [], "last": "\u6731\u7693\u99ff", "suffix": "", "affiliation": { "laboratory": "", "institution": "Yuan Ze University", "location": {} }, "email": "" }, { "first": "Jung-Hsi", "middle": [], "last": "Chu\uff0c", "suffix": "", "affiliation": { "laboratory": "", "institution": "Yuan Ze University", "location": {} }, "email": "" }, { "first": "\u65b9\u58eb\u8c6a", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "Yuan Ze University", "location": {} }, "email": "" }, { "first": "Fang\uff0c", "middle": [], "last": "Shih-Hau", "suffix": "", "affiliation": { "laboratory": "", "institution": "Yuan Ze University", "location": {} }, "email": "" }, { "first": "Chih-Min", "middle": [], "last": "\u6797\u5fd7\u6c11", "suffix": "", "affiliation": { "laboratory": "", "institution": "Yuan Ze University", "location": {} }, "email": "" }, { "first": "", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Yuan Ze University", "location": {} }, "email": "" }, { "first": "Yun-Fan", "middle": [ "Chang\uff0c" ], "last": "\u5f35\u96f2\u5e06", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Yu", "middle": [], "last": "\u66f9\u6631", "suffix": "", "affiliation": {}, "email": "" }, { "first": "", "middle": [], "last": "Tsao", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Traditionally, cerebellar model articulation controller (CMAC) is used in motor control, inverted pendulum robot, and nonlinear channel equalization. In this study, we investigate the capability of CMAC for speech enhancement. We construct a CMAC-based supervised speech enhancement system, which includes offline and online phases. 
In the offline phase, a paired noisy-clean speech dataset is prepared and used to train the parameters in a CMAC model. In the online phase, the trained CMAC model transforms the input noisy speech signals to enhanced speech signals with reduced noise components. To test the CMAC-based speech enhancement system, this study adopted three objective speech evaluation metrics: perceptual evaluation of speech quality (PESQ), segmental signal-to-noise ratio (SSNR), and speech distortion index (SDI). A well-known traditional speech enhancement approach, the minimum mean-square-error (MMSE) algorithm, was also evaluated for comparison. Experimental results demonstrated that CMAC provides superior performance to the MMSE method on all three objective evaluation metrics.", "pdf_parse": { "paper_id": "O15-1013", "_pdf_hash": "", "abstract": [ { "text": "Traditionally, cerebellar model articulation controller (CMAC) is used in motor control, inverted pendulum robot, and nonlinear channel equalization. In this study, we investigate the capability of CMAC for speech enhancement. We construct a CMAC-based supervised speech enhancement system, which includes offline and online phases. In the offline phase, a paired noisy-clean speech dataset is prepared and used to train the parameters in a CMAC model. In the online phase, the trained CMAC model transforms the input noisy speech signals to enhanced speech signals with reduced noise components. To test the CMAC-based speech enhancement system, this study adopted three objective speech evaluation metrics: perceptual evaluation of speech quality (PESQ), segmental signal-to-noise ratio (SSNR), and speech distortion index (SDI). A well-known traditional speech enhancement approach, the minimum mean-square-error (MMSE) algorithm, was also evaluated for comparison. 
Experimental results demonstrated that CMAC provides superior performance to the MMSE method on all three objective evaluation metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Background noise degrades the quality of speech signals; the main purpose of a speech enhancement system is to reduce the noise components and thereby raise the signal-to-noise ratio (SNR). Estimating clean speech from noisy speech is a speech technology of great importance in many practical applications, such as automatic speech recognition (ASR) and hearing aids [1, 2] . Speech enhancement algorithms fall roughly into two classes, namely unsupervised and supervised algorithms. Unsupervised algorithms have the advantage of requiring little or no data prepared in advance; a good family of unsupervised speech enhancement algorithms exploits spectral restoration [3] . Spectral restoration methods aim to estimate a gain function in the frequency domain that is used to attenuate the noise; spectral restoration methods include spectral subtraction (SS) [4] and Wiener filtering [5] , together with their various extensions [6] [7] [8] [9] . 
In addition, other spectral restoration methods derive probabilistic models of the speech signal and the noisy signal; successful examples include minimum mean-square error (MMSE) spectral estimation [10] [11] [12] [13] [14] , maximum a posteriori spectral amplitude (MAPA) estimators [15] [16] [17] [18] , and maximum likelihood spectral amplitude (MLSA) estimators [19, 20] . The goal is to estimate the noise power spectrum with noise tracking; common noise-tracking methods include voice activity detection (VAD) and minimum statistics (MS) [21, 22] . Once the noise power spectrum is obtained, the a priori SNR and the a posteriori SNR can be computed, and from these two SNRs a gain function can be calculated; applying this gain function for speech enhancement yields an estimate of the clean speech spectrum. Supervised speech enhancement algorithms, by contrast, require noisy and clean corpora to be mixed in advance in order to handle online speech enhancement; successful examples include Deep Neural Network (DNN) [23] , Deep Denoising Autoencoder (DDAE) [24] , and Sparse Coding [25] ", "cite_spans": [ { "start": 172, "end": 175, "text": "[1,", "ref_id": "BIBREF0" }, { "start": 176, "end": 178, "text": "2]", "ref_id": "BIBREF1" }, { "start": 284, "end": 287, "text": "[3]", 
"ref_id": "BIBREF2" }, { "start": 360, "end": 363, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 389, "end": 392, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 403, "end": 406, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 407, "end": 410, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 411, "end": 414, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 415, "end": 418, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 500, "end": 504, "text": "[10]", "ref_id": "BIBREF9" }, { "start": 505, "end": 509, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 510, "end": 514, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 515, "end": 519, "text": "[13]", "ref_id": "BIBREF12" }, { "start": 520, "end": 524, "text": "[14]", "ref_id": "BIBREF13" }, { "start": 586, "end": 590, "text": "[15]", "ref_id": "BIBREF14" }, { "start": 591, "end": 595, "text": "[16]", "ref_id": "BIBREF15" }, { "start": 596, "end": 600, "text": "[17]", "ref_id": "BIBREF16" }, { "start": 601, "end": 605, "text": "[18]", "ref_id": "BIBREF17" }, { "start": 665, "end": 669, "text": "[19,", "ref_id": "BIBREF18" }, { "start": 670, "end": 673, "text": "20]", "ref_id": "BIBREF19" }, { "start": 789, "end": 793, "text": "[21,", "ref_id": "BIBREF20" }, { "start": 794, "end": 797, "text": "22]", "ref_id": "BIBREF21" }, { "start": 996, "end": 1000, "text": "[23]", "ref_id": "BIBREF22" }, { "start": 1035, "end": 1039, "text": "[24]", "ref_id": "BIBREF23" }, { "start": 1055, "end": 1059, "text": "[25]", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "1. Introduction", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "[Figure 1(A): CMAC architecture with inputs x_1, x_2, \u22ef, x_N, receptive-field outputs b_j, weights w_j, and output y] 2. 
Association memory space: multiple elements can be accumulated into one block (N_B), where N_B = ceil(N_e/Layer) and ceil denotes rounding up to the nearest integer; typically N_B \u2265 2, and in Figure 1(B), N_B = 8 (A, B, C, D, E, F, G, H). N_A denotes the number of association memory spaces (N_A = N \u00d7 N_B). Each block (N_B) must contain a continuous bounded function, which may be defined as a triangular function, a wavelet function, or any other continuous bounded function; here the association memory functions adopted are Gaussian functions, expressed as Eq. (1): \u03c6_ij = exp[\u2212(x_i \u2212 m_ij)^2 / \u03c3_ij^2] for j = 1,2, \u22ef , N_B and i = 1,2, \u22ef , N", "eq_num": "(1)" } ], "section": "1. Introduction", "sec_num": null }, { "text": "where m_ij and \u03c3_ij are, respectively, the mean and the variance of the j-th block for the i-th input in the association memory functions, and x_i is the input signal. 3. 
Receptive-field space: multiple association memory spaces compose a receptive-field space; in this paper, N_B = N_R. As shown in Figure 1(B), the two corresponding blocks (N_B) in the two association memory spaces form one receptive field (N_R); for example, block A and block a form the receptive field (Aa). The j-th receptive-field function is expressed as Eq. (2): b_j = \u220f_{i=1}^{N} \u03c6_ij = exp[\u2212\u2211_{i=1}^{N} (x_i \u2212 m_ij)^2 / \u03c3_ij^2]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1. Introduction", "sec_num": null }, { "text": "(2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1. Introduction", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "The receptive-field functions can be written in vector form, as in Eq. (3): b = [b_1 , b_2 , \u22ef , b_{N_R}]^T", "eq_num": "(3)" } ], "section": "1. Introduction", "sec_num": null }, { "text": "4. Weight memory space: the adjustable weight value at each location in the receptive-field space can ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1. Introduction", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "be expressed as Eq. (4): w = [w_1 , w_2 , \u22ef , w_{N_R}]^T (4) 5. 
Output space: the CMAC output multiplies each corresponding pair of values from Eqs. (3) and (4) and sums the products, expressed as Eq. (5): y = w^T b = \u2211_{j=1}^{N_R} w_j b_j", "eq_num": "(5)" } ], "section": "1. Introduction", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Compression for Clinicians", "authors": [ { "first": "T", "middle": [], "last": "Venema", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Venema, Compression for Clinicians, Thomson Delmar Learning, 2006, Chapter 7.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Noise reduction in hearing aids: An overview", "authors": [ { "first": "H", "middle": [], "last": "Levitt", "suffix": "" } ], "year": 2001, "venue": "Journal of Rehabilitation Research and Development", "volume": "38", "issue": "", "pages": "111--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Levitt, \"Noise reduction in hearing aids: An overview,\" Journal of Rehabilitation Research and Development, vol. 38, pp. 111-121, 2001.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Fundamentals of Noise Reduction", "authors": [ { "first": "J", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Chen, Fundamentals of Noise Reduction, Springer Handbook of Speech Processing, 2008, Chapter 43.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Suppression of acoustic noise in speech using spectral subtraction", "authors": [ { "first": "S", "middle": [], "last": "Boll", "suffix": "" } ], "year": 1979, "venue": "Acoustics, Speech and Signal Processing", "volume": "27", "issue": "", "pages": "113--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. 
Boll, \"Suppression of acoustic noise in speech using spectral subtraction,\" IEEE Transactions, Acoustics, Speech and Signal Processing, vol. 27, pp. 113-120, 1979.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Speech enhancement based on a priori signal to noise estimation", "authors": [ { "first": "P", "middle": [], "last": "Scalart", "suffix": "" }, { "first": "J", "middle": [ "V" ], "last": "Filho", "suffix": "" } ], "year": 1996, "venue": "Proceedings ICASSP", "volume": "", "issue": "", "pages": "629--632", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Scalart and J. V. Filho, \"Speech enhancement based on a priori signal to noise estimation,\" Proceedings ICASSP, pp. 629-632, 1996.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A geometric approach to spectral subtraction", "authors": [ { "first": "Y", "middle": [], "last": "Lu", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Loizou", "suffix": "" } ], "year": 2008, "venue": "Speech Communication", "volume": "50", "issue": "", "pages": "453--466", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Lu and P. C. Loizou, \"A geometric approach to spectral subtraction,\" ELSEVIER, Speech Communication, vol. 50, pp. 453-466, 2008.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Adaptive \u03b2-order generalized spectral subtraction for speech enhancement", "authors": [ { "first": "J", "middle": [], "last": "Li", "suffix": "" }, { "first": "S", "middle": [], "last": "Sakamoto", "suffix": "" }, { "first": "S", "middle": [], "last": "Hongo", "suffix": "" }, { "first": "M", "middle": [], "last": "Akagi", "suffix": "" }, { "first": "Y", "middle": [], "last": "Suzuki", "suffix": "" } ], "year": 2008, "venue": "Signal Processing", "volume": "88", "issue": "", "pages": "2764--2776", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Li, S. Sakamoto, S. Hongo, M. Akagi and Y. 
Suzuki, \"Adaptive \u03b2-order generalized spectral subtraction for speech enhancement,\" ELSEVIER, Signal Processing, vol. 88, pp. 2764-2776, 2008.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A signal subspace approach for speech enhancement", "authors": [ { "first": "Y", "middle": [], "last": "Ephraim", "suffix": "" }, { "first": "H", "middle": [ "L V" ], "last": "Trees", "suffix": "" } ], "year": 1995, "venue": "IEEE Transactions, Speech and Audio Processing", "volume": "3", "issue": "", "pages": "251--266", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Ephraim and H. L. V. Trees, \"A signal subspace approach for speech enhancement,\" IEEE Transactions, Speech and Audio Processing, vol. 3, pp. 251-266, 1995.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Signal/noise KLT based approach for enhancing speech degraded by colored noise", "authors": [ { "first": "U", "middle": [], "last": "Mittal", "suffix": "" }, { "first": "N", "middle": [], "last": "Phamdo", "suffix": "" } ], "year": 2000, "venue": "IEEE Transactions, Speech and Audio Processing", "volume": "8", "issue": "", "pages": "159--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "U. Mittal and N. Phamdo, \"Signal/noise KLT based approach for enhancing speech degraded by colored noise,\" IEEE Transactions, Speech and Audio Processing, vol. 8, pp. 159-167, 2000.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator", "authors": [ { "first": "Y", "middle": [], "last": "Ephraim", "suffix": "" }, { "first": "D", "middle": [], "last": "Malah", "suffix": "" } ], "year": 1984, "venue": "Acoustics, Speech and Signal Processing", "volume": "32", "issue": "", "pages": "1109--1121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Ephraim and D. 
Malah, \"Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator,\" IEEE Transactions, Acoustics, Speech and Signal Processing, vol. 32, pp. 1109-1121, 1984.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Improved noise suppression filter using self-adaptive estimator of probability of speech absence", "authors": [ { "first": "I", "middle": [ "Y" ], "last": "Soon", "suffix": "" }, { "first": "S", "middle": [ "N" ], "last": "Koh", "suffix": "" }, { "first": "C", "middle": [ "K" ], "last": "Yeo", "suffix": "" } ], "year": 1999, "venue": "Signal Processing", "volume": "75", "issue": "", "pages": "151--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Y. Soon, S. N. Koh and C. K. Yeo, \"Improved noise suppression filter using self-adaptive estimator of probability of speech absence,\" ELSEVIER, Signal Processing, vol. 75, pp. 151-159, 1999.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Speech enhancement based on minimum mean-square error estimation and supergaussian priors", "authors": [ { "first": "R", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2005, "venue": "IEEE Transactions, Speech and Audio Processing", "volume": "13", "issue": "", "pages": "845--856", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Martin, \"Speech enhancement based on minimum mean-square error estimation and supergaussian priors,\" IEEE Transactions, Speech and Audio Processing, vol. 13, pp. 
845-856, 2005.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Speech enhancement based on generalized minimum mean square error estimators and masking properties of the auditory system", "authors": [ { "first": "J", "middle": [ "H L" ], "last": "Hansen", "suffix": "" }, { "first": "V", "middle": [], "last": "Radhakrishnan", "suffix": "" }, { "first": "K", "middle": [ "H" ], "last": "Arehart", "suffix": "" } ], "year": 2006, "venue": "Speech, and Language Processing", "volume": "14", "issue": "", "pages": "2049--2063", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. H. L. Hansen, V. Radhakrishnan and K. H. Arehart, \"Speech enhancement based on generalized minimum mean square error estimators and masking properties of the auditory system,\" IEEE Transactions, Audio, Speech, and Language Processing, vol. 14, pp. 2049-2063, 2006.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Tracking speech-presence uncertainty to improve speech enhancement non-stationary noise environments", "authors": [ { "first": "D", "middle": [], "last": "Malah", "suffix": "" }, { "first": "R", "middle": [ "V" ], "last": "Cox", "suffix": "" }, { "first": "A", "middle": [ "J" ], "last": "Accardi", "suffix": "" } ], "year": 1999, "venue": "Proceedings ICASSP", "volume": "", "issue": "", "pages": "789--792", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Malah, R. V. Cox and A. J. Accardi, \"Tracking speech-presence uncertainty to improve speech enhancement non-stationary noise environments,\" Proceedings ICASSP, pp. 
789-792, 1999.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Auditory-based spectral amplitude estimators for speech enhancement", "authors": [ { "first": "E", "middle": [], "last": "Plourde", "suffix": "" }, { "first": "B", "middle": [], "last": "Champagne", "suffix": "" } ], "year": 2008, "venue": "Speech, and Language Processing", "volume": "16", "issue": "", "pages": "1614--1622", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Plourde and B. Champagne, \"Auditory-based spectral amplitude estimators for speech enhancement,\" IEEE Transactions, Audio, Speech, and Language Processing, vol. 16, pp. 1614-1622, 2008.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Speech enhancement by MAP spectral amplitude estimation using a super-Gaussian speech model", "authors": [ { "first": "T", "middle": [], "last": "Lotter", "suffix": "" }, { "first": "P", "middle": [], "last": "Vary", "suffix": "" } ], "year": 2005, "venue": "EURASIP, Applied Signal Processing", "volume": "7", "issue": "", "pages": "1110--1126", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Lotter and P. Vary, \"Speech enhancement by MAP spectral amplitude estimation using a super-Gaussian speech model,\" EURASIP, Applied Signal Processing, vol. 7, pp. 1110-1126, 2005.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A data-driven approach to a priori SNR estimation", "authors": [ { "first": "S", "middle": [], "last": "Suhadi", "suffix": "" }, { "first": "C", "middle": [], "last": "Last", "suffix": "" }, { "first": "T", "middle": [], "last": "Fingscheidt", "suffix": "" } ], "year": 2011, "venue": "Speech, and Language Processing", "volume": "19", "issue": "", "pages": "186--195", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Suhadi, C. Last and T. Fingscheidt, \"A data-driven approach to a priori SNR estimation,\" IEEE Transactions, Audio, Speech, and Language Processing, vol. 19, pp. 
186-195, 2011.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Speech signal enhancement based on MAP algorithm in the ICA space", "authors": [ { "first": "Z", "middle": [], "last": "Xin", "suffix": "" }, { "first": "P", "middle": [], "last": "Jancovic", "suffix": "" }, { "first": "L", "middle": [], "last": "Ju", "suffix": "" }, { "first": "M", "middle": [], "last": "Kokuer", "suffix": "" } ], "year": 2008, "venue": "IEEE Transactions", "volume": "56", "issue": "", "pages": "1812--1820", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Xin, P. Jancovic, L. Ju and M. Kokuer, \"Speech signal enhancement based on MAP algorithm in the ICA space,\" IEEE Transactions, Signal Processing, vol. 56, pp. 1812-1820, 2008.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Speech enhancement using a soft-decision noise suppression filter", "authors": [ { "first": "R", "middle": [], "last": "Mcaulay", "suffix": "" }, { "first": "M", "middle": [], "last": "Malpass", "suffix": "" } ], "year": 1980, "venue": "Acoustics, Speech and Signal Processing", "volume": "28", "issue": "", "pages": "137--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. McAulay and M. Malpass, \"Speech enhancement using a soft-decision noise suppression filter,\" IEEE Transactions, Acoustics, Speech and Signal Processing, vol. 28, pp. 137-145, 1980.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Maximum likelihood based noise covariance matrix estimation for multi-microphone speech enhancement", "authors": [ { "first": "U", "middle": [], "last": "Kjems", "suffix": "" }, { "first": "J", "middle": [], "last": "Jensen", "suffix": "" } ], "year": 2012, "venue": "Proceedings EUSIPCO", "volume": "", "issue": "", "pages": "295--299", "other_ids": {}, "num": null, "urls": [], "raw_text": "U. Kjems and J. Jensen, \"Maximum likelihood based noise covariance matrix estimation for multi-microphone speech enhancement,\" Proceedings EUSIPCO, pp. 
295-299, 2012.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Spectral subtraction based on minimum statistics", "authors": [ { "first": "R", "middle": [], "last": "Martin", "suffix": "" } ], "year": 1994, "venue": "Proceedings EUSIPCO", "volume": "", "issue": "", "pages": "1182--1185", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Martin, \"Spectral subtraction based on minimum statistics,\" Proceedings EUSIPCO, pp. 1182-1185, 1994.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Noise power spectral density estimation based on optimal smoothing and minimum statistics", "authors": [ { "first": "R", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2001, "venue": "IEEE Transactions, Speech and Audio Processing", "volume": "9", "issue": "", "pages": "504--512", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Martin, \"Noise power spectral density estimation based on optimal smoothing and minimum statistics,\" IEEE Transactions, Speech and Audio Processing, vol. 9, pp. 504-512, 2001.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "An experimental study on speech enhancement based on deep neural networks", "authors": [ { "first": "Y", "middle": [], "last": "Xu", "suffix": "" }, { "first": "J", "middle": [], "last": "Du", "suffix": "" }, { "first": "L.-R", "middle": [], "last": "Dai", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2014, "venue": "IEEE Signal Processing Letters", "volume": "21", "issue": "", "pages": "65--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Xu, J. Du, L.-R. Dai and C.-H. Lee, \"An experimental study on speech enhancement based on deep neural networks,\" IEEE Signal Processing Letters, vol. 21, pp. 
65-68, 2014.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Speech enhancement based on deep denoising autoencoder", "authors": [ { "first": "X", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Y", "middle": [], "last": "Tsao", "suffix": "" }, { "first": "S", "middle": [], "last": "Matsuda", "suffix": "" }, { "first": "C", "middle": [], "last": "Hori", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "436--440", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Lu, Y. Tsao, S. Matsuda and C. Hori, \"Speech enhancement based on deep denoising autoencoder,\" Interspeech 2013, pp. 436-440, 2013.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Speech enhancement using generative dictionary learning", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Sigg", "suffix": "" }, { "first": "T", "middle": [], "last": "Dikk", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Buhmann", "suffix": "" } ], "year": 2012, "venue": "Speech, and Language Processing", "volume": "20", "issue": "", "pages": "1698--1712", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. D. Sigg, T. Dikk and J. M. Buhmann, \"Speech enhancement using generative dictionary learning,\" IEEE Transactions, Audio, Speech, and Language Processing, vol. 20, pp. 1698-1712, 2012.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Speech denoising using nonnegative matrix factorization with priors", "authors": [ { "first": "K", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "B", "middle": [], "last": "Raj", "suffix": "" }, { "first": "S", "middle": [], "last": "Paris", "suffix": "" }, { "first": "A", "middle": [], "last": "Divakaran", "suffix": "" } ], "year": 2008, "venue": "Proceedings ICASSP", "volume": "", "issue": "", "pages": "4029--4032", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Wilson, B. Raj, S. Paris and A. 
Divakaran, \"Speech denoising using nonnegative matrix factorization with priors,\" Proceedings ICASSP, pp. 4029-4032, 2008.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Adaptive hybrid control for linear piezoelectric ceramic motor drive using diagonal recurrent CMAC network", "authors": [ { "first": "R.-J", "middle": [], "last": "Wai", "suffix": "" }, { "first": "C.-M", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Y.-F", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2004, "venue": "IEEE Transactions", "volume": "15", "issue": "", "pages": "1491--1506", "other_ids": {}, "num": null, "urls": [], "raw_text": "R.-J. Wai, C.-M. Lin and Y.-F. Peng, \"Adaptive hybrid control for linear piezoelectric ceramic motor drive using diagonal recurrent CMAC network,\" IEEE Transactions, Neural Networks, vol. 15, pp. 1491-1506, 2004.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Self-Organizing CMAC Control for a Class of MIMO Uncertain Nonlinear Systems", "authors": [ { "first": "C.-M", "middle": [], "last": "Lin", "suffix": "" }, { "first": "T.-Y.", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2009, "venue": "IEEE Transactions", "volume": "20", "issue": "", "pages": "1377--1384", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.-M. Lin and T.-Y. Chen, \"Self-Organizing CMAC Control for a Class of MIMO Uncertain Nonlinear Systems,\" IEEE Transactions, Neural Networks, vol. 20, pp. 1377-1384, 2009.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "RCMAC hybrid control for MIMO uncertain nonlinear systems using sliding-mode technology", "authors": [ { "first": "C.-M", "middle": [], "last": "Lin", "suffix": "" }, { "first": "L.-Y", "middle": [], "last": "Chen", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2007, "venue": "IEEE Transactions", "volume": "18", "issue": "", "pages": "708--720", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.-M. 
Lin, L.-Y. Chen and C.-H. Chen, \"RCMAC hybrid control for MIMO uncertain nonlinear systems using sliding-mode technology,\" IEEE Transactions, Neural Networks, vol. 18, pp. 708-720, 2007.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Adaptive filter design using recurrent cerebellar model articulation controller", "authors": [ { "first": "C.-M", "middle": [], "last": "Lin", "suffix": "" }, { "first": "L.-Y", "middle": [], "last": "Chen", "suffix": "" }, { "first": "D", "middle": [ "S" ], "last": "Yeung", "suffix": "" } ], "year": 2010, "venue": "IEEE Transactions", "volume": "19", "issue": "", "pages": "1149--1157", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.-M. Lin, L.-Y. Chen and D. S. Yeung, \"Adaptive filter design using recurrent cerebellar model articulation controller,\" IEEE Transactions, Neural Networks, vol. 19, pp. 1149-1157, 2010.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A new approach to manipulator control: the cerebellar model articulation controller (CMAC)", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Albus", "suffix": "" } ], "year": 1975, "venue": "ASME Journal of Dynamic Systems, Measurement, and Control", "volume": "97", "issue": "", "pages": "228--233", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. S. Albus, \"A new approach to manipulator control: the cerebellar model articulation controller (CMAC),\" ASME Journal of Dynamic Systems, Measurement, and Control, vol. 97, pp. 228-233, 1975.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "type_str": "figure", "text": "Comparison of the CMAC and MMSE methods: the CMAC configuration with the best enhancement performance in the experiments is compared against the MMSE method. The CMAC settings in this experiment are as follows: 1. Number of layers (Layer): 3 2. Upper bound: 6; lower bound: \u22126 3. 
Number of blocks per layer (N_B): ceil(106N_e/3Layer) = 36 4. Number of receptive fields (N_R): equal to the number of blocks (N_B) 5. Association-memory space function: \u03c6_ij = exp[\u2212(x_i \u2212 m_ij)^2/(\u03c3_ij)^2] for i = 1 and j = 1, \u22ef, N_R, where ceil denotes rounding up to the next integer. The upper and lower bounds must cover the range of all speech signal parameters, so that range is measured in advance. The initial mean of each Gaussian function (m_ij) is placed automatically at the center of its block (N_B); the initial variance (\u03c3_ij) = 1 and the initial weight (w_j) = 0. The learning rates are \u00b5_s = \u00b5_w = \u00b5_m = \u00b5_\u03c3 = 0.05. Tables 1 to 3 compare the CMAC and MMSE methods under the three objective evaluation metrics. Table 1. \u2206PESQ comparison of the CMAC and MMSE methods", "uris": null }, "TABREF1": { "num": null, "type_str": "table", "html": null, "content": "
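Under the settings above, the Gaussian association-memory function and the CMAC output can be sketched as follows. This is a minimal illustration with our own function and variable names (not the paper's code); the uniform placement of the block centers over the bounds and the single-input case i = 1 are assumptions.

```python
import numpy as np

def cmac_activations(x, n_blocks=36, lower=-6.0, upper=6.0, sigma=1.0):
    # Means are placed automatically at the center of each block over
    # [lower, upper], matching the initialization described above.
    width = (upper - lower) / n_blocks
    means = lower + width * (np.arange(n_blocks) + 0.5)
    # Gaussian association-memory function: phi_j = exp(-(x - m_j)^2 / sigma^2)
    return np.exp(-((x - means) ** 2) / sigma ** 2)

def cmac_output(x, weights, **kw):
    # CMAC output: weighted sum of the receptive-field activations.
    phi = cmac_activations(x, **kw)
    return float(np.dot(weights, phi))
```

With all weights initialized to zero, the output starts at zero and is then adapted by the gradient updates derived below.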
The learning algorithm of the CMAC determines how the gradient vectors are obtained: the learning rule for each tunable parameter is defined as the derivative of an objective function with respect to that parameter. The objective function is given in (6): En(k) = (1/2)(d(k) \u2212 y(k))^2 = (1/2)e^2(k) (6) where the error signal e(k) = d(k) \u2212 y(k) is the difference between the desired response d(k) and the filter output y(k). Applying the normalized gradient-descent method to the objective function En, together with the chain rule, yields (7): s(k + 1) = s(k) + \u00b5_s e(k)P_s(k) (7) where \u00b5_s is the learning rate, and s in (7) may be replaced by w, m, or \u03c3, giving the update rules for the weights, means, and variances, respectively. P_s(k) in (7) is replaced accordingly by P_w(k) = \u2202y/\u2202w_j = [\u2202y/\u2202w_1, \u22ef, \u2202y/\u2202w_j, \u22ef, \u2202y/\u2202w_NR]^T (8) P_m(k) = \u2202y/\u2202m_ij = [\u2202y/\u2202m_11, \u22ef, \u2202y/\u2202m_N1, \u22ef, \u2202y/\u2202m_1j, \u22ef, \u2202y/\u2202m_Nj, \u22ef, \u2202y/\u2202m_1NR, \u22ef, \u2202y/\u2202m_NNR]^T (9) P_\u03c3(k) = \u2202y/\u2202\u03c3_ij = [\u2202y/\u2202\u03c3_11, \u22ef, \u2202y/\u2202\u03c3_N1, \u22ef, \u2202y/\u2202\u03c3_1j, \u22ef, \u2202y/\u2202\u03c3_Nj, \u22ef, \u2202y/\u2202\u03c3_1NR, \u22ef, \u2202y/\u2202\u03c3_NNR]^T (10) Finally, P_s
(k) can be derived as the following expressions: \u2202y/\u2202w_j = b_j (11) \u2202y/\u2202m_ij = w_j b_j 2(x_i \u2212 m_ij)/(\u03c3_ij)^2 (12) \u2202y/\u2202\u03c3_ij = w_j b_j 2(x_i \u2212 m_ij)^2/(\u03c3_ij)^3 (13) 4. Experiments and Evaluation (1) Evaluation Metrics. For evaluation, we used three objective speech metrics to compare the denoising performance of CMAC and MMSE numerically: Perceptual Evaluation of Speech Quality (PESQ), Segmental Signal-to-Noise Ratio (SSNR), and Speech Distortion Index (SDI). The three metrics are briefly introduced first: 1. Perceptual Evaluation of Speech Quality (PESQ) is an objective speech-quality measure based on the ITU-T standard; it scores the difference between the enhanced speech and the original clean speech. PESQ scores range from 0.5 to 4.5, and a higher score means the signal is closer to the original clean speech. In this experiment, the PESQ of the noisy speech is subtracted from the PESQ of the enhanced speech to observe the gain in speech quality, so a higher score is better. \u2206PESQ is given in (14): \u2206PESQ = PESQ_en \u2212 PESQ_noise (14) where PESQ_en is the PESQ of the enhanced speech and PESQ_noise that of the noisy speech. 2. Segmental Signal-to-Noise Ratio (SSNR): the improvement is measured as \u2206SSNR = P_clean/P_en \u2212 P_clean/P_noise = A_clean^2/A_en^2 \u2212 A_clean^2/A_noise^2 (15) where P denotes power and A amplitude; A_clean is the clean-speech amplitude, and so on. 3.
Speech Distortion Index (SDI) compares the energy difference between the enhanced speech signal and the original clean speech signal, i.e., it measures the amount of distortion in the enhanced speech. In this experiment, the SDI of the enhanced speech is subtracted from the SDI of the noisy speech to observe how much the distortion decreases, so again a higher score is better. \u2206SDI is given in (16): \u2206SDI = E[(S_clean[n] \u2212 S_noise[n])^2]/E[S_clean^2[n]] \u2212 E[(S_clean[n] \u2212 S_en[n])^2]/E[S_clean^2[n]] (16) where S_clean[n] is the original clean speech signal, S_noise[n] the noisy speech signal, and S_en[n] the enhanced speech signal. (2) Experimental Method. For the speech corpus, we used three different noise environments combined with six different signal-to-noise ratios (SNRs); the environments are the three noise types listed below, and the corpus contains the corresponding noisy speech files together with 300 clean speech files. The three noise types are: 1. Speech-shaped noise (SSN): energy distributed evenly, but with more energy at mid frequencies. 2. Car noise (Car): more energy at low frequencies, decreasing toward higher frequencies. 3.
Pink noise (Pink): energy distributed evenly, but with more energy at low frequencies. [Figure 2 panels: spectrograms (0 dB) of the clean speech (Clean), speech-shaped noise (SSN), car noise (Car), and pink noise (Pink).] For the experiments, the clean speech and the noisy speech are first segmented into frames (framing). Because speech signals are continuously time-varying, the signal within each frame can be treated as a fixed-period signal once framing is done, which makes it easier to process; in this experiment the frame length is 32 ms (256 samples at 8 kHz). Each frame is then multiplied by a fixed-length Hamming window, whose main purpose is to emphasize the main signal at the center of the window and suppress the signal at both sides. The noisy speech signal is then passed through a fast Fourier transform (FFT), producing 256 values, which are mapped onto the Mel-frequency domain and compressed into 80 values. In the experiments, 250 of the noisy utterances are used as training data and the remaining 50 as test data. During training, the 250 noisy utterances (files with the same environment and the same SNR) are concatenated into a database, and 80000 points are drawn from it at random as training data; because speech is two-dimensional, this gives a total of 80 (frequency bins) \u00d7 80000 (training
samples) points of training data. The clean speech involves no different SNR conditions, but it is processed in the same way and likewise yields 80 (frequency bins) \u00d7 80000 (training samples) points of training data. All points of the noisy speech and the clean speech correspond to one another, and learning is performed point to point: one CMAC model is learned for each frequency bin, for a total of 80 CMAC models. Figure 3. Block diagram of the CMAC speech enhancement system: (A) training, (B) testing. |
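The front-end steps just described (32 ms frames at 8 kHz, a Hamming window per frame, then an FFT magnitude spectrum) can be sketched roughly as below. The 256-bin-to-80-bin Mel compression is omitted, and all function and variable names are ours, not the paper's.

```python
import numpy as np

def frame_signal(signal, frame_len=256, hop=128):
    # Cut the waveform into overlapping frames (32 ms = 256 samples at 8 kHz;
    # the hop size here is an assumption, as the paper does not state it).
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frames.append(signal[start:start + frame_len])
    return np.array(frames)

def magnitude_spectra(signal, frame_len=256, hop=128):
    frames = frame_signal(signal, frame_len, hop)
    # Hamming window: emphasize the frame center, suppress the edges.
    window = np.hamming(frame_len)
    # rfft of 256 samples yields 129 non-redundant bins; the paper counts
    # the full 256 FFT values before Mel compression.
    return np.abs(np.fft.rfft(frames * window, axis=1))
```

Each row of the returned matrix is one frame's magnitude spectrum; in the paper's pipeline these spectra are then compressed to 80 Mel bins and paired point to point between noisy and clean speech.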
Figure 2. Spectrograms of the experimental speech (0 dB); blue indicates no energy, and the redder the color, the higher the energy. |
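As a worked illustration of the distortion measure in (16), a minimal \u2206SDI computation might look like this. The array names are illustrative; PESQ and SSNR scores would come from separate tools.

```python
import numpy as np

def sdi(clean, processed):
    # Speech distortion index: residual energy relative to clean energy,
    # i.e. E[(s_clean - s_proc)^2] / E[s_clean^2].
    return np.mean((clean - processed) ** 2) / np.mean(clean ** 2)

def delta_sdi(clean, noisy, enhanced):
    # Delta SDI = SDI(noisy) - SDI(enhanced); higher means more
    # distortion was removed by the enhancement system.
    return sdi(clean, noisy) - sdi(clean, enhanced)
```

For example, if enhancement shrinks a constant additive error from 0.5 to 0.1 on a unit-amplitude signal, the SDI drops from 0.25 to 0.01, giving a \u2206SDI of 0.24.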