{ "paper_id": "O04-3006", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:00:26.646261Z" }, "title": "An Innovative Distributed Speech Recognition Platform for Portable, Personalized and Humanized Wireless Devices", "authors": [ { "first": "Yin-Pin", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "Advanced Technology Center, Computer and Communications Research Laboratories", "institution": "", "location": {} }, "email": "yinpinyang@itri.org.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In recent years, the rapid growth of wireless communications has undoubtedly increased the need for speech recognition techniques. In wireless environments, the portability of a computationally powerful device can be realized by distributing data/information and computation resources over wireless networks. Portability can then evolve through personalization and humanization to meet people's needs. An innovative distributed speech recognition (DSR) [ETSI, 1998],[ETSI, 2000] platform, configurable DSR (C-DSR), is thus proposed here to enable various types of wireless devices to be remotely configured and to employ sophisticated recognizers on servers operated over wireless networks. For each recognition task, a configuration file, which contains information regarding types of services, types of mobile devices, speaker profiles and recognition environments, is sent from the client side with each speech utterance. Through configurability, the capabilities of configuration, personalization and humanization can be easily achieved by allowing users and advanced users to be involved in the design of unique speech interaction functions of wireless devices.", "pdf_parse": { "paper_id": "O04-3006", "_pdf_hash": "", "abstract": [ { "text": "In recent years, the rapid growth of wireless communications has undoubtedly increased the need for speech recognition techniques. In wireless environments, the portability of a computationally powerful device can be realized by distributing data/information and computation resources over wireless networks. Portability can then evolve through personalization and humanization to meet people's needs. An innovative distributed speech recognition (DSR) [ETSI, 1998],[ETSI, 2000] platform, configurable DSR (C-DSR), is thus proposed here to enable various types of wireless devices to be remotely configured and to employ sophisticated recognizers on servers operated over wireless networks. For each recognition task, a configuration file, which contains information regarding types of services, types of mobile devices, speaker profiles and recognition environments, is sent from the client side with each speech utterance. Through configurability, the capabilities of configuration, personalization and humanization can be easily achieved by allowing users and advanced users to be involved in the design of unique speech interaction functions of wireless devices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In the current wireless era, cellular phones have become daily-life necessities. People carry their own handsets and make phone calls anytime, everywhere, while public payphones have almost disappeared. Inspired by this vast number of mobile phone users, the wireless communication industry is developing wireless data services to create more profit. 
Wireless devices can be treated as terminals of an unbounded information/data network: the Internet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "However, the small screen sizes of mobile devices discourage users from surfing the Internet in mobile situations. Wireless data services are not as attractive as was expected, and this is one of the major reasons for the so-called \"3G Bubble\" [Baker, 2002] [Reinhardt et al., 2001].", "cite_spans": [ { "start": 244, "end": 257, "text": "[Baker, 2002]", "ref_id": "BIBREF2" }, { "start": 258, "end": 280, "text": "[Reinhardt et al, 2001", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "On the other hand, the handset market is still booming. Personal, stylish and fashionable features, such as ring tones, color screen displays, covers, and so on, are all very popular, especially among teenagers. Functionally speaking, portable devices, such as PDAs, pocket/palm PCs and digital cameras, are now integrated with handsets. Many interesting applications, such as portable electronic dictionaries, map navigators, and mobile learning, can be built into mobile devices. However, these functions or services still cannot create serious business opportunities for telecom companies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "What will the appealing cell phone services of the future be? \"Talking to a machine,\" or interacting with a machine, might be a candidate. That is, besides talking to human beings through voice channels, people may like to talk to machines and access the Internet through data channels. The possibilities are unlimited. Handsets may thus evolve into personal \"intimate pets\" that people will use from childhood to adulthood. In this scenario, speech interaction will play an important part in humanizing devices [Yamaguchi et al. 2003]. However, due to the limitations of current state-of-the-art speech recognition techniques, the robustness issue [Deng et al. 2003] [Wu et al. 2003] [Lee 1998] is always a bottleneck in commercializing speech recognition products. This imperfection reveals the importance of configurability. In the following paragraphs, the relationships among configurability, personalization, and wireless environments will be explored.", "cite_spans": [ { "start": 504, "end": 524, "text": "[Hiroshi et al. 2003", "ref_id": null }, { "start": 640, "end": 658, "text": "[Deng et al. 2003]", "ref_id": "BIBREF5" }, { "start": 659, "end": 675, "text": "[Wu et al. 2003]", "ref_id": "BIBREF6" }, { "start": 676, "end": 685, "text": "[Lee 1998", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "How does a speech recognition system fit into the wireless network? In this paper, we will primarily highlight two key terms: \"distributed\" and \"configurable.\" The term \"distributed\" can be interpreted in two ways: computation distributed and data distributed. As for the former, speech recognition functions are normally needed in mobile situations, where devices are usually thin and lacking in computational power. It would be much easier to design speech recognition functions if the computation involved in the recognition process were distributed over wireless networks by means of a client-server architecture. 
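As an illustration, consider how little the client has to do under such a division of labor. The following minimal sketch is ours, not the C-DSR Protocol itself; the server address and the length-prefixed framing are assumptions made for the example:

import socket, struct

SERVER = ('recognizer.example.net', 9050)  # hypothetical recognition server

def _recv_exactly(sock, n):
    # Read exactly n bytes; sock.recv() may legally return fewer per call.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('server closed the connection')
        buf += chunk
    return buf

def recognize_remotely(pcm: bytes) -> str:
    # The thin client only records audio; all recognition runs on the server.
    with socket.create_connection(SERVER) as s:
        s.sendall(struct.pack('!I', len(pcm)))  # 4-byte length prefix
        s.sendall(pcm)                          # the utterance itself
        n = struct.unpack('!I', _recv_exactly(s, 4))[0]
        return _recv_exactly(s, n).decode('utf-8')

A real deployment would add authentication, retries and streaming, but the division of labor is the point: the microphone stays on the client, while the models and the search stay on the server. 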
As for the latter, speech recognition is by nature a pattern matching process, which needs to acquire utterances within a given application domain. For example, a speaker-independent (SI) continuous digit recognizer targeting the Taiwanese market needs to acquire a large number of sample continuous digit utterances covering all dialects in this market. The representativeness and quality of the sample utterances used for training or adaptation largely determine the performance of a speech recognizer. If a wireless network is used, speech data acquisition can be done in a much more efficient and systematic way. More importantly, the acquired speech data, labeled by means of a speaker profile, recognition environment, and device/microphone type, can be kept on the server. Speech data will thus not be abandoned when particular applications or services are discontinued.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Recognition and Wireless Environments", "sec_num": null }, { "text": "From the above, we can conclude that we need a centralized speech recognition server embedded in the wireless infrastructure. When we say \"talking to a machine\", the \"machine\" is actually an entire wireless network. People talk to the same lifetime recognizer, and the recognizer evolves continuously. This speech recognition server can provide any type of speech recognition service (computation distributed) for all classes of wireless mobile devices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Recognition and Wireless Environments", "sec_num": null }, { "text": "These services continuously acquire speech data from all locations (data distributed) and adapt the engine over time. For each recognition task, there is a \"configuration file\" (or, say, a tag) to record all of the information regarding the type of service, speaker profile, recognition environment, etc. We call this type of server a configurable distributed speech recognition (C-DSR) server.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Recognition and Wireless Environments", "sec_num": null }, { "text": "In the following, the history of DSR developed by ETSI/Aurora will be briefly described.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Recognition and Wireless Environments", "sec_num": null }, { "text": "Then, the innovative C-DSR platform will be introduced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Recognition and Wireless Environments", "sec_num": null }, { "text": "Instead of squeezing the whole recognizer into a thin device, it seems more reasonable to host recognition tasks on a server and exchange information between the client and the server.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed Speech Recognition (DSR) developed by ETSI/Aurora", "sec_num": null }, { "text": "However, due to the low bit-rates of speech coders (note that coders are designed for human listeners, not recognizers), speech recognition performance can be significantly degraded. 
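A common remedy is to parameterize the speech on the client and to transmit recognition-oriented features rather than coded speech. The sketch below is illustrative only: it assumes the librosa package is available, and its crude 8-bit quantization merely stands in for the compression scheme (split vector quantization) that the ETSI front end actually specifies:

import numpy as np
import librosa  # assumed available; any MFCC front end would do

def front_end(wav_path: str) -> bytes:
    # Client-side front end: 13 MFCCs per frame of telephone-band audio.
    y, sr = librosa.load(wav_path, sr=8000)
    feats = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Crude linear 8-bit quantization to cut the bit-rate before transmission.
    lo, hi = float(feats.min()), float(feats.max())
    q = np.round(255.0 * (feats - lo) / max(hi - lo, 1e-9))
    return q.astype(np.uint8).tobytes()
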
DSR, proposed by ETSI Aurora, overcomes these problems by distributing the recognition process between the client and the server, using an error-protected data channel to send parameterized speech features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed Speech Recognition (DSR) developed by ETSI/Aurora", "sec_num": null }, { "text": "Aurora DSR can be seen as a speech \"coder\" [Digalakis et al. 1999] designed to enable handset users to talk to their recognizers. Besides handsets, there are many other mobile devices that need DSR services, and they all operate in different environments and in different recognition modes. Each combination, or configuration, needs its own \"coder\" to achieve better performance. Based on these needs, C-DSR was built as an integrated client-server platform which not only offers a convenient way to construct speech recognition functions on various client devices, but also provides powerful utilities/tools to help each configuration obtain its own coder in order to increase the overall recognition task completion rate. To achieve these goals, C-DSR maximizes the advantages of data channels and centralized servers by means of its \"configurable\" capability: configurability. Configurability can be considered from two points of view.", "cite_spans": [ { "start": 43, "end": 66, "text": "[Digalakis et al. 1999]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "From DSR to Configurable DSR (C-DSR)", "sec_num": null }, { "text": "From the client-side viewpoint, speech recognition processing is configurable to work with: (1) various kinds of thin to heavy mobile devices, ranked according to their computational power, noting that most devices do not have sufficient computational power to perform the feature extraction process proposed by ETSI Aurora; (2) various types of recognition environments, such as offices, homes, streets, cars, airports, etc.; this information about recognition environments can help recognition engines achieve greater accuracy; (3) various types of recognition services, such as command-based, grammar-based, speaker-independent/dependent mixed-mode, and dialogue-style services; (4) various speaker profiles, since speaker information can help recognizers achieve higher recognition rates 1 and is required by recognition applications such as speaker adaptation [Lee et al. 1990] [Chen et al. 2001], and speaker verification/identification [Siohan et al. 1998]. The C-DSR platform provides a faster and more flexible way to construct various speech recognition functions for various mobile devices used in various recognition environments. One of the major missions of C-DSR is to increase the frequency of speech recognition use in daily life.", "cite_spans": [ { "start": 871, "end": 886, "text": "[Lee et al.1999", "ref_id": null }, { "start": 889, "end": 906, "text": "[Chen et al.1990]", "ref_id": null }, { "start": 946, "end": 964, "text": "[Siohan et al.1998", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The C-DSR Client", "sec_num": null }, { "text": "From the viewpoint of the centralized server, the C-DSR server collects, from all of the registered clients, speech utterances or formatted speech feature arrays along with their configuration tags. The basic idea is to formalize the life cycle of a speech recognition product/task from the deployment phase through the diagnostic, tuning, and upgrading phases. 
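To make this life cycle concrete, each completed recognition task could be logged on the server as a record of roughly the following shape. This is a sketch of ours, with field names drawn from the kinds of information the paper says are collected; it is not a C-DSR specification:

from dataclasses import dataclass

@dataclass
class TaskRecord:
    user_id: str      # registration ID; records are filed per user directory
    service: str      # e.g. 'command-based', 'dialogue'
    device: str       # client device class, from thin to heavy
    environment: str  # e.g. 'office', 'home', 'street', 'car'
    features: bytes   # the utterance or formatted speech feature array
    result: str       # recognition result, kept for diagnosis and tuning
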
Also, similar tasks can share corresponding information and adaptation data located on the server. The C-DSR server offers the following mechanisms to take full advantage of these categorized speech and configuration data: (1) the C-DSR server can decide which recognition engine or which acoustic HMM model to employ according to the history log;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The C-DSR Server", "sec_num": null }, { "text": "(2) the C-DSR server can balance the trade-offs among communication bandwidth, system load and recognition accuracy;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The C-DSR Server", "sec_num": null }, { "text": "(3) the categorized and organized speech database can be utilized to create diagnostic tools that can be used to tune up recognition engines and to perform all kinds", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The C-DSR Server", "sec_num": null }, { "text": "of adaptation, such as speaker adaptation, channel adaptation [Siohan et al.1995] and background noise adaptation [Kristjansson et al.2001] .", "cite_spans": [ { "start": 62, "end": 81, "text": "[Siohan et al.1995]", "ref_id": null }, { "start": 114, "end": 139, "text": "[Kristjansson et al.2001]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Personalized and Humanized Wireless Devices", "sec_num": null }, { "text": "In summary, C-DSR is a generic speech recognition engine, in which all of the information, or parameters, concerning application-dependent user profiles and device profiles are kept in a configuration file which is initiated at the client side. In technical terms, C-DSR is a platform; from the customer's point of view, C-DSR is a personally owned, lifetime speech recognition engine. The C-DSR platform is embedded in the wireless network, in contrast to conventional speech recognizers that are treated as input interfaces for portable devices (see Figure 1 ).", "cite_spans": [], "ref_spans": [ { "start": 553, "end": 561, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Personalized and Humanized Wireless Devices", "sec_num": null }, { "text": "In the following, the architecture of C-DSR is described in Section 2. Then, in Section 3, a demo system is presented to demonstrate the unique capability of C-DSR, namely configurability. Some conclusions are drawn in the final section. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": null }, { "text": "The function blocks of the C-DSR development platform are shown in Figure 2 . A wireless device equipped with the C-DSR Client connects to a remote C-DSR Server using the C-DSR Protocol through a wireless network. The C-DSR Protocol carries speech data and parameters.", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 75, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The C-DSR Architecture", "sec_num": "2." }, { "text": "The speech data can be in the form of raw PCM speech, ADPCM or pre-defined compressed speech features, depending on the computational power of the client and the available bit-rate (communication bandwidth).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The C-DSR Architecture", "sec_num": "2." }, { "text": "The configuration file, together with the speech data prepared by the client, is thus transmitted by the C-DSR Protocol through wireless networks. For now, the C-DSR Protocol is implemented on top of TCP/IP or RTP (Real-time Transport Protocol). 
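As an illustration of what travels over that connection, one request might be framed as below. The paper does not publish the wire format, so the JSON header and the length-prefixed framing are our assumptions; the field names mirror the configuration fields described in this paper:

import json, socket, struct

config = {
    'ServiceType': 'Command-based',   # or 'Dialogue-ProvidedByServer'
    'DeviceType': 'PDA',              # thin to heavy client classes
    'Environment': 'Home',            # office / home / street / car ...
    'SpeakerProfile': {'Gender': 'F', 'Age': 30, 'Accent': 'Taiwanese'},
    'FeatureFormat': 'MFCC',          # PCM / ADPCM / compressed features
}

def send_task(sock: socket.socket, speech: bytes) -> None:
    # One recognition task = one configuration file + one utterance.
    header = json.dumps(config).encode('utf-8')
    sock.sendall(struct.pack('!II', len(header), len(speech)))
    sock.sendall(header)
    sock.sendall(speech)
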
After parsing the received request, the Configuration Controller (CC) decides how to configure the recognition engine (C-DSR Engine) and the Dialogue System (DS) to accomplish the recognition task. The C-DSR Engine and the DS are composed of modularized components, such that switches inside the engines can be shifted to the corresponding components to perform the functionality requested by the configuration. The recognition results are then logged and organized in the History Log Center (HLC), resulting in a formatted package, or a database. The package is then passed to the Diagnostic Center (DC), where diagnostic tools are used to tune up the engines and provide adaptation data for various kinds of adaptation schemes, such as speaker/channel adaptation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The C-DSR Architecture", "sec_num": "2." }, { "text": "(Annotations from Figure 1: (1) the user always talks to the same recognizer through different client devices; (2) a centralized server makes maintenance, tuning and upgrading easier; (3) all kinds of services are provided, and users may even design their own interaction scenarios.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The C-DSR Architecture", "sec_num": "2." }, { "text": "The configuration of the C-DSR platform is stored in a Configuration File (CF). Each field in the CF can be categorized into one of three attributes, rSR, rDSR, and Interactive Design-It-Yourself (I-DIY), which are explained below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A. Configuration File and Configuration Controller, CC", "sec_num": null }, { "text": "rSR covers the gap between a perfect speech recognition engine and a current practical state-of-the-art one. The SR engine is never perfect; however, if we can restrict the speaking styles of the users, recognition accuracy will be higher.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "i.", "sec_num": null }, { "text": "ii. rDSR refers to those configurable parameters which can minimize the degradation due to wireless transmission loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "i.", "sec_num": null }, { "text": "iii. I-DIY means Interactive Design-It-Yourself. We can never expect a machine to act exactly like a human being. The philosophy behind C-DSR is to make human-machine interaction simple and easy. The best way to achieve this goal is to involve users in the design. Thus, we provide DIY tools to enable users to design their own ways of talking to machines. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "i.", "sec_num": null }, { "text": "The original design principle behind the C-DSR platform is to remotely configure the speech recognition engine on the server from the client side. It is the client device that initially prepares all of the configuration files. However, some of the configuration parameters may not be fully determined by the client device or may even be left totally empty. 
In this case, the CC of the server should be able to append or modify these parameters by utilizing all available resources, including the historical configurations or statistics located on the server.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I-DIY", "sec_num": null }, { "text": "The configurable engine is the heart of the C-DSR server. As the name indicates, a configurable engine is an SR engine which is modularized and can be configured according to the different requests received from the various types of clients; Figure 3 shows the modules of the engine, which is a generalized SR engine on which state-of-the-art SR techniques can be employed. These typical modules (those of a traditional/generalized SR engine) are listed in Table 2. Intermediate data between modules are also generated to provide \"symptoms\" useful for diagnostic purposes. These symptoms include: \u00a7 speech segmentation boundaries, \u00a7 the resulting Viterbi path, \u00a7 likelihood trajectories along the time axis on the resulting path, \u00a7 a histogram of observations (feature vectors) for a particular Gaussian mixture, \u00a7 a histogram of the observation likelihoods of a particular HMM state. ", "cite_spans": [], "ref_spans": [ { "start": 237, "end": 245, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "B. Configurable Engine", "sec_num": null }, { "text": "A generic DS is, first, responsible for preparing a grammar, including the vocabulary needed for the next speech recognition process. Then, the grammar, together with the incoming utterance, is fed to the recognizer. The recognition result, a recognized keyword, is then sent back to the DS. The DS then updates the \"dialogue status\" records and determines the grammar for the next recognition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C. The Dialogue System, DS", "sec_num": null }, { "text": "Currently, we support AIML (Artificial Intelligence Markup Language, www.alicebot.org) and a simplified VoiceXML format for describing dialogue scripts (see Figure 4 ).", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 169, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "C. The Dialogue System, DS", "sec_num": null }, { "text": "As described earlier, the so-called symptoms are intermediate data obtained while the engine is running and sent to the DC. The main purpose of the DC is to analyze and diagnose these symptoms in order to make suggestions. Because the C-DSR server is faced with various types of services, environment configurations, client devices and speakers, we want to make the engine core as generalized as possible. Currently, all of the diagnostics are done manually, which means that the DC only displays the symptoms to users or C-DSR server maintainers. We plan to automate the DC in the next generation of C-DSR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D. The Diagnostic Center, DC", "sec_num": null }, { "text": "The HLC is responsible for collecting and logging all of the corresponding information for each recognition service. The information collected includes speech utterances, formatted feature arrays, configuration files and the intermediate data, that is, symptoms and recognition results, and it is saved to a corresponding user directory according to the user registration ID.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "E. 
The History Log Center (HLC)", "sec_num": null }, { "text": "The HLC serves as a database manager, whose job functions include: (i) maintaining the database and, if necessary, creating a mechanism to eliminate garbage data; (ii) building data links to prepare adaptation data for various types of adaptation algorithms, such as speaker or channel adaptation; (iii) preparing intermediate data for the DC to diagnose, so that the DC can provide data that the C-DSR engine can use to tune its algorithms and improve recognition accuracy. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "E. The History Log Center (HLC)", "sec_num": null }, { "text": "In a laboratory, several speech recognition applications may be emulated by a PDA serving as a client on the C-DSR platform. These recognition applications are usually realized on stand-alone portable devices or server-based dialogue systems. Now, using the proposed C-DSR solutions, thin client devices can take advantage of powerful, wireless servers to perform sophisticated speech recognition functions (see Figure 5 ), including the following: \u00a7 a car agent: retrieving map/travel/hotel information through a GPRS network in a car; \u00a7 a personal inquiry system: a portable device which can retrieve stock/weather information anywhere through a GPRS network; \u00a7 a general-purpose remote control: in a WLAN 802.11b environment, a remote control which can be used to control a TV, stereo, air conditioner, etc., through infrared by using natural language commands; \u00a7 Sim-Librarian: a portable device which, when a person walks into a library, can be used to ask for directions or for the location of a book the person is searching for. 2. Environmental noise: this can be Quiet or Noisy. If this is skipped, the C-DSR Server will make a decision according to the history log. 3. Speaking speed: the speaking speed of a normal person is around five words per second. The user can determine his range of speaking speed, for instance, from three words per second to six words per second. If this is skipped, the C-DSR Server will use default values. 4. Gender/Age/Accent: gender, age and accent information are very helpful for improving recognition performance. The C-DSR Client will retrieve these pieces of information from the user/speaker profile and pass them to the C-DSR Server for reference purposes. If this is skipped, the C-DSR Server will employ default models. [Scenario] Users may use their own PDAs or smart phones to access this service when entering the area covered by the WLAN.", "cite_spans": [], "ref_spans": [ { "start": 412, "end": 420, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "C-DSR Demo System", "sec_num": "3." }, { "text": "As in the previous case, we only need to change the field RecognitionStyle from Command-based to Dialogue-ProvidedByServer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Configuration Settings]", "sec_num": null }, { "text": "[The Setup at the C-DSR Server]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Configuration Settings]", "sec_num": null }, { "text": "This example shows a dialogue system for a tourist guide. 
The content was provided by Yu-Shan National Park.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Configuration Settings]", "sec_num": null }, { "text": "#ABNF 1.0 $prefiller= 請 | 麻煩 | 你 | 我要 $action1= 開{open} | 關{close} $keyword1= ( 燈{light} | 電燈{light} | 風扇{fan} | 電風扇{fan} | 電視{tv} | 收音機{radio} )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Configuration Settings]", "sec_num": null }, { "text": "Conventionally, speech recognition is considered to be merely one of the available input methods for wireless mobile devices. In this paper, we have presented the client-server C-DSR platform, which is a centralized speech recognition server embedded in the wireless infrastructure. By using C-DSR, people talk to the same lifetime speech recognition system. Speech data and the corresponding configuration, which keeps all the records about recognition environments, device information, dialogue scripts, recognition results, and so on, will not be abandoned when particular applications or services are discontinued. The speech recognition server provides many types of services for all classes of wireless mobile devices, and these services continuously acquire speech data from all locations and adapt the engine over time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "4." }, { "text": "Personalization and humanization are essential. We have seen many successful products come onto the market. A humanized device does not have to be \"intelligent.\" As long as it \"looks\" intelligent and people find it interesting, we do not really need to make such a machine act exactly like a human being. People like to have their own ways to interact with their own personal devices/pets. Perhaps the Design-It-Yourself approach, getting people involved in the design process, is one good solution, and the \"configurability\" of C-DSR can surely provide such a platform to meet these needs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "4." }, { "text": "For example, if we provide gender information from the speaker profile to the speech recognizer, even a first-time speaker can obtain higher recognition accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The C-DSR Platform provides several Dialogue Scripts. 
Here, we use VoiceXML as an example. [CDSR_VXML] Hello, the tour-guide genie is at your service. Very well, thank you. So-so, thanks. Fine, thanks. The forest consists mainly of Japanese cedar, natural broadleaf camphor forest, and Taiwan rhododendron. The animals include squirrels, pangolins, Formosan macaques, Taiwan hares, and others. The birds include Japanese white-eyes, bush warblers, large-billed crows, Taiwan barbets, and others. Visitor Center: houses a food and beverage section, a conference room, a multimedia briefing room and an ecological education exhibition hall. Restaurant: besides serving one hundred diners at a time, it can also be used as a large conference room or classroom. Administration Center: the office where the staff of this area handle administrative affairs. Fossil Area: these trace fossils were left about thirty thousand years ago by shrimps and crabs digging their burrows. Afforestation Memorial Stone: built in 1955 by the Dasi branch of the former Hsinchu Forest Administration to commemorate the afforestation achievements on Dongyan Mountain. Parent-Child Peaks: above the end of the forest road stand two peaks, one large and one small, like a loving mother with her child, hence the name. OK, bye. OK, let's chat again next time, bye. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "ES 201 108, Ref. RES/STQ-00018, STQ Aurora", "authors": [ { "first": "", "middle": [], "last": "Etsi Doc", "suffix": "" } ], "year": null, "venue": "DSR Front End", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "ETSI Doc. No. ES 201 108, Ref. RES/STQ-00018, STQ Aurora, \"DSR Front End\".", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Front-End Extension for Tonal Language Recognition and Speech Reconstruction", "authors": [ { "first": "", "middle": [], "last": "Etsi Es ;", "suffix": "" }, { "first": "Aurora", "middle": [], "last": "Stq", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "ETSI ES, Version 0.1.1, Ref. DES/STQ-00030, STQ Aurora, \"Front-End Extension for Tonal Language Recognition and Speech Reconstruction\".", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Business Week Magazine, International Cover Story", "authors": [ { "first": "S", "middle": [], "last": "Baker", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baker, S. 
, \"A Tale of A Bubble, \"Business Week Magazine, International Cover Story, June 3, 2002.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Who Needs 3G Anyway?", "authors": [ { "first": "A", "middle": [], "last": "Reinhardt", "suffix": "" }, { "first": "W", "middle": [], "last": "Echikson", "suffix": "" }, { "first": "K", "middle": [], "last": "Carlisle", "suffix": "" }, { "first": "P", "middle": [], "last": "Schmidt", "suffix": "" } ], "year": 2001, "venue": "Business Week Magazine, International -European Business", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinhardt, A. , W. Echikson, K. Carlisle, P. Schmidt, \"Who Needs 3G Anyway? \" Business Week Magazine, International -European Business, March 26, 2001.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The Humanization, Personalization and Authentication Issues in the Design of Interactive Service System", "authors": [ { "first": "H", "middle": [], "last": "Yamaguchi", "suffix": "" }, { "first": "K", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "C", "middle": [ "V" ], "last": "Ramamoorthy", "suffix": "" } ], "year": null, "venue": "2003 Society for Design and Process Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yamaguchi, H. , K. Suzuki, C. V. Ramamoorthy, \"The Humanization, Personalization and Authentication Issues in the Design of Interactive Service System, \" 2003 Society for Design and Process Science, www.sdpsnet.org.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Recursive Estimation of Nonstationary Noise Using Iterative Stochastic Approximation for Robust Speech Recognition", "authors": [ { "first": "L", "middle": [], "last": "Deng", "suffix": "" }, { "first": "J", "middle": [], "last": "Droppo", "suffix": "" }, { "first": "A", "middle": [], "last": "Acero", "suffix": "" } ], "year": 2003, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deng, L. , J. Droppo, and A. Acero, \"Recursive Estimation of Nonstationary Noise Using Iterative Stochastic Approximation for Robust Speech Recognition,\" in IEEE Transactions on Speech and Audio Processing. Volume: 11 Issue: 6 , Nov 2003.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A Noise-Robust ASR Front-End Using Wiener Filter Constructed from MMSE Estimation of Clean Speech and Noise", "authors": [ { "first": "J", "middle": [], "last": "Wu", "suffix": "" }, { "first": "J", "middle": [], "last": "Droppo", "suffix": "" }, { "first": "L", "middle": [], "last": "Deng", "suffix": "" }, { "first": "A", "middle": [], "last": "Acero", "suffix": "" } ], "year": 2003, "venue": "Proc. of the IEEE Workshop on Automatic Speech Recognition and Understanding. Virgin Islands", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, J. , J. Droppo, L. Deng and A. Acero, \"A Noise-Robust ASR Front-End Using Wiener Filter Constructed from MMSE Estimation of Clean Speech and Noise,\" in Proc. of the IEEE Workshop on Automatic Speech Recognition and Understanding. Virgin Islands, Dec, 2003. 
", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "On stochastic feature and model compensation approaches to robust speech recognition", "authors": [ { "first": "C", "middle": [ "H" ], "last": "Lee", "suffix": "" } ], "year": 1998, "venue": "Speech Communication", "volume": "25", "issue": "", "pages": "29--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, C. H., \"On stochastic feature and model compensation approaches to robust speech recognition,\" Speech Communication, 25:29-47, 1998.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Quantization of Cepstral Parameters for Speech Recognition Over the World Wide Web", "authors": [ { "first": "V", "middle": [], "last": "Digalakis", "suffix": "" }, { "first": "L", "middle": [], "last": "Neumeyer", "suffix": "" }, { "first": "M", "middle": [], "last": "Perakakis", "suffix": "" } ], "year": 1999, "venue": "IEEE Journal on Selected Areas in Communications", "volume": "17", "issue": "", "pages": "82--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Digalakis, V., L. Neumeyer and M. Perakakis, \"Quantization of Cepstral Parameters for Speech Recognition Over the World Wide Web,\" IEEE Journal on Selected Areas in Communications, Jan. 1999, volume 17, pp. 82-90.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A study on speaker adaptation of continuous density HMM parameters", "authors": [ { "first": "C", "middle": [ "H" ], "last": "Lee", "suffix": "" }, { "first": "C", "middle": [ "H" ], "last": "Lin", "suffix": "" }, { "first": "B", "middle": [ "H" ], "last": "Juang", "suffix": "" } ], "year": 1990, "venue": "Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, C. H., C. H. Lin, and B. H. Juang, \"A study on speaker adaptation of continuous density HMM parameters,\" Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP'90), pages 145-148, Albuquerque, New Mexico, April 1990.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Eigenspace-based Maximum A Posteriori Linear Regression for Rapid Speaker Adaptation", "authors": [ { "first": "K", "middle": [ "T" ], "last": "Chen", "suffix": "" }, { "first": "Hsin-Min", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2001, "venue": "Proc. IEEE Int. Conf. Acoustics, Speech, Signal processing (ICASSP'2001)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, K. T. and Hsin-min Wang, \"Eigenspace-based Maximum A Posteriori Linear Regression for Rapid Speaker Adaptation,\" in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing (ICASSP'2001), Salt Lake City, USA, May 2001.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Speaker identification using minimum classification error training", "authors": [ { "first": "O", "middle": [], "last": "Siohan", "suffix": "" }, { "first": "A", "middle": [ "E" ], "last": "Rosenberg", "suffix": "" }, { "first": "S", "middle": [], "last": "Parthasarathy", "suffix": "" } ], "year": 1998, "venue": "Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siohan, O., A. E. Rosenberg and S. Parthasarathy, \"Speaker identification using minimum classification error training,\" In Proc. IEEE Int. Conf. 
on Acoustics, Speech and Signal Processing, Seattle, Washington, USA, May 1998.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Channel adaptation using linear regression for continuous noisy speech recognition", "authors": [ { "first": "O", "middle": [], "last": "Siohan", "suffix": "" }, { "first": "Y", "middle": [], "last": "Gong", "suffix": "" }, { "first": "J", "middle": [ "P" ], "last": "Haton", "suffix": "" } ], "year": 1995, "venue": "IEEE Workshop on Automatic Speech Recognition", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siohan, O., Y. Gong, and J. P. Haton, \"Channel adaptation using linear regression for continuous noisy speech recognition,\" IEEE Workshop on Automatic Speech Recognition, Snowbird, Utah, USA, December 1995.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Towards Non-Stationary Model-Based Noise Adaptation for Large Vocabulary", "authors": [ { "first": "T", "middle": [], "last": "Kristjansson", "suffix": "" }, { "first": "B", "middle": [], "last": "Frey", "suffix": "" }, { "first": "L", "middle": [], "last": "Deng", "suffix": "" }, { "first": "A", "middle": [], "last": "Acero", "suffix": "" } ], "year": 2001, "venue": "Proc. of the Int. Conf. on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristjansson, T., B. Frey, L. Deng and A. Acero, \"Towards Non-Stationary Model-Based Noise Adaptation for Large Vocabulary,\" in Proc. of the Int. Conf. on Acoustics, Speech, and Signal Processing, Salt Lake City, Utah, May 2001.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "(A). Conventionally, speech recognition is simply one of the available input interfaces for portable devices. (B). Illustration of the innovative C-DSR platform.", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "Modules of the C-DSR engine.", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "Diagram of the Dialogue System, DS.", "type_str": "figure", "uris": null, "num": null }, "FIGREF4": { "text": "Illustration of the C-DSR implementation. Each application uses its own configuration, specified according to (1) the device type: from thin to heavy, 8-bit 8051 class, DSP, PDA class; (2) the recognition style: command-based, natural, or dialogue; (3) the recognition environment: in a car, at home, or in an open space. Two configuration files are presented below to illustrate how configuration files are used to realize a speech recognition application. Note that, normally, there are two types of speech recognition applications: voice command-based and dialogue style. [Example 1] Voice-controlled home appliances. [Scenario] The user may use his wireless portable device with the installed C-DSR client to control a TV, lamp, or other home appliances within a WLAN environment. [Configuration Settings] 1. Speech Feature Compression Format: this may be PCM, 8051-class, LPC-based Cepstrum, or MFCC (Mel-Frequency Cepstral Coefficients), depending on the computational cost and communication bandwidth (bit-rates) of the client device.", "type_str": "figure", "uris": null, "num": null }, "FIGREF5": { "text": "5. The Number of Results: the user may configure the number of recognition results, say N. The C-DSR Client will then display the first N most likely candidates to the user. 6. Recognition Style: this can be Command-based or Dialogue. 
The grammar format for the Command-based style uses the ABNF format shown in the following: [The Setup at the C-DSR Server] When the configuration file and speech data are received from the client, the C-DSR Server performs recognition tasks according to the configuration. In the grammar example shown above, exactly one keyword from the $action1 group (open/close) and one from the $keyword1 group (light/fan/tv/radio) will be recognized. The Action Center on the C-DSR Server will then perform the corresponding action, such as \"turn on light.\" [Example 2] Tourist Information Guide of Yu-Shan National Park", "type_str": "figure", "uris": null, "num": null }, "TABREF1": { "html": null, "text": "", "type_str": "table", "content": "
[Table residue: only the column header Attribute survives; the table categorized Configuration File fields by attribute (rSR, rDSR, I-DIY).]
", "num": null }, "TABREF2": { "html": null, "text": "Table 2above, each module has a well-defined interface and, for a particular module, several implementations are available. To each implementation, one CF name is attached, and it can be switched or configured. For instance, in the End Point Detection (EPD) module, there are three options, EPD_NONE, EPD_VFR and EPD_ENG, each representing a different algorithm used to implement the EPD function. The C-DSR platform also allows the system maintainer to adopt a new method for each module.", "type_str": "table", "content": "
Configurable Module: Parameter options
Energy Normalization: None / FRAME_ENG_NORM
Front-end Filter: FF_NONE / FF_LOW_PASS / FF_1POLE_1ZERO
Feature Extraction (if needed): FE_NONE / FE_MFCC / FE_LPC_CEP / FE_8051_CLASS
End Point Detection: EPD_NONE / EPD_VFR / EPD_ENG
Engine Type: EG_DIGITSTRING / EG_COMMAND / EG_KEYWORDSPOT / EG_LVCSR
Mean Subtraction Computing: MS_NONE / MS_STD
HMM Adjustments: HJ_NONE / HJ_PMC
HMM Adaptations: HP_NONE / HP_ADAPT_DEV / HP_ADAPT_SPKR
Viterbi Searching: VS_FULL_PATH / VS_NBEST / VS_BEAM
Post Operations: PO_NONE / PO_STD
", "num": null }, "TABREF4": { "html": null, "text": "Speaking speed: the speaking speed of a normal person is around five words per", "type_str": "table", "content": "
[Figure residue from the C-DSR implementation illustration: C-DSR Clients (a talking robot doll, AIBO using C-DSR, a PDA with IrDA control of home appliances, and a mobile stock/weather inquiry system) connect at home and in public areas through WLAN and GPRS to the C-DSR Server over the Internet/IP network.]
", "num": null } } } }