{ "paper_id": "O15-3000", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:10:24.526338Z" }, "title": "Computational Linguistics & Chinese Language Processing Aims and Scope", "authors": [ { "first": "Hung-Yu", "middle": [], "last": "Kao", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "hykao@mail.ncku.edu.tw" }, { "first": "Yih-Ru", "middle": [], "last": "Wang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jen-Tzung", "middle": [], "last": "Chien", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Yi-Chung", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "" }, { "first": "Chao-Chun", "middle": [], "last": "Liang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "" }, { "first": "Kuang-Yi", "middle": [], "last": "Hsu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "" }, { "first": "Chien-Tsung", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "" }, { "first": "Shen-Yun", "middle": [], "last": "Miao", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "" }, { "first": "Wei-Yun", "middle": [], "last": "Ma", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "" }, { "first": "Lun-Wei", "middle": [], "last": "Ku", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jung", "middle": [], "last": "Liau", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "" }, { "first": "Churn-Jung", "middle": [], "last": "Liau", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "" }, { "first": "Guan-Bin", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "gbchen@ikmlab.csie.ncku.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "autoencoder (SSAE) to reduce the dimensionality of the acoustic-prosodic features used in order to identify the key higher-level features. The Guest Editors of this special issue would like to thank all of the authors and reviewers for sharing their knowledge and experience at the conference. We hope this issue provide for directing and inspiring new pathways of NLP and spoken language research within the research field.", "pdf_parse": { "paper_id": "O15-3000", "_pdf_hash": "", "abstract": [ { "text": "autoencoder (SSAE) to reduce the dimensionality of the acoustic-prosodic features used in order to identify the key higher-level features. The Guest Editors of this special issue would like to thank all of the authors and reviewers for sharing their knowledge and experience at the conference. 
We hope this issue provides direction and inspiration for new pathways of NLP and spoken language research within the field.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The 27th Conference on Computational Linguistics and Speech Processing (ROCLING 2015) was held at National Chiao Tung University, Hsinchu, Taiwan, on October 1-2, 2015. ROCLING, which is sponsored by the Association for Computational Linguistics and Chinese Language Processing (ACLCLP), is the leading and most comprehensive conference on computational linguistics and speech processing in Taiwan, bringing together researchers, scientists, and industry participants from the fields of computational linguistics, information understanding, and speech processing to present their work and discuss recent trends in the field. This special issue presents extended and reviewed versions of six papers meticulously selected from ROCLING 2015, including three natural language processing papers and three speech processing papers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forewords", "sec_num": null }, { "text": "The first two papers, from Academia Sinica, focus on the math word problem solver. The first paper proposes a tag-based statistical framework to solve math word problems with understanding and reasoning. It analyzes the body and question texts into their associated tag-based logic forms and then performs inference on them. The proposed statistical approach alleviates the rule-coverage and ambiguity-resolution problems, and the tag-based design also provides the flexibility of handling various kinds of related questions with the same body logic form. This paper was also awarded the best paper of ROCLING 2015. The second paper proposes a math-operation-oriented approach to explain how the answers to math word problems are obtained. The authors adopt a specific template to generate the explanation text for each kind of math operator. This is also the first explanation-generation approach specifically tailored to math word problems. The third paper, from National Cheng Kung University, addresses the frequent bi-term problem in the biterm topic model (BTM). It proposes an improved word co-occurrence method to enhance topic models by applying word co-occurrence information to the BTM. The experimental results show that the enhanced PMI-\u03b2-BTM obtains better results on both regular short news titles and noisy tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forewords", "sec_num": null }, { "text": "The last three papers are spoken language processing papers. The first two are joint works of National Taiwan Normal University and Academia Sinica. The first one explores a novel use of both word and sentence representation techniques for extractive spoken document summarization; three variants of sentence ranking models built on top of such representation techniques are also proposed. The second one attempts to obtain noise-robust speech features through modulation spectrum processing of the original speech features. The authors explore the use of nonnegative matrix factorization (NMF) and its extensions on the magnitude modulation spectra of speech features so as to distill the most important and noise-resistant information cues that can benefit ASR performance. The last paper, from National Tsing Hua University, aims at using machine learning and signal processing techniques to automate the observation of human behaviors. This paper proposes to use stacked sparse", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forewords", "sec_num": null }, { "text": "Since Big Data mainly aims to explore the correlation between surface features rather than their underlying causal relationships, the Big Mechanism 1 program was initiated by DARPA 2 (from July 2014) to find out the \"why\" behind the \"Big Data\". However, the prerequisite is that the machine can read each document and learn its associated knowledge, which is the task of Machine Reading (MR) (Strassel et al., 2010) . Therefore, the Natural Language and Knowledge Processing Group, under the Institute of Information Science of Academia Sinica, formally launched a 3-year MR project (from January 2015) to attack this problem.", "cite_spans": [ { "start": 414, "end": 437, "text": "(Strassel et al., 2010)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."
}, { "text": "As a domain-independent MR system is complicated and difficult to build, the math word problem (MWP) (Mukherjee & Garain, 2008) is chosen as the first task to study MR for the following reasons: (1) Since the answer for the MWP cannot be extracted by simply performing keyword matching (as Q&A usually does), MWP thus can act as a test-bed for understanding the text and then drawing the answer via inference. (2) MWP usually possesses less complicated syntax and requires less amount of domain knowledge. It can let the researcher focus on the task of understanding and reasoning, not on how to build a wide-coverage grammar and acquire domain knowledge. 3The body part of MWP (which mentions the given information for solving the problem) usually consists of only a few sentences. Therefore, the understanding and reasoning procedure could be checked more efficiently. 4The MWP solver could have its own standalone applications, such as computer tutor, etc. It is not just a toy test case.", "cite_spans": [ { "start": 101, "end": 127, "text": "(Mukherjee & Garain, 2008)", "ref_id": "BIBREF11" }, { "start": 195, "end": 198, "text": "(1)", "ref_id": "BIBREF106" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "According to the framework of making the decision while there are several candidates, previous MWP algebra solvers can be classified into: (1) Rule-based approaches with logic inference (Bobrow, 1964; Slagle, 1965; Charniak, 1968 Charniak, , 1969 Dellarosa, 1986; Bakman, 2007) , which apply rules to get the answer (via identifying entities, quantities, operations, etc.) with a logic inference engine. (2) Rule-based approaches without logic inference (Gelb, 1971; Ballard & Biermann, 1979; Biermann & Ballard, 1980; Biermann et al., 1982; Fletcher, 1985; Hosseini et al., 2014) , which apply rules to get the answer without a logic inference engine. 3Purely statistics-based approaches (Kushman et al., 2014; Roy et al., 2015) , which use statistical models to identify entities, quantities, operations, and get the answer without conducting language analysis or inference.", "cite_spans": [ { "start": 139, "end": 142, "text": "(1)", "ref_id": "BIBREF106" }, { "start": 186, "end": 200, "text": "(Bobrow, 1964;", "ref_id": null }, { "start": 201, "end": 214, "text": "Slagle, 1965;", "ref_id": null }, { "start": 215, "end": 229, "text": "Charniak, 1968", "ref_id": null }, { "start": 230, "end": 246, "text": "Charniak, , 1969", "ref_id": null }, { "start": 247, "end": 263, "text": "Dellarosa, 1986;", "ref_id": null }, { "start": 264, "end": 277, "text": "Bakman, 2007)", "ref_id": null }, { "start": 454, "end": 466, "text": "(Gelb, 1971;", "ref_id": null }, { "start": 467, "end": 492, "text": "Ballard & Biermann, 1979;", "ref_id": null }, { "start": 493, "end": 518, "text": "Biermann & Ballard, 1980;", "ref_id": null }, { "start": 519, "end": 541, "text": "Biermann et al., 1982;", "ref_id": null }, { "start": 542, "end": 557, "text": "Fletcher, 1985;", "ref_id": null }, { "start": 558, "end": 580, "text": "Hosseini et al., 2014)", "ref_id": null }, { "start": 689, "end": 711, "text": "(Kushman et al., 2014;", "ref_id": null }, { "start": 712, "end": 729, "text": "Roy et al., 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "The main problem of the rule-based approaches mentioned above is that the coverage rate problem is serious, as rules with wide coverage are difficult and expensive to construct. Also, it is awkward in resolving ambiguity problems. Besides, since they adopt Go/No-Go approach (unlike statistical approaches which can adopt a large Top-N to have high including rates), the error accumulation problem would be severe. On the other hand, the main problem of those approaches not adopting logic inference is that they usually need to implement a new handling procedure for each new type of problems (as the general logic inference mechanism is not adopted). Also, as there is no inference engine to generate the reasoning chain, additional effort would be required for generating the explanation. In contrast, the main problem of those purely statistical approaches is that they are sensitive to irrelevant Designing a with Reasoning and Explanation information (Hosseini et al., 2014) (as the problem is solved without first understanding the text). Also, the performance deteriorates significantly when they encounter complicated problems due to the same reason.", "cite_spans": [ { "start": 919, "end": 980, "text": "Reasoning and Explanation information (Hosseini et al., 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To avoid the problems mentioned above, a tag-based statistical framework which is able to perform understanding and reasoning is proposed in this paper. For each body statement (which specifies the given information), the text will be first analyzed into its corresponding semantic tree (with its anaphora/ellipses resolved and semantic roles labeled), and then converted into its associated logic form (via a few mapping rules). The obtained logic form is then mapped into its corresponding domain dependent generic concepts (also expressed in logic form). The same process also goes for the question text (which specifies the desired answer). Finally, the inference (based on the question logic form) is performed on the logic statements derived from the body text. Please note that a statistical model will be applied each time when we have choices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Furthermore, to reply any kind of questions associated with the given information, we keep all related semantic roles (such as agent, patient, etc.) and associated specifiers (which restrict the given quantity, and is freely exchangeable with the term tag) in the logic form (such as verb(q1,\u9032\u8ca8), agent(q1,\u6587\u5177\u5e97), head(n1 p ,\u7b46), color(n1 p ,\u7d05), etc.), which are regarded as various tags (or conditions) for selecting the appropriate information related to the given question. Therefore, the proposed approach can be regarded as a tag-based statistical approach with logic inference. Since extra-linguistic knowledge would be required for bridging the gap between the linguistic semantic form and the desired logic form, we will extract the desired background knowledge (ontology) from E-HowNet (Chen et al., 2005) for verb-entailment.", "cite_spans": [ { "start": 792, "end": 811, "text": "(Chen et al., 2005)", "ref_id": "BIBREF66" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "In comparison with those rule-based approaches, the proposed approach alleviates the ambiguity resolution problem (i.e., selecting the appropriate semantic tree, anaphora/co-reference, domain-dependent concepts, inference rules) via a statistical framework. Furthermore, our tag-based approach provides the flexibility of handling various kinds of possible questions with the same body logic form. On the other hand, in comparison with those purely statistical approaches, the proposed approach is more robust to the irrelevant information (Hosseini et al., 2014) and could provide the answer more precisely (as the semantic analysis and the tag-based logic inference are adopted). In addition, with the given reasoning chain, the explanation could be more easily generated. Last, since logic inference is a general problem solving mechanism, the proposed approach can solve various types of problems that the inference engine could handle (i.e., not only arithmetic or algebra as most approaches aim to).", "cite_spans": [ { "start": 540, "end": 563, "text": "(Hosseini et al., 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The contributions of our work are: (1) Proposing a semantic composition form for abstracting the text meaning to perform semantic reasoning; (2) Proposing verb entailment via E-HowNet for bridging the lexical gap (Moldovan & Rus, 2001 ); (3) Proposing a tag-based logic representation to adopt one body logic form for handling various possible questions; (4) Proposing a set of domain dependent (for math algebra) generic concepts for solving MWP; (5) Proposing a statistical solution type classifier to indicate the way for solving MWP; (6) Proposing a semantic matching method for performing unification; 7Proposing a statistical framework for performing reasoning from the given text.", "cite_spans": [ { "start": 213, "end": 234, "text": "(Moldovan & Rus, 2001", "ref_id": null }, { "start": 355, "end": 358, "text": "(4)", "ref_id": "BIBREF108" }, { "start": 448, "end": 451, "text": "(5)", "ref_id": "BIBREF109" }, { "start": 538, "end": 541, "text": "(6)", "ref_id": "BIBREF110" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Since we will have various design options in implementing a math word problem solver, we need some guidelines to judge which option is better when there is a choice. Some principles are thus proposed as follows for this purpose:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design Principles", "sec_num": "2." }, { "text": "(1) Solutions should be given via understanding and inference (versus the template matching approach proposed in (Kushman et al., 2014) , as the math word problem is just the first case for our text understanding project and we should be able to explain how the answer is obtained.", "cite_spans": [ { "start": 113, "end": 135, "text": "(Kushman et al., 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Design Principles", "sec_num": "2." }, { "text": "(2) The expressiveness of the adopted body logical form should be powerful enough for handling various kinds of possible questions related to the body, which implies that logic form transformation should be information lossless. In other words, all the information carried by the semantic representation should be kept in the corresponding logical form. 
It also implies that the associated body logical form should be independent on the given question (as we don't know which question will be asked later).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design Principles", "sec_num": "2." }, { "text": "(3) The dynamically constructed knowledge should not favor any specific kind of problem/question. This principle suggests that the Inference Engine (IE) should regard logic statements as a flat list, instead of adopting a pre-specified hierarchical structure (e.g., the container adopted in (Hosseini et al., 2014) , which is tailored to some kinds of problems/questions). Any desired information will be located from the list via the same mechanism according to the specified conditions. 4The Logic Form Converter (LFC) should be compositional (Moldovan & Rus, 2001 ) after giving co-reference and solution type 2 , which implies that each sub-tree (or nonterminal node) should be independently transformed regardless of other nodes not under it, and the logic form of a given nonterminal node is formed by concatenating the corresponding logic forms of its associated child-nodes. (5) The IE should only deal with domain dependent generic concepts (instead of complicated 2 Solution Type specifies the desired mathematic utility/operation that LFC should adopt (see Section", "cite_spans": [ { "start": 291, "end": 314, "text": "(Hosseini et al., 2014)", "ref_id": null }, { "start": 545, "end": 566, "text": "(Moldovan & Rus, 2001", "ref_id": null }, { "start": 883, "end": 886, "text": "(5)", "ref_id": "BIBREF109" } ], "ref_spans": [], "eq_spans": [], "section": "Design Principles", "sec_num": "2." }, { "text": "Designing a with Reasoning and Explanation problem dependent concepts); otherwise, it would be too tedious. Take the problem \"100 \u9846\u7cd6\u88dd\u6210 5 \u76d2\u7cd6, 1 \u76d2\u7cd6\u88dd\u5e7e\u9846\u7cd6? (If 100 candies are packed into 5 boxes, how many candies are there in a box?)\" as an example. Instead of using a problem-dependent First Order Logic (FOL) predicate like \"\u88dd\u6210(100,\u9846,\u7cd6,5,\u76d2,\u7cd6)\", the problem-independent FOL functions/predicates like \"quan(q1, \u9846 , \u7cd6 ) = 100\", \"quan(q2, \u76d2 , \u7cd6 ) = 5\", \"qmap(m1,q1,q2)\", and \"verb(m1,\u88dd\u6210)\" are adopted to represent the facts provided by problem description 3 . 6The LFC should know the global skeleton of the whole given text (which is implicitly implied by the associated semantic segments linked via the given co-reference information) to achieve a reasonable balance between it and the IE. 7The IE should separate the knowledge from the reasoning procedures to ease porting, which denotes that those domain dependent concepts and inference rules should be kept in a declarative form (and could be imported from some separated files); and the inference rules should not be a part of the IE's source code. The block diagram of the proposed MWP solver is shown in Figure 1 . First, every sentence in the MWP, including both body text and the question text, is analyzed by the Language Analysis module, which transforms each sentence into its corresponding Semantic Representation (SR) tree. The sequence of SR trees is then sent to the Problem Resolution module, which adopts logic inference approach to obtain the answer for each question. 
Finally,", "cite_spans": [ { "start": 550, "end": 551, "text": "3", "ref_id": "BIBREF107" } ], "ref_spans": [ { "start": 1157, "end": 1165, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "for details).", "sec_num": "3.3" }, { "text": "Yi-Chung Lin et al. the Explanation Generation module will explain how the answer is obtained (in natural language text) according to the given reasoning chain.", "cite_spans": [ { "start": 9, "end": 19, "text": "Lin et al.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "As the figure depicted, the Problem Resolution module in our system consists of three components: Solution Type Classifier (STC), LFC and IE. The STC suggests a scenario to solve the problem for every question in an MWP. In order to perform logic inference, the LFC first extracts the related facts from the given SR tree and then represents them as FOL predicates/functions (Russell & Norvig, 2009) . It also transforms each question into an FOL-like utility function according to the assigned solution type. Finally, according to inference rules, the IE derives new facts from the old ones provided by the LFC. Besides, it is also responsible for providing utilities to perform math operations on related facts.", "cite_spans": [ { "start": 375, "end": 399, "text": "(Russell & Norvig, 2009)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "The entities (like noun phrases) or events (like verb phrases) described in the given sentence may be associated with modifiers, which usually restrict the scope (or specify the property) of the entities/events that they are associated. Since the system does not know which kind of questions will be asked when it reads the body sentences, modifiers should be also included in logic expressions (act as specifiers) and involved in binding. Therefore, the reification technique (Jurafsky & Martin, 2000) is employed to map the nonterminals in the given semantic tree, including verb phrases and noun phrases, into quantified objects which can be related to other objects via specified relations. For example, the logic form of the noun phrase \"\u7d05\u7b46(red pens)\" would be \"color(n1,\u7d05)&head(n1,\u7b46)\", where \"n1\" is an identified object referring to the noun phrase. Usually, the specifiers in the Body Logic Form (BLF) are optional in Question Logic Form (QLF), as the body might contain irrelevant text. On the contrary, the specifiers in the QLF are NOT optional (at least in principle) in BLF (i.e., the same (or corresponding) specifier must exist in BLF). This restriction is important as we want to make sure that each argument (which will act as a filtering-condition) in the QLF will be exactly matched to keep irrelevant facts away during the inference procedure.", "cite_spans": [ { "start": 477, "end": 502, "text": "(Jurafsky & Martin, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "Take the MWP \"\u6587\u5177\u5e97\u9032\u8ca8 2361 \u679d\u7d05\u7b46\u548c 1587 \u679d\u85cd\u7b46(A stationer bought 2361 red pens and 1587 blue pens), \u6587\u5177\u5e97\u5171\u9032\u8ca8\u5e7e\u679d\u7b46(How many pens did the stationer buy)?\" as an example. The STC will assign the operation type \"Sum\" to it. 
The LFC will extract the following facts from the first sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "quan(q1,\u679d,n1 p )=2361&verb(q1,\u9032\u8ca8)&agent(q1,\u6587\u5177\u5e97)&head(n1 p ,\u7b46)&color(n1 p ,\u7d05) quan(q2,\u679d,n2 p )=1587&verb(q2,\u9032\u8ca8)&agent(q2,\u6587\u5177\u5e97)&head(n2 p ,\u7b46)&color(n2 p ,\u85cd)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "Designing a Tag-Based Statistical Math Word Problem Solver", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "The quantity-fact \"2361 \u679d\u7d05\u7b46(2361 red pens)\" is represented by \"quan(q1,\u679d,n1 p )=2361\", where the argument \"n1 p \" 4 denotes \"\u7d05\u7b46(red pens)\" due to the facts \"head(n1 p ,\u7b46)\" and \"color(n1 p , \u7d05 )\". Also, those specifiers \"verb(q1, \u9032 \u8ca8 )&agent(q1, \u6587 \u5177 \u5e97 )&head(n1 p , \u7b46)&color(n1 p ,\u7d05)\" are regarded as various tags which will act as different conditions for selecting the appropriate information related to the question specified later. Likewise, the quantity-fact \"1587 \u679d\u85cd\u7b46(1587 blue pens)\" is represented by \"quan(q2,\u679d,n2 p )=1587\". The LFC also issues the utility call \"ASK Sum(quan(?q,\u679d,\u7b46),verb(?q,\u9032\u8ca8)&agent(?q,\u6587\u5177\u5e97))\" (based on the assigned solution type) for the question. Finally, the IE will select out two quantity-facts \"quan(q1, \u679d ,n1 p )=2361\" and \"quan(q2, \u679d ,n2 p )=1587\", and then perform \"Sum\" operation on them to obtain \"3948\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "with Reasoning and Explanation", "sec_num": "7" }, { "text": "If the question in the above example is \"\u6587\u5177\u5e97\u5171\u9032\u8ca8\u5e7e\u679d\u7d05\u7b46(How many red pens did the stationer buy)?\", the LFC will generate the following facts and utility call for this new question:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "with Reasoning and Explanation", "sec_num": "7" }, { "text": "head(n3 p ,\u7b46)&color(n3 p ,\u7d05) ASK Sum(quan(?q,\u679d,n3 p ),verb(?q,\u9032\u8ca8)&agent(?q,\u6587\u5177\u5e97))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "with Reasoning and Explanation", "sec_num": "7" }, { "text": "As the result, the IE will only select the quantity-fact \"quan(q1,\u679d,n1 p )=2361\", because the specifier in QLF (i.e., \"color(n3 p ,\u7d05)\") cannot match the associated specifier \"\u85cd(blue)\" (i.e., \"color(n2 p ,\u85cd)\") of \"quan(q2,\u679d,n2 p )=1587\". After performing \"Sum\" operation on it, we thus obtain the answer \"2361\". Each module will be described in detail as follows (We will skip Explanation Generation due to space limitation. Please refer to (Huang et al., 2015) for the details).", "cite_spans": [ { "start": 440, "end": 460, "text": "(Huang et al., 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "with Reasoning and Explanation", "sec_num": "7" }, { "text": "Since the Chinese sentence is a string of characters with no delimiters to mark word boundaries, the first step for analyzing the MWP text is to segment each given sentence string into its corresponding word sequence. Our Chinese word segmentation system (Chen & Ma, 2002; Ma & Chen, 2003) adopts a modularized approach. 
Independent modules were designed to solve the problems of segmentation ambiguities and identifying unknown words. Segmentation ambiguities are resolved by a hybrid method of using heuristic and statistical rules. Regular-type unknown words are identified by associated regular expressions, and irregular types of unknown words are detected first by their occurrence and then extracted by morphological rules with statistical and morphological constraints. Part-of-Speech tagging is also included in the segmentation system for both known and unknown words by using HMM models and morphological rules. Please refer to (Tseng & Chen, 2002; Tsai & Chen, 2004) for the details.", "cite_spans": [ { "start": 255, "end": 272, "text": "(Chen & Ma, 2002;", "ref_id": null }, { "start": 273, "end": 289, "text": "Ma & Chen, 2003)", "ref_id": null }, { "start": 939, "end": 959, "text": "(Tseng & Chen, 2002;", "ref_id": null }, { "start": 960, "end": 978, "text": "Tsai & Chen, 2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Language Analysis (Jurafsky & Martin, 2000)", "sec_num": "3.1" }, { "text": "In order to design a high precision and broad coverage Chinese parser, we had constructed a Chinese grammar via generalizing and specializing the grammar extracted from Sinica Treebank (Hsieh et al., 2013; Hsieh et al., 2014) to achieve this goal. The designed F-PCFG (Feature-embedded Probabilistic Context-free Grammar) parser was based on the probabilities of the grammar rules. It evaluates the plausibility of each syntactic structure to resolve parsing ambiguities. We refine the probability estimation of a syntactic tree (for tree-structure disambiguation) by incorporating word-to-word association strengths. The word-to-word association strengths were self-learned from parsing the CKIP corpus (Hsieh et al., 2007) . A semantic-role assignment capability is also incorporated into the system.", "cite_spans": [ { "start": 185, "end": 205, "text": "(Hsieh et al., 2013;", "ref_id": null }, { "start": 206, "end": 225, "text": "Hsieh et al., 2014)", "ref_id": null }, { "start": 704, "end": 724, "text": "(Hsieh et al., 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Language Analysis (Jurafsky & Martin, 2000)", "sec_num": "3.1" }, { "text": "Once the syntactic structure (with semantic roles) for a sentence is obtained, its semantic representation can be further derived through a process of semantic composition (from lexical senses) and achieved near-canonical representations. To represent lexical senses, we had implemented a universal concept-representation mechanism, called E-HowNet (Chen et al., 2005; Huang et al., 2014) . It is a frame-based entity-relation model where word senses are expressed by both primitives (or well-defined senses) and their semantic relations. We utilize E-HowNet to disambiguate word senses by referencing its ontology and the related synsets of the target words.", "cite_spans": [ { "start": 349, "end": 368, "text": "(Chen et al., 2005;", "ref_id": "BIBREF66" }, { "start": 369, "end": 388, "text": "Huang et al., 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Composition", "sec_num": "3.1.1" }, { "text": "To solve math word problems, it is crucial to know who or what entity is being talked about in the descriptions of problems. This task is called reference resolution, and it can be classified into two types -anaphora resolution and co-reference resolution. 
Anaphora resolution is the task of finding the antecedent for a single pronoun while co-reference is the task of finding referring expressions (within the problem description) that refer to the same entity. We attack these two types of resolution mainly based on assessing whether a target pronoun/entity coincides its referent candidate in E-HowNet definition. For example, the definition of \"\u5979 (she)\" is \"{3rdPerson|\u4ed6\u4eba:gender={female|\u5973 }}\". Therefore, it would restrict that the valid referent candidates must be a female human, and result in a much fewer number of candidates for further consideration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Composition", "sec_num": "3.1.1" }, { "text": "In the following example, the semantic composition, anaphora resolution and co-reference resolution are shown in the 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Composition", "sec_num": "3.1.1" }, { "text": "quantifier={\uff16\uff12\u5f35(3)} } } \u5c0f\u8c6a(1): {human|\u4eba:name={\"\u5c0f\u8c6a\"}} \u6709(2): {own|\u6709} \uff16\uff12\u5f35(3): quantifier={\u5f35.null|\u7121 \u7fa9:quantity={62}} \u8cbc\u7d19(4): {paper|\u7d19\u5f35: qualification ={sticky|\u9ecf}} {\u7d66(3): agent={\u54e5\u54e5(1)}, time={\u518d(2)}, goal={[x1]\u4ed6(4)}, theme={\u8cbc\u7d19(5.1): quantifier={\uff15\uff16\u5f35(5)} } } \u54e5\u54e5(1): {\u54e5\u54e5|ElderBrother} \u518d(2): frequency={again|\u518d} \u7d66(3): {give|\u7d66} \u4ed6(4): {3rdPerson|\u4ed6\u4eba} \uff15\uff16\u5f35(5): quantifier={\u5f35.null|\u7121 \u7fa9:quantity={56}} \u8cbc\u7d19(5.1): {paper|\u7d19 \u5f35:qualification={sticky|\u9ecf}} {\u6709(4): theme={[x1]\u5c0f\u8c6a(1)}, time={\u73fe\u5728(2)}, quantity={\u5171(3)}, range={\u8cbc\u7d19(6): quantifier={\u5e7e\u5f35(5)} } } \u5c0f\u8c6a(1): {human|\u4eba:name={\"\u5c0f\u8c6a \"}} \u73fe\u5728(2): {present|\u73fe\u5728} \u5171(3): {all|\u5168} \u6709(4): {own|\u6709} \u5e7e\u5f35(5): quantifier={\u5f35.null|\u7121\u7fa9: \u5e7e.quantity={Ques|\u7591\u554f}} \u8cbc\u7d19(6): {paper|\u7d19 \u5f35:qualification={sticky|\u9ecf}}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Composition", "sec_num": "3.1.1" }, { "text": "We use numbers following words to represent words'positions in a sentence. For instance, \"\u6709(2)\" is the second word in the first sentence. The semantic representation uses a near-canonical representation form, where semantic role labels, such as \"agent\", \"theme\" and \"range\", are marked on each word, and each word is identified with its sense, such as \"\u6709(2): {own|\u6709}\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Composition", "sec_num": "3.1.1" }, { "text": "The co-referents of all sentences in a math problem are marked with the same \"x[#]\". For example, we mark the proper noun \"\u5c0f\u8c6a(1)\" with \"[x1]\" to co-refer with the pronoun \"\u4ed6(4)\" and the second occurrence of the proper noun \"\u5c0f\u8c6a(1)\". In the second sentence of the example, the head of the quantifier \"\uff15\uff16\u5f35\" is omitted in the text but it is recovered in the semantic representation and annotated with a decimal point in its word position. 
The missing head is recovered as \"\u8cbc\u7d19(5.1)\", which is an extra word with its constructed position based on decimal point.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Composition", "sec_num": "3.1.1" }, { "text": "However, even we know what the given math word problem means, we still might not know how to solve it if we have not been taught for solving the same type of problems in a math class before (i.e., without enough math training/background). Therefore, we need to collect various types of math operations (e.g., addition, subtraction, multiplication, division, sum, etc.), aggregative operations (e.g., Comparison, Set-Operation, etc.) and specific problem types (e.g., Algebra, G.C.D., L.C.M., etc.) that have been taught in the math class. And the LFC needs to know which math operation, aggregative operation or specific problem type should be adopted to solve the given problem. Therefore, we need to map the given semantic representation to a specific problem type. However, this mapping is frequently decided based on the global information across various input sentences (even across body text and question text). Without giving the corresponding mathematic utility/operation, the logic form transformation would be very complicated. A Solution Type Classifier (STC) is thus proposed to decide the desired utility/operation that LFC should adopt (i.e., to perform the mapping).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution Type Identification", "sec_num": "3.2" }, { "text": "Currently, 16 different solution types are specified (in Table 1 ; most of them are self-explained with their names) to cover a wide variety of questions found in our elementary math word corpus. They are listed according to their frequencies found in 75 manually labeled questions. The STC is similar to the Question Type Classifier commonly adopted at Q&A (Loni, 2011). For mathematic operation type, it will judge which top-level math operation is expected (based on the equation used to get the final answer). For example, if the associated equation is \"Answer = q1 -(q2 \u00d7 q3)\", then \"Subtraction\" will be the assigned math operation type, which matches human reasoning closely.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 64, "text": "Table 1", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Solution Type Identification", "sec_num": "3.2" }, { "text": "with frequency in the training set (75 questions in total).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1. Various solution types for solving elementary school math word problems", "sec_num": null }, { "text": "Multiply (24%) Utility (6%) Surplus (4%) L.C.M (2%) Sum (14%) Algebra (5%) Difference (4%) G.C.D (2%) Subtraction (12%) Comparison (5%) Ceil-Division (3%) Addition (1%) Floor-Division (7%) Ratio (5%) Common-Division (3%) Set-Operation (1%)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1. Various solution types for solving elementary school math word problems", "sec_num": null }, { "text": "Take the following math word problem as an example, \"\u4e00\u8258\u8f2a\u8239 20 \u5206\u9418\u53ef\u4ee5\u884c\u99db 25 \u516c\u91cc(A boat sails 25 kilometers in 20 minutes)\uff0c 2.5 \u5c0f\u6642\u53ef\u4ee5\u884c\u99db\u591a\u5c11\u516c\u91cc(How far can it sail in 2.5 hours)\uff1f\". Its associated equation is \"Answer = 150 \u00d7 (25\u00f720)\". 
Therefore, the top-level operation is \"Multiplication\", and it will be the assigned solution type for this example. However, for the problem \"\u67d0\u6578\u4e58\u4ee5 11(Multiply a number with 11)\uff0c \u518d\u9664\u4ee5 4 \u7684 \u7b54\u6848\u662f 22(then divide it by 4. The answer is 22)\uff0c \u67d0\u6578\u662f\u591a\u5c11(What is the number)\uff1f\", its", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1. Various solution types for solving elementary school math word problems", "sec_num": null }, { "text": "Designing a with Reasoning and Explanation associated equation is \"Answer\u00d711\u00f74 = 22\"; since there is no specific natural top-level operation, the \"Algebra\" solution type will be assigned 5 .", "cite_spans": [ { "start": 187, "end": 188, "text": "5", "ref_id": "BIBREF109" } ], "ref_spans": [], "eq_spans": [], "section": "Table 1. Various solution types for solving elementary school math word problems", "sec_num": null }, { "text": "The STC will check the SR trees from both the body and the question to make the decision. Therefore, it provides a kind of global decision, and the LFC will perform logic transformation based on it (i.e., the statistical model of the LFC is formulated to condition on the solution type). Currently, a SVM classifier with linear kernel functions (Chang & Lin, 2011) is used, and it adopted four different kinds of feature-sets: (1) all word unigrams in the text, (2) head word of each nonterminal (inspired by the analogous feature adopted in (Huang et al., 2008) for question classification), (3) E-HowNet semantic features, and 4pattern-matching indicators (currently, patterns/rules are manually created).", "cite_spans": [ { "start": 345, "end": 364, "text": "(Chang & Lin, 2011)", "ref_id": null }, { "start": 542, "end": 562, "text": "(Huang et al., 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Table 1. Various solution types for solving elementary school math word problems", "sec_num": null }, { "text": "A two-stage approach is adopted to transform the SR tree of an input sentence to its corresponding logic forms. In the first stage, the syntactic/semantic relations between the words are deterministically transformed into their domain-independent logic forms. Afterwards, crucial generic math facts and the possible math operations are non-deterministically generated (as domain-dependent logic forms) in the second stage. Basically, logic forms are expressed with the first-order logic (FOL) formalism (Russell & Norvig, 2009) In the first stage, FOL predicates are generated by traversing the input SR tree which mainly depicts the syntactic/semantic relations between its words (with associated word-senses). For example, the SR tree of the sentence \"100 \u9846\u7cd6\u88dd\u6210 5 \u76d2(If 100 candies are packed into 5 boxes)\" is shown as follows: {\u88dd\u6210(t1); theme={\u7cd6(t2); quantity=100(t3); unit=\u9846(t4)}; result={\u7cd6(t5); quantity=5(t6); unit=\u76d2(t7)} } Where \"theme\" and \"result\" are semantic roles, and information within brace are their associated attributes. Also, the symbols within parentheses are the identities of the terminals in the SR tree. Note that the terminal t5 is created via zero anaphora resolution in the language analysis phase. The above SR tree is transformed into the following FOL predicates separated by the logic AND operator &. verb(v1, t1) The above FOL predicates are also called logic-form-1 (LF1) facts. 
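As a concrete illustration of this first stage, the short sketch below walks a simplified SR tree (encoded as nested dictionaries, which is only a stand-in for the real semantic representation) and emits the LF1 predicates for the example sentence; the identifier naming and attribute set are simplifying assumptions rather than the actual converter.

```python
# Simplified SR tree for "100 顆糖裝成 5 盒" (assumed encoding).
sr_tree = {
    "verb": "裝成",
    "roles": {
        "theme":  {"head": "糖", "quantity": 100, "unit": "顆"},
        "result": {"head": "糖", "quantity": 5,   "unit": "盒"},
    },
}

def to_lf1(tree):
    """Traverse the SR tree and emit domain-independent LF1 predicates."""
    facts, count = [], {"v": 0, "n": 0}

    def fresh(prefix):
        count[prefix] += 1
        return f"{prefix}{count[prefix]}"

    v_id = fresh("v")
    facts.append(("verb", v_id, tree["verb"]))
    for role, np in tree["roles"].items():
        n_id = fresh("n")
        facts.append((role, v_id, n_id))            # e.g. theme(v1, n1)
        for attr, value in np.items():              # head, quantity, unit, ...
            facts.append((attr, n_id, value))       # e.g. quantity(n1, 100)
    return facts

for fact in to_lf1(sr_tree):
    print(fact)
# ('verb', 'v1', '裝成'), ('theme', 'v1', 'n1'), ('head', 'n1', '糖'),
# ('quantity', 'n1', 100), ('unit', 'n1', '顆'), ('result', 'v1', 'n2'), ...
```

The second stage then reads the domain-dependent facts such as "quan(q1,顆,糖)=100" off these role predicates.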
The predicate names of LF1 facts are just the domain-independent syntactic/semantic roles of the constituents in a sub-tree. Therefore, the LF1 facts are also domain-independent.", "cite_spans": [ { "start": 503, "end": 527, "text": "(Russell & Norvig, 2009)", "ref_id": "BIBREF16" }, { "start": 1314, "end": 1338, "text": "AND operator &. verb(v1,", "ref_id": null }, { "start": 1339, "end": 1342, "text": "t1)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Logic Form Transformation", "sec_num": "3.3" }, { "text": "The domain-dependent logic-form-2 (LF2) facts are generated in the second stage. The LF2 facts are derived from some crucial generic math facts associated with quantities and relations between quantities. The FOL function \"quan(quan_id, unit_id, object_id) = number\" is used to describe the facts about quantities. The first argument is a unique identity to represent this quantity-fact. The other arguments and the function value describe the meaning of this fact. For example, \"qaun(q1,\u9846,\u7cd6) = 100\" means \"100 \u9846\u7cd6(100 candies)\" and \"qaun(q2, \u76d2 , \u7cd6 ) = 5\" means \"5 \u76d2 \u7cd6 (five boxes of candies)\". The FOL predicate \"qmap(map_id, quan_id 1 , quan_id 2 )\" (denotes the mapping from quan_id 1 to quan_id 2 ) is used to describe a relation between two quantity-facts, where the first argument is a unique identity to represent this relation. For example, \"qmap(m1, q1, q2)\" indicates that there is a relation between \"100 \u9846\u7cd6\" and \"5 \u76d2\u7cd6\". Now, LF2 facts are transformed by rules with a predefined set of lexico-semantic patterns as conditions. When more cases are exploited, a nondeterministic approach would be required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logic Form Transformation", "sec_num": "3.3" }, { "text": "In additional to domain-dependent facts like \"quan(\u2026)\" and \"qmap(\u2026)\", some auxiliary domain-independent facts associated with quan_id and map_id are also created in this stage to help the IE find the solution. The auxiliary facts of the quan_id are created by 4 steps: First, locate the nonterminal (said n q ) which quan_id is coming from. Second, traverse upward from n q to find the nearest nonterminal (said n v ) which directly connects to a verb terminal. Third, duplicate all LF1 facts whose first arguments are n v , except the one whose second argument is n q . Finally, replace the first arguments of the duplicated facts with quan_id. In the above Designing a with Reasoning and Explanation example, for the quantity-fact q1, n q is n1 and n v is v1 in the first and second steps. \"verb(v1, \u88dd\u6210)\" and \"result(v1, n2)\" will be copied at the third step. Finally, \"verb(q1, \u88dd\u6210)\" and \"result(q1, n2)\" are created. Likewise, \"verb(q2, \u88dd\u6210)\" and \"theme(q2, n1)\" are created for q2. The auxiliary facts of \"qmap(map_id, quan_id 1 , quan_id 2 )\" are created by copying all facts of the forms \"verb(quan_id 1 , *)\" and \"verb(quan_id 2 , *)\" (where \"*\" is a wildcard), and then replace all the first arguments of the copied facts with map_id. So, \"verb(m1, \u88dd\u6210)\" is created for m1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logic Form Transformation", "sec_num": "3.3" }, { "text": "Sometimes, the third argument of a quantity-fact (i.e., object_id) is a pseudo nonterminal identity created in the second stage. 
For example, the LF1 facts of the phrase \"2361 \u679d\u7d05\u7b46 (2361 red pens)\" are \"quantity(n1, 2361)\", \"unit(n1, \u679d)\", \"color(n1, \u7d05)\" and \"head(n1, \u7b46)\", where n1 is the nonterminal identity of the phrase. A pseudo nonterminal identity, said n1 p , is created to carry the terminals \"\u7d05(red)\" and \"\u7b46(pen)\" so that the quantity-fact \"2361 \u679d\u7d05\u7b46(2361 red pens)\" can be expressed as \"quan(q1, \u679d, n1 p ) = 2361\". The subscript \"p\" in n1 p indicates that n1 p is a pseudo nonterminal derived from the n1. To express that fact that n1 p carries the terminals \"\u7d05(red)\" and \"\u7b46(pen)\", two auxiliary facts \"color(n1 p , \u7d05)\" and \"head(n1 p , \u7b46)\" are also generated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logic Form Transformation", "sec_num": "3.3" }, { "text": "The questions in an MWP are transformed into FOL-like utility functions provided by the IE. One utility function is issued for each question to find the answer. For example, the question \"\u6587\u5177\u5e97\u5171\u9032\u8ca8\u5e7e\u679d\u7b46(How many pens did the stationer buy)\" is converted into \"ASK Sum(quan(?q,\u679d,\u7b46), verb(?q,\u9032\u8ca8)&agent(?q,\u6587\u5177\u5e97))\". This conversion is completed by two steps. First, select an IE utility (e.g., \"Sum(\u2026)\") to be called. Since the solution type of the question is \"Sum\", the IE utility \"Sum(function, condition) = value\" is selected. Second, instantiate the arguments of the selected IE utility. In this case, the first argument function is set to \"quan(?q, \u679d, \u7b46)\" because an unknown quantity fact is detected in the phrase \"\u5e7e\u679d\u7b46 (how many pens)\". Let the FOL variable \"?q\" play the role of quan_id in the steps of finding the auxiliary facts. The auxiliary facts \"verb(?q, \u9032\u8ca8)\" and \"agent(?q, \u6587\u5177\u5e97)\" are obtained to compose the second argument condition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logic Form Transformation", "sec_num": "3.3" }, { "text": "To sum up, the LFC transforms the semantic representation obtained by language analysis into domain dependent FOL expressions on which inference can be performed. In contrast, most researches of semantic parsing (Jurcicek et al., 2009; Das et al., 2014; Berant et al., 2013; Allen, 2014) seek to directly map the input text into the corresponding logic form. Therefore, across sentences deep analysis of the input text (e.g., anaphora and co-reference resolution) cannot be handled. The proposed two-stage approach (i.e., language analysis and then logic form transformation) thus provides the freedom to enhance the system capability for handling complicated problems which require deep semantic analysis.", "cite_spans": [ { "start": 212, "end": 235, "text": "(Jurcicek et al., 2009;", "ref_id": null }, { "start": 236, "end": 253, "text": "Das et al., 2014;", "ref_id": null }, { "start": 254, "end": 274, "text": "Berant et al., 2013;", "ref_id": null }, { "start": 275, "end": 287, "text": "Allen, 2014)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Logic Form Transformation", "sec_num": "3.3" }, { "text": "In our design, an IE is used to find the solution for an MWP. It is responsible for providing utilities to select desired facts and then obtaining the answer by taking math operations on those selected facts. 
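The fragment below sketches how a Sum-style utility of this kind might bind the FOL variable ?q, check the accompanying condition tags, and total the matching quantities, reusing the stationer example shown earlier; the flat-tuple representation and function names are assumptions for illustration only.

```python
# Assumed flat encodings of the facts from
# "A stationer bought 2361 red pens and 1587 blue pens."
quan_facts = [("q1", "枝", "n1p", 2361), ("q2", "枝", "n2p", 1587)]
tag_facts  = [("verb", "q1", "進貨"), ("agent", "q1", "文具店"),
              ("verb", "q2", "進貨"), ("agent", "q2", "文具店")]

def ask_sum(unit, obj, condition):
    """ASK Sum(quan(?q, unit, obj), condition): bind ?q to each candidate
    quantity fact, keep those whose tags satisfy the condition, sum the values."""
    total = 0
    for q_id, q_unit, q_obj, value in quan_facts:
        if q_unit != unit:
            continue
        # A real engine would also unify `obj` with q_obj semantically
        # (see the semantic matching discussion below); the sketch accepts
        # any object id here to stay short.
        if all((pred, q_id, arg) in tag_facts for pred, arg in condition):
            total += value
    return total

# ASK Sum(quan(?q,枝,筆), verb(?q,進貨)&agent(?q,文具店))  ->  3948
print(ask_sum("枝", "筆", [("verb", "進貨"), ("agent", "文具店")]))
```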
In addition, it is also responsible for using inference rules to derive new facts from the facts directly provided from the description of the MWP. Facts and inference rules are represented in first-order logic (FOL) (Russell & Norvig, 2009) .", "cite_spans": [ { "start": 426, "end": 450, "text": "(Russell & Norvig, 2009)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "In some simple cases, the desired answer can be calculated from the facts directly derived from the MWP. For those cases, the IE only needs to provide a utility function to calculate the answer. In the example of Figure 2 , quantities 300, 600, 186 and 234 are mentioned in the MWP. The LFC transforms the question into \"ASK Sum(quan(?q,\u6735,\u767e\u5408), verb(?q,\u8ce3\u51fa)&agent(?q,\u82b1\u5e97)\" to ask the IE to find the answer, where \"Sum(\u2026)\" is a utility function provided by the IE. The first argument of \"Sum(\u2026)\" is an FOL function to indicate which facts should be selected. In this case, the unification procedure of the IE will successfully unify the first argument \"quan(?q, \u6735, \u767e\u5408)\" with three facts \"quan(q2, \u6735, \u767e \u5408)\", \"quan(q3, \u6735, \u767e\u5408)\" and \"quan(q4, \u6735, \u767e\u5408)\". When unifying \"quan(?q, \u6735, \u767e\u5408)\" with \"quan(q2, \u6735, \u767e\u5408)\", the FOL variable \"?q\" will be bound/substituted with q2. The second argument of \"Sum(\u2026)\" (i.e., \"verb(?q,\u8ce3\u51fa)&agent(?q,\u82b1\u5e97)\") is the condition to be satisfied. Since \"quan(q2, \u6735, \u767e\u5408)\" is rejected by the given condition, \"Sum(\u2026)\" will sum the values of the remaining facts (i.e., q3 and q4) to obtain the desired answer \"420\".", "cite_spans": [], "ref_spans": [ { "start": 213, "end": 221, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "\u82b1\u5e97\u9032\u8ca8 300 \u6735\u73ab\u7470\u548c 600 \u6735\u767e\u5408(A flower store bought 300 roses and 600 lilies ), \u4e0a\u5348\u8ce3\u51fa 186 \u6735\u767e\u5408(It sold 186 lilies in the morning) \uff0c\u4e0b\u5348\u8ce3\u51fa 234 \u6735(It sold 234 lilies in the afternoon)\uff0c\u554f\u82b1\u5e97\u5171\u8ce3\u51fa\u5e7e\u6735\u767e\u5408(How many lilies did the flower store sell)? \"value 1 \u2212value 2 \" and \"value 1 \u00d7value 2 \" respectively. Difference returns the absolute value of Subtraction. CommonDiv returns the value of \"value 1 \u00f7value 2 \". FloorDiv returns the largest integer value not greater than \"value 1 \u00f7value 2 \" and CeilDiv returns the smallest integer value not less than \"value 1 \u00f7value 2 \". Surplus returns the remainder after division of value 1 by value 2. Figure 3 , the MWP provides the facts that \"\u7238\u7238(Papa)\" bought something but it does not provide any facts associated to the money that \"\u7238\u7238(Papa)\" must pay. As a result, we are not able to obtain the answer from the question logic form \"Sum(quan(?q,\u5143,#), verb(?q,\u4ed8)&agent(?q,\u7238\u7238))\". However, it is common sense that people must pay some money to buy something. 
The following inference rule implements this common-sense implication.", "cite_spans": [], "ref_spans": [ { "start": 618, "end": 626, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "quan(q1,\u6735,\u73ab\u7470)=300&verb(q1,\u9032\u8ca8)&agent(q1,\u82b1\u5e97)&\u2026 quan(q2,\u6735,\u767e\u5408)=600&verb(q2,\u9032\u8ca8)&agent(q2,\u82b1\u5e97)&\u2026 quan(q3,\u6735,\u767e\u5408)=186&verb(q3,\u8ce3\u51fa)&agent(q3,\u82b1\u5e97)&\u2026 quan(q4,\u6735,\u767e\u5408)=234&verb(q4,\u8ce3\u51fa)&agent(q4,\u82b1\u5e97)&\u2026 ASK Sum(quan(?q,\u6735,\u767e\u5408), verb(?q,\u8ce3\u51fa)&agent(?q,\u82b1\u5e97))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "quan (?q,?u,?o) ", "cite_spans": [ { "start": 5, "end": 15, "text": "(?q,?u,?o)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "\u2192quan($q,\u5143,#)=quan(?q,?u,?o)\u00d7?p&verb($q,\u4ed8)&agent($q,?a)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "In the above implication inference rule, \"quan(?q,?u,?o)&\u2026&price(?o,?p)\" is the premise of the rule and \"quan($q,\u5143,#)=\u2026&agent($q,?a)\" is the consequence of the rule. Please note that \"$q\" indicates a unique ID generated by the IE. \u7238\u7238\u8cb7\u4e86 3 \u672c 329 \u5143\u7684\u6545\u4e8b\u66f8\u548c 2 \u679d 465 \u5143\u7684\u92fc\u7b46(Papa bought three $329 books and two $465 pens)\uff0c\u7238\u7238\u5171\u8981\u4ed8\u5e7e\u5143(How much money did Papa pay)? After unifying this inference rule with the facts in Figure 3 , we can get two possible bindings (for q1 and q2, respectively). The following shows the binding of q1.", "cite_spans": [], "ref_spans": [ { "start": 403, "end": 411, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "quan(q1,\u672c,n1 p )=3&verb(q1,\u8cb7)&agent(q1,\u7238\u7238)&head(n1 p ,\u6545\u4e8b\u66f8)&price(n1 p ,329) quan(q2,\u679d,n2 p )=2&verb(q2,\u8cb7)&agent(q2,\u7238\u7238)&head(n2 p ,\u92fc\u7b46)&price(n2 p ,465) ASK Sum(quan(?q,\u5143,#),verb(?q,\u4ed8)&agent(?q,\u7238\u7238))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "quan(q1,\u672c,n1)&verb(q1,\u8cb7)&agent(q1,\u7238\u7238)&price(n1,329) \u2192quan(q3,\u5143,#)=quan(q1,\u672c,n1)\u00d7329&verb(q3,\u4ed8)&agent(q3,\u7238\u7238)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "Since \"quan(q1,\u672c,n1)\u00d7329 = 3\u00d7329 = 987\", the consequence of the above inference will generate three new facts \"quan(q3, \u5143, #) = 987\", \"verb(q3, \u4ed8)\" and \"agent(q3, \u7238\u7238)\". The semantics of the consequence is \"\u7238\u7238\u4ed8 987 \u5143(Papa pays 987 dollars)\". Likewise, the consequence of another binding of this inference rule will also generate three new facts \"quan(q4, \u5143, #) = 930\", \"verb(q4, \u4ed8)\" and \"agent(q4, \u7238\u7238)\". 
By taking these new facts into account, the utility call \"Sum(quan(?q,\u5143,#), verb(?q,\u4ed8)&agent(?q,\u7238\u7238))\" can thus return the correct answer \"1917\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "Furthermore, the unification process in a conventional IE is based on string-matching. The expression \"qaun(?q, \u679d, \u7b46)\" can be unified with a fact \"quan(q1, \u679d, \u7b46)\". However, it cannot be unified with the fact \"quan(q2, \u6735, \u82b1)\". String-matching guarantees that the IE will not operate on undesired quantities. But, it sometimes prevents the IE from operating on desired quantities. For instance, in Figure 4 , two quantity-facts \"quan(q1,\u679d,n1 p ) = 2361\" and \"quan(q2,\u679d,n2 p ) = 1587\" are converted from \"2361 \u679d\u7d05\u7b46(2361 red pens)\" and \"1587 \u679d\u85cd \u7b46(1587 blue pens)\", respectively. The first argument of \"Sum(\u2026)\" is \"quan(?q, \u679d, \u7b46)\" because \"\u5e7e\u679d\u7b46(how many pens)\" is concerned in the question. The conventional unification is not able to unify \"quan(?q, \u679d, \u7b46)\" to either \"quan(q1, \u679d, n1 p )\" or \"quan(q2, \u679d, n2 p )\" due to different strings of the third arguments. However, from the semantic point of view, \"quan(?q, \u679d, \u7b46)\" should be unified with both \"quan(q1, \u679d, n1 p )\" and \"quan(q2, \u679d, n2 p )\", because n1 p and n2 p represent \"\u7d05\u7b46(red pens)\" and \"\u85cd\u7b46(blue pens)\" respectively (and either one is a kind of \"\u7b46(pen)\").", "cite_spans": [], "ref_spans": [ { "start": 396, "end": 404, "text": "Figure 4", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "\u6587\u5177\u5e97\u9032\u8ca8 2361 \u679d\u7d05\u7b46\u548c 1587 \u679d\u85cd\u7b46(A stationer bought 2361 red pens and 1587 blue pens), \u6587\u5177\u5e97\u5171\u9032\u8ca8\u5e7e\u679d\u7b46(How many pens did the stationer buy)?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "quan(q1,\u679d,n1 p )=2361&verb(q1,\u9032\u8ca8)&agent(q1,\u6587\u5177\u5e97)&head(n1 p ,\u7b46)&color(n1 p ,\u7d05) quan(q2,\u679d,n2 p )=1587&verb(q2,\u9032\u8ca8)&agent(q2,\u6587\u5177\u5e97)&head(n2 p ,\u7b46)&color(n2 p ,\u85cd)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "ASK Sum(quan(?q,\u679d,\u7b46),verb(?q,\u9032\u8ca8)&agent(?q,\u6587\u5177\u5e97))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Operation", "sec_num": "3.4.1" }, { "text": "Therefore, a semantic matching method is proposed to be incorporated into the unification procedure. The idea is to match the semantic constituent sets of the two arguments Designing a with Reasoning and Explanation involved in unification. For example, while matching the third arguments of two functions during unifying the request 6 \"quan(?q, \u679d, \u7b46)\" with the fact \"quan(q1, \u679d, n1 p )\", IE will construct and compare two semantic constituent sets, one is for \"\u7b46\" and the other is for \"n1 p \". Let SCS denote \"semantic constituent set\" and SCS(x) denote the semantic constituent set of x. In our approach, \"SCS(\u7b46) = {\u7b46}\" 7 and \"SCS(n1 p ) = {\u7b46, color(\u7d05)}\" 8 . Since \"SCS(\u7b46)\" is covered by the \"SCS(n1 p )\", \"quan(?q, \u679d, \u7b46)\" can be unified with \"quan(q1, \u679d, n1 p )\". 
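The coverage test behind this semantic matching step can be sketched as follows; the set encoding and the helper name covers are ours, and the same check also handles the cases discussed next.

```python
def covers(fact_scs, request_scs):
    """Semantic matching: a request argument unifies with a fact argument
    when the request's semantic constituent set is covered by the fact's."""
    return request_scs <= fact_scs

# Semantic constituent sets for the stationer example (our own encoding).
scs = {
    "筆":  {"筆"},                  # a terminal: its SCS contains only the string itself
    "n1p": {"筆", "color(紅)"},     # "紅筆" (red pens)
    "n2p": {"筆", "color(藍)"},     # "藍筆" (blue pens)
    "n3p": {"筆", "color(紅)"},     # pseudo nonterminal for "幾枝紅筆" (how many red pens)
}

print(covers(scs["n1p"], scs["筆"]))   # True  -> red pens count for "幾枝筆"
print(covers(scs["n2p"], scs["筆"]))   # True  -> blue pens count for "幾枝筆"
print(covers(scs["n2p"], scs["n3p"]))  # False -> blue pens excluded for "幾枝紅筆"
```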
Likewise, \"quan(?q, \u679d, \u7b46)\" can be unified with \"quan(q2, \u679d, n2 p )\" because \"SCS(n2 p ) = {\u7b46, color(\u85cd)}\" covers \"SCS(\u7b46)\". As the result, the utility call \"Sum(quan(?q,\u679d,\u7b46), verb(?q,\u9032\u8ca8)&agent(?q,\u6587\u5177\u5e97))\" will obtain the correct answer \"3948\". On the other hand, if the question is \"\u6587\u5177\u5e97\u5171\u9032\u8ca8\u5e7e\u679d\u7d05\u7b46(How many red pens did the stationer buy)?\", the request will become \"quan(?q, \u679d, n3 p )\", where n3 p is a pseudo nonterminal consisting of the terminals \"\u7d05(red)\" and \"\u7b46(pen)\" under the noun phrase \"\u5e7e\u679d\u7d05\u7b46(how many red pens)\". Since \"SCS(n3 p ) = {\u7b46, color(\u7d05)}\", \"quan(?q, \u679d, n3 p )\" can be unified only with \"quan(q1, \u679d, n1 p )\". It cannot be unified with \"quan(q2, \u679d, n2 p )\" because SCS(n3 p ) cannot be covered by SCS(n2 p ). Therefore, the quantity of \"\u85cd\u7b46(blue pens)\" will not be taken into account for the question \"\u6587\u5177\u5e97\u5171\u9032\u8ca8\u5e7e\u679d\u7d05\u7b46(How many red pens did the stationer buy)?\".", "cite_spans": [ { "start": 334, "end": 335, "text": "6", "ref_id": "BIBREF110" }, { "start": 622, "end": 623, "text": "7", "ref_id": "BIBREF111" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 4. An example for requiring semantic-matching", "sec_num": null }, { "text": "Since we might adopt the verb \"\u8cb7(buy)\" in the body text \"\u7238\u7238\u8cb7\u4e86 3 \u672c 329 \u5143\u7684\u6545\u4e8b\u66f8 (Papa bought three $329 books)\", but adopt the verb \"\u4ed8(pay)\" in the question text \"\u7238\u7238\u5171\u8981 \u4ed8\u5e7e\u5143(How much money did Papa pay)\uff1f\" (as illustrated in the previous section), we need the knowledge that \"buy\" implies \"pay\" to perform logic binding (Moldovan & Rus, 2001 ). Verb entailment is thus required to identify whether there is an entailment relation between these two verbs (Hashimoto et al., 2009) . Verb entailment detection is an important function for the IE (de Salvo Braz et al., 2006) , as it can indicate the event progress and the status changing. In the math problem \"Bill had no money. Mom gave Bill two dollars, and Dad gave Bill three dollars. How much money Bill had then?\", the entailment between \"give (\u7d66)\" and \"have (\u6709)\" can update the status of Bill from \"no money\", then \"two dollars\", and to the final 6 An FOL predicate/function in an IE utility or in the premise of an inference rule is called a request. A request usually consists of FOL variables. 7 The SCS of a terminal consists of the terminal string only (e.g., \"SCS(\u7b46) = {\u7b46}\"). 8 SCS(n1 p ) is constructed by two steps. First, enumerate all facts whose first arguments are n1 p . Second, for each enumerated fact, denote the predicate name as Child-Role and the SCS of the second argument as Child-SCS. If Child-Role is \"head\", put the elements of Child-SCS into SCS(n1 p ). Otherwise, for each string s in Child-SCS, put the string \"Child-Role(s)\" into SCS(n1 p ). In the first step, the facts \"head(n1 p , \u7b46)\" and \"color(n1 p , \u7d05)\" are picked out. 
In the second step, the strings \"\u7b46\" and \"color(\u7d05)\" are put into SCS(n1 p ).", "cite_spans": [ { "start": 313, "end": 334, "text": "(Moldovan & Rus, 2001", "ref_id": null }, { "start": 447, "end": 471, "text": "(Hashimoto et al., 2009)", "ref_id": null }, { "start": 536, "end": 564, "text": "(de Salvo Braz et al., 2006)", "ref_id": null }, { "start": 895, "end": 896, "text": "6", "ref_id": "BIBREF110" }, { "start": 1045, "end": 1046, "text": "7", "ref_id": "BIBREF111" } ], "ref_spans": [], "eq_spans": [], "section": "Verb Entailment (Jurafsky & Martin, 2000)", "sec_num": "3.4.2" }, { "text": "answer \"five dollars\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb Entailment (Jurafsky & Martin, 2000)", "sec_num": "3.4.2" }, { "text": "We define the verb entailment problem as follows: given an ordered verb pair \"(v1, v2)\" as input, we want to detect whether the entailment relation 'v1 \u2192 v2' holds for this pair. E-HowNet (Chen et al., 2009; Huang et al., 2014) is adopted as the knowledge base for solving this problem. For the previous example verb \"give (\u7d66)\", we can find its conflation of events, which has been described as the phenomenon involved in predicates where the verb expresses a co-event or accompanying event, rather than the main event (Talmy, 1972; Haugen, 2009; Mateu, 2012) , from E-HowNet as shown in Figure 5 . The conflations of events are defined by predicates and their arguments (Huang et al., 2015) , as shown in Figure 5 . Verb entailment is vital for solving the elementary school math problem. Consider the following math problem as a simple example:", "cite_spans": [ { "start": 188, "end": 207, "text": "(Chen et al., 2009;", "ref_id": "BIBREF80" }, { "start": 208, "end": 227, "text": "Huang et al., 2014)", "ref_id": null }, { "start": 519, "end": 532, "text": "(Talmy, 1972;", "ref_id": null }, { "start": 533, "end": 546, "text": "Haugen, 2009;", "ref_id": null }, { "start": 547, "end": 559, "text": "Mateu, 2012)", "ref_id": null }, { "start": 671, "end": 691, "text": "(Huang et al., 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 588, "end": 596, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 706, "end": 714, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Verb Entailment (Jurafsky & Martin, 2000)", "sec_num": "3.4.2" }, { "text": "\u8001\u5e2b\u539f\u6709 9 \u679d\u925b\u7b46,\u9001\u7d66\u5c0f\u670b\u53cb 5 \u679d\u5f8c,\u8001\u5e2b\u9084\u6709\u5e7e\u679d\u7b46\uff1f(The teacher has 9 pencils. After giving his students 5 pencils, how many pencils he has?) The verbs are \"\u6709(have)\" and \"\u9001\u7d66(give as a gift)\" in this problem. If we want to derive the concept of \"\u6709(have)\" from \"\u9001\u7d66(give as a gift)\", we can follow the direction of their definitions in E-HowNet: \"\u9001\u7d66(give as a gift)\" is a hyponym of \"\u7d66(give)\", and one of its implication from the conflation of events is \"\u5f97\u5230(obtain)\", which is a hyponym of \"\u6709 (have)\". However, for the four verbs in this derivation, implications are defined only in the verb \"\u7d66(give)\". As we can see, given all those definitions of words in E-HowNet, we need to find a valid path (which may involve word sense disambiguation) to determine whether there is an entailment between two verbs. Therefore, we need a model to automatically build the relations of these verbs by finding paths from E-HowNet or other resources, and then rank or validate these paths to find the verb entailment. 
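Such a path search can be sketched as follows. The toy relation graph encodes only the relations mentioned above (送給 is a hyponym of 給, 給 implies 得到 through the conflation of events, and 得到 is a hyponym of 有); it is a hypothetical stand-in for the E-HowNet definitions, and no ranking or validation of competing paths is modelled.

```python
from collections import deque

# Hypothetical E-HowNet-like relations for the pencil example only.
# Edge types: hyponym-of ("isa") and implication from the conflation of events ("implies").
relations = {
    "送給": [("isa", "給")],
    "給":   [("implies", "得到")],
    "得到": [("isa", "有")],
}

def find_entailment_path(v1, v2):
    """Breadth-first search for a path v1 -> ... -> v2 over isa/implies edges.
    Returns the path if one exists, otherwise None."""
    queue = deque([[v1]])
    visited = {v1}
    while queue:
        path = queue.popleft()
        if path[-1] == v2:
            return path
        for _, nxt in relations.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_entailment_path("送給", "有"))  # ['送給', '給', '得到', '有']
```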
The conflation of events also indicates that when the entailed verb pair is detected, we may further map semantic roles of these two verbs to Designing a with Reasoning and Explanation proceed the inference and find the solution (Wang & Zhang, 2009 ).", "cite_spans": [ { "start": 1208, "end": 1227, "text": "(Wang & Zhang, 2009", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Verb Entailment (Jurafsky & Martin, 2000)", "sec_num": "3.4.2" }, { "text": "Since the accuracy rate of the Top-1 SR tree cannot be 100%, and the decisions made in the following phases (i.e., STC, LFC and IE) are also uncertain, we need a statistical framework to handle those non-deterministic phenomena. Under this framework, the problem of getting the desired answer for a given WMP can be formulated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Statistical Framework", "sec_num": "4." }, { "text": "\uf0b7 \uf028 \uf029 arg max P , Ans Ans Ans Body Qus \uf03d (1) Where \uf0b7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Statistical Framework", "sec_num": "4." }, { "text": "Ans is the obtained answer, Ans denotes a specific possible answer, Body denotes the given body text of the problem, and Qus denotes the question text of the problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Statistical Framework", "sec_num": "4." }, { "text": "The probability factor in the above equation can be further derived as follows via introducing some related intermediate/latent random variables: ST : Solution Type. In the above equation, we will further assume that P(Ans|IR,LF B ,LF Q )\u2248P(Rm), where Rm is the remaining logic factors in LF Q after the IE has bound it with LF B (with referring to the knowledge-base adopted). Last, Viterbi decoding (Seshadri & Sundberg, 1994) could be used to search the most likely answer with the above statistical model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Statistical Framework", "sec_num": "4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 P , P , ,", "eq_num": ", , , , , max P , , , , , , , max P" } ], "section": "Proposed Statistical Framework", "sec_num": "4." }, { "text": "To obtain the associated parameters of the model, we will first get the initial parameter-set from a small seed corpus annotated with various intermediate/latent variables involved in the model. Afterwards, we perform weakly supervised learning (Artzi & Zettlemoyer, 2013) on a partially annotated training-set (in which only the answer is annotated with each question). That is, we iteratively conduct beam-search (with the parameter-set obtained from the last iteration) on this partially annotated training-set starting from the given body text (and question text) to the final obtained answer. If the annotated answer match some of the obtained answers (within the search-beam), simply pick up the matched path with the maximal likelihood value. We then re-estimate the parameter-set (of the current iteration) from those picked up paths. 
If the annotated answer cannot match any of the obtained answers (within the search-beam), we simply drop that case, and then repeat the above re-estimation procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Statistical Framework", "sec_num": "4." }, { "text": "Currently, we have completed all the associated modules (including Word Segmenter, Syntactic Parser, Semantic Composer, STC, LFC, IE, and Explanation Generation), and have manually annotated 75 samples (from our elementary school math corpus) as the seed corpus (with syntactic tree, semantic tree, logic form, and reasoning chain annotated). Besides, we have cleaned the original elementary school math corpus and encoded it into the appropriate XML format. There are total 23,493 problems from six different grades; and the average number of words of the body text is 18.2 per problem. Table 3 shows the statistics of the converted corpus. We have completed a prototype system which is able to solve 11 different solution types (including Multiplication, Summation, Subtraction, Floor-Division, Algebra, Comparison, Surplus, Difference, Ceil-Division, Common-Division and Addition) , and have tested it on the seed corpus. The success of our pilot run has demonstrated the feasibility of the proposed approach. We plan to use the next few months to perform weakly supervised learning, as mentioned above, and fine tune the system.", "cite_spans": [ { "start": 730, "end": 883, "text": "(including Multiplication, Summation, Subtraction, Floor-Division, Algebra, Comparison, Surplus, Difference, Ceil-Division, Common-Division and Addition)", "ref_id": null } ], "ref_spans": [ { "start": 588, "end": 595, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Current Status and Future Work", "sec_num": "5." }, { "text": "To the best of our knowledge, all those MWP solvers proposed before year 2014 adopted the rule-based approach (Mukherjee & Garain, 2008 (Bobrow, 1964; Slagle, 1965) used format matching to map the input English sentence into the corresponding logic statement (all start with predicate \"EQUAL\"). Another system, WORDPRO, was developed by Fletcher (1985) to understand and solve simple one-step addition and subtraction arithmetic word problems designed for third-grade children. It did not accept the surface representation of text as input. Instead it begins with a set of propositions (manually created) that represent the text's meaning. Afterwards, the problem was solved with a set of rules (also called schemas), which matched the given proposition and then took the corresponding actions. Besides, it adopted key word match to obtain the answer.", "cite_spans": [ { "start": 110, "end": 135, "text": "(Mukherjee & Garain, 2008", "ref_id": "BIBREF11" }, { "start": 136, "end": 150, "text": "(Bobrow, 1964;", "ref_id": null }, { "start": 151, "end": 164, "text": "Slagle, 1965)", "ref_id": null }, { "start": 337, "end": 352, "text": "Fletcher (1985)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6." }, { "text": "Solving the problem with schemata was then adopted in almost every later system (Mukherjee & Garain, 2008) . In 1986, ARITHPRO was designed with an inheritance network in which word classes inherit attributes from those classes above them on a verb hierarchy (Dellarosa, 1986) . 
The late development of ROBUST (Bakman, 2007) demonstrated how it could solve free format word problems with multi-step arithmetic through splitting one single sentence into two formula propositions. In this way, transpositions of problem sentences or additional irrelevant data to the problem text do not affect the problem solution. However, it only handles state change scenario. In 2010, Ma et al. (Ma et al., 2010 ) proposed a MSWPAS system to simulate people's arithmetic multi-step addition and subtraction word problems behavior. It uses frame-based calculus and means-end analysis (AI planning) to solve the problem with pre-specified rules. In 2012, Liguda and Pfeiffer (Liguda & Pfeiffer, 2012) proposed a model based on augmented semantic networks to represent the mathematical structure behind word problems. It read and solved mathematical text problems from German primary school books. With more attributes associated with the semantic network, it claimed that the system was able to solve multi-step word problems and complex equation systems and was more robust to irrelevant information. Also, it was declared that it was able to solve all classes of problems that could be solved by the schema-based systems, and could solve around 20 other classes of word problems from a school book which were in most cases not solvable by other systems.", "cite_spans": [ { "start": 80, "end": 106, "text": "(Mukherjee & Garain, 2008)", "ref_id": "BIBREF11" }, { "start": 259, "end": 276, "text": "(Dellarosa, 1986)", "ref_id": null }, { "start": 303, "end": 324, "text": "ROBUST (Bakman, 2007)", "ref_id": null }, { "start": 681, "end": 697, "text": "(Ma et al., 2010", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6." }, { "text": "Recently, Hosseini et al. (2014) proposed a Container-Entity based approach, which solved the math word problem with a state transition sequence. Each state consists of a set of containers, and each container specifies a set of entities identified by a few heuristic rules. How the quantity of each entity type changes depends on the associated verb category. Each time a verb is encountered, it will be classified (via a SVM, which is the only statistical module adopted) into one of the seven categories which pre-specify how to change the states of associated entities. Therefore, logic inference is not adopted. Furthermore, the anaphora and co-reference are left un-resolved, and it only handles addition and subtraction. Kushman et al. (2014) proposed the first statistical approach, which used a few heuristic rules to extract the algebra equation templates (consists of variable slots and number slots) from a set of problems annotated with equations. For a given problem, all possible variable/number slots are identified first. Afterwards, they are aligned with those templates. The best combination of the template and alignment (scored with a statistical model) is then picked up. Finally, the answer is obtained from those equations instantiated from the selected template. However, without really understanding the problem (i.e., no semantic analysis is performed), the performance that this approach can reach is limited; also, it is sensitive to those irrelevant statements (Hosseini et al., 2014) . Furthermore, it can only solve algebra related problems. Last, it cannot explain how the answer is obtained.", "cite_spans": [ { "start": 727, "end": 748, "text": "Kushman et al. 
(2014)", "ref_id": null }, { "start": 1490, "end": 1513, "text": "(Hosseini et al., 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6." }, { "text": "The most recent statistical approach was proposed by Roy et al. 2015, which used 4 cascade statistical classifiers to solve the elementary school math word problems: quantity identifier (used to find out the related quantities), quantity pair classifier (used to find out the operands), operation classifier (used to pick an arithmetic operation), and order classifier (used to order operands for subtraction and division cases). It not only shares all the drawbacks associated with Kushman et al. 2014, but also limits itself for allowing only one basic arithmetic operation (i.e., among addition, subtraction, multiplication, division) with merely 2 or 3 operand candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6." }, { "text": "Our proposed approach differs from those previous approaches by combining the statistical framework with logic inference. Besides, the tag-based approach adopted for selecting the appropriate information also distinguishes our approach from that of others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6." }, { "text": "A tag-based statistical framework is proposed in this paper to perform understanding and reasoning for solving MWP. It first analyzes the body and question texts into their corresponding semantic trees (with anaphora/ellipse resolved and semantic role labeled), and then converted them into their associated tag-based logic forms. Afterwards, the inference (based on the question logic form) is performed on the logic facts derived from the body text. The combination of the statistical frame and logic inference distinguishes the proposed approach from other approaches. Comparing to those rule-based approaches, the proposed statistical approach alleviates the ambiguity resolution problem; also, our tag-based approach provides the flexibility of handling various kinds of related questions with the same body logic form. On the other hand, comparing to those purely statistical approaches, the proposed approach is more robust to the irrelevant information and could more accurately provide the answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "The contributions of our work mainly lie in: (1) proposing a tag-based logic representation which makes the system less sensitive to the irrelevant information and could provide answer more precisely; (2) proposing a statistical framework for performing reasoning from the given text. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "Since Big Data mainly aims to explore the correlation between surface features but not their underlying causality relationship (Mayer-Sch\u00f6nberger & Cukier, 2013), the \"Big Mechanism\" program 1 has been proposed by DARPA to find out \"why\" behind the big data. However, the pre-requisite for it is that the machine can read each document and learn its associated knowledge, which is the task of Machine Reading (MR) (Strassel et al., 2010) . 
Therefore, the Natural Language and Knowledge Processing Group (under the Institute of Information Science) of Academia Sinica formally launched a 3-year MR project (from January 2015) to attack this problem.", "cite_spans": [ { "start": 414, "end": 437, "text": "(Strassel et al., 2010)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Since a domain-independent MR system is difficult to build, the Math Word Problem (MWP) (Mukherjee & Garain, 2008) is chosen as our first test case to study MR. The main reason for that is that it not only adopts less complicated syntax but also requires less amount of domain knowledge; therefore, the researcher can focus more on text understanding and \uf02a Institute of Information Science , Academia Sinica 128 Academia Road, Section 2, Nankang, Taipei 11529, Taiwan E-mail: { joecth; lyc; kysu}@iis.sinica.edu.tw 1 http://www.darpa.mil/Our_Work/I2O/Programs/Big_Mechanism.aspx reasoning (instead of looking for a wide coverage parser and acquiring considerable amount of domain knowledge). We thus also choose it as the goal of the first year for studying the MR problem, and propose a tag-based statistical approach (Lin et al., 2015) to find out the answer.", "cite_spans": [ { "start": 88, "end": 114, "text": "(Mukherjee & Garain, 2008)", "ref_id": "BIBREF11" }, { "start": 819, "end": 837, "text": "(Lin et al., 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The architecture of this proposed approach is shown in Figure 1 . First, every sentence in the MWP, including both body text and the question text, is analyzed by the Language Analysis module, which transforms each sentence into its corresponding semantic representation tree. The sequence of semantic representation trees is then sent to the Problem Resolution module, which adopts logic inference approach, to obtain the answer of each question in the MWP. Finally, the Explanation Generation module will explain how the answer is found (in natural language text) according to the given reasoning chain (Russell & Norvig, 2009 ) (which includes all related logic statements and inference steps to reach the answer). As depicted in Figure 1 (b), the Problem Resolution module in the proposed system consists of three components: Solution Type Classifier (TC), Logic Form Converter (LFC) and Inference Engine (IE). The TC is responsible to assign a math operation type for every question of the MWP. In order to perform logic inference, the LFC first extracts the related facts from the given semantic representation tree and then represents them in First Order Logic (FOL) predicates/functions form (Russell & Norvig, 2009) . In addition, it is also responsible for transforming every question into an FOL-like utility function according to the assigned solution type. Finally, according to inference rules, the IE derives new facts from the old ones provided by the LFC. Besides, it is also responsible for providing utilities to perform math operations on related facts.", "cite_spans": [ { "start": 605, "end": 628, "text": "(Russell & Norvig, 2009", "ref_id": "BIBREF16" }, { "start": 1200, "end": 1224, "text": "(Russell & Norvig, 2009)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 55, "end": 63, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 733, "end": 741, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
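The dataflow among the three Problem Resolution components can be summarized with the schematic sketch below. Every function body is a stub of ours (the real TC, LFC, and IE are the statistical and logic-based modules described above), so only the interfaces and the order of the calls are meant to be illustrative.

```python
def solution_type_classifier(question_tree):
    # TC: assign a math operation type to the question (stubbed here).
    return "Summation"

def logic_form_converter(body_trees, question_tree, solution_type):
    # LFC: extract FOL-like facts from the body text and build a utility
    # call for the question according to the assigned solution type (stubbed).
    body_facts = ["quan(q1,朵,百合)=600", "quan(q2,朵,百合)=186"]
    question_lf = "Sum(quan(?q,朵,百合), verb(?q,賣出)&agent(?q,花店))"
    return body_facts, question_lf

def inference_engine(body_facts, question_lf):
    # IE: derive new facts with inference rules and evaluate the utility call;
    # here it only returns a canned answer and reasoning chain.
    return 420, ["fact: ...", "rule: ...", "utility: Sum -> 420"]

def solve(body_trees, question_tree):
    st = solution_type_classifier(question_tree)
    facts, question_lf = logic_form_converter(body_trees, question_tree, st)
    answer, reasoning_chain = inference_engine(facts, question_lf)
    return answer, reasoning_chain

print(solve(body_trees=[], question_tree=None))
```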
}, { "text": "In addition to understanding the given text and then performing inference on it, a very desirable characteristic of an MWP solver (also an MR system) is being able to explain how the answer is obtained in a human comprehensible way. This task is done by the Explanation Explanation Generation for a Math Word Problem Solver 29 Generator (EG) module, which is responsible to explaining the associated reasoning steps in fluent natural language from the given reasoning chain (Russell & Norvig, 2009) . In other words, explanation generation is the process of constructing natural language outputs from a non-linguistic input, and is a task of Natural Language Generation (NLG).", "cite_spans": [ { "start": 474, "end": 498, "text": "(Russell & Norvig, 2009)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Various applications of NLG (such as weather report) have been proposed before (Halliday, 1985; Goldberg et al., 1994; Paris & Vander Linden, 1996; Milosavljevic, 1997; Paris et al., 1998; Coch, 1998; Reiter et al., 1999) . However, to the best of our knowledge, none of them discusses how to generate the explanation for WMP, which possesses some special characteristics (e.g., math operation oriented description) that are not shared with other tasks.", "cite_spans": [ { "start": 79, "end": 95, "text": "(Halliday, 1985;", "ref_id": "BIBREF3" }, { "start": 96, "end": 118, "text": "Goldberg et al., 1994;", "ref_id": "BIBREF2" }, { "start": 119, "end": 147, "text": "Paris & Vander Linden, 1996;", "ref_id": "BIBREF12" }, { "start": 148, "end": 168, "text": "Milosavljevic, 1997;", "ref_id": "BIBREF10" }, { "start": 169, "end": 188, "text": "Paris et al., 1998;", "ref_id": "BIBREF13" }, { "start": 189, "end": 200, "text": "Coch, 1998;", "ref_id": null }, { "start": 201, "end": 221, "text": "Reiter et al., 1999)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "A typical architecture for NLG is shown at Figure 2 , which is re-drawn from Jurafsky and Martin (Jurafsky & Martin, 2000) . Under this architecture, Communicative Goal, which specifies the purpose for communication, and Knowledge Base, which specifies the content to be generated, are fed as the inputs to Discourse Planner. The Discourse Planner will then output a hierarchy form to the Surface Realizer, which further solves the issues of selecting lexicons, functional words, lexicon order in the sentence, syntactic form, subject-verb agreement (mainly required for English), tense (mainly required for English), and so on for the texts to be generated.", "cite_spans": [ { "start": 97, "end": 122, "text": "(Jurafsky & Martin, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 43, "end": 51, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To implement the Discourse Planner, D. Jurafsky (Jurafsky & Martin, 2000) proposed to adopt text schemata and rhetorical structure planning to implement the Discourse Planner. 
On the other hand, Kay proposed to implement the Surface Realizer with both Systemic Grammar, which is a part of Systemic Functional Linguistic proposed by Halliday (Halliday, 1985) , and Functional Unification Grammar (Kay, 1979) .", "cite_spans": [ { "start": 48, "end": 73, "text": "(Jurafsky & Martin, 2000)", "ref_id": "BIBREF5" }, { "start": 341, "end": 357, "text": "(Halliday, 1985)", "ref_id": "BIBREF3" }, { "start": 395, "end": 406, "text": "(Kay, 1979)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 2. A typical architecture for NLG systems (Jurafsky & Martin, 2000)", "sec_num": null }, { "text": "Since the description for math operation centering on an operator is in a relatively fixed textual format, which is disparate from other kinds of NLG tasks, those approaches mentioned above might be over-killed for the task of MWP explanation generation (and thus introduce unnecessary complexity). Therefore, we propose an operator oriented approach to search each math operator involved in the reasoning chain. For each math operator, we generate one sentence. Since explaining math operation does not require complicated syntax, a specific template is adopted to generate the text for each kind of math operator. To the best of our knowledge, this is the first approach that is specifically tailored to the MWP task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. A typical architecture for NLG systems (Jurafsky & Martin, 2000)", "sec_num": null }, { "text": "Our main contributions are listed as following,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. A typical architecture for NLG systems (Jurafsky & Martin, 2000)", "sec_num": null }, { "text": "We proposed a math operation oriented Explanation Tree for facilitating the discourse work on MWP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "We propose an operator oriented algorithm to segment the Explanation Tree into various sentences, which makes our Discourse Planner universal for MWP and independent to the language adopted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "We propose using operator-based templates to generate the natural language text for explaining the associated math operation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "The remainder of this paper is organized as follows: Section 2 introduces the framework of our Explanation Generator. Afterwards, various templates of more operators (other than SUM used in Section 2) are introduced in Section 3. Section 4 discusses the future work of our explanation system. Section 5 then reviews the related works. Finally, the conclusions are drawn in Section 6. Figure 3 shows the block diagram of our proposed EG. First, the Inference Engine generates the answer and its associated reasoning chain for the given MWP. First, to ease the operation of the EG, we convert the given reasoning chain into its corresponding Explanation Tree (shown at Figure 5 ) to center on each operator appearing in the reasoning chain (such that it is convenient to perform sentence segmentation later). Next, the Explanation Tree will be fed as input to the Discourse Planner, which divides the given Explanation Tree into various subtrees such that each subtree will generate one explanation sentence later. 
Finally, the Function Word Insertion & Ordering Module will insert the necessary functional words and order them with those extracted content words (from the segmented Explanation Subtee) to generate the Explanation Texts.", "cite_spans": [], "ref_spans": [ { "start": 384, "end": 392, "text": "Figure 3", "ref_id": "FIGREF3" }, { "start": 667, "end": 675, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Following example demonstrates how the framework works. And Figure 4 (a) reveals more details for each part illustrated in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 68, "text": "Figure 4", "ref_id": "FIGREF7" }, { "start": 123, "end": 131, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Figure 3. Block Diagram of the proposed MWP Explanation Generator", "sec_num": null }, { "text": "[Sample-1] \u963f\u5fd7\u8cb7\u4e00\u81fa\u51b0\u7bb1\u548c\u4e00\u81fa\u96fb\u8996\u6a5f\uff0c\u4ed8 2 \u758a\u4e00\u842c\u5143\u9214\u7968\u30016 \u5f35\u5343\u5143\u9214\u7968\u548c 13 \u5f35\u767e\u5143 \u9214\u7968\uff0c\u963f\u5fd7\u5171\u4ed8\u4e86\u5e7e\u5143\uff1f", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3. Block Diagram of the proposed MWP Explanation Generator", "sec_num": null }, { "text": "(A-Zhi bought a refrigerator and a TV. He paid 2 stacks of ten-thousand-dollar bill, six thousand-dollar bills and 13 hundred-dollar bills. How many dollars did A-Zhi pay in total?) Facts Generation in Figure 4 (a) shows how the body text is transformed into meaningful logic facts to perform inference. In math problems, the facts are mostly related to quantities. The generated facts are either the quantities explicitly appearing in the sentence of the problem or the implicit quantities deduced by the IE. Those generated facts are linked together within the reasoning chain constructed by the IE as shown in Figure 4 (b). Within this framework, the discourse planner is responsible for selecting the associated content for each sentence to be generated. A typical reasoning chain, represented with an Explanation Tree structure, is shown at Figure 4 (b). The operator-node (OP_node) layers and quantity-node (Quan_node) layers are interleaved within the Explanation Tree, and serving as the input data structure to OP Oriented Algorithm in Discourse Planner, which will be further presented as pseudo code in Section 2.2 (Algorithm 1). As shown at Figure 4 (b), the (#a, #b) pair denotes facts derived from the body sentences. The OP means the operator used to deduce implicit facts and represented as non-leaf circle nodes. Each \"G?\" expresses a sentence to be generated. Given the reasoning chain, the first step is to decide how many sentences will be generated, which corresponds to the Discourse Planning phase (Jurafsky & Martin, 2000) of the traditional NLG task. Currently, we will generate one sentence for each operator shown in the reasoning chain. 
For the above example, since there are four operators (three IE-Multiplications 2 and one LFC-Sum), we will have four corresponding sentences; and the associated nodes (i.e., content) are circled by \"G?\" for each sentence in the figure.", "cite_spans": [ { "start": 1521, "end": 1546, "text": "(Jurafsky & Martin, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 202, "end": 210, "text": "Figure 4", "ref_id": "FIGREF7" }, { "start": 613, "end": 621, "text": "Figure 4", "ref_id": "FIGREF7" }, { "start": 846, "end": 854, "text": "Figure 4", "ref_id": "FIGREF7" }, { "start": 1153, "end": 1161, "text": "Figure 4", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Figure 3. Block Diagram of the proposed MWP Explanation Generator", "sec_num": null }, { "text": "Furthermore, Figure 5 shows that three sets of facts are originated from the 2 nd body sentence (indicated by three S2 nodes). Each set contains a corresponding quantity-fact (i.e., q1(\u758a), q2(\u5f35), and q3(\u5f35)) and its associated object (i.e., n1, n2, and n3). For example, the first set (the left most one) contains q1(\u758a) (for \"2 \u758a\") and n1 (for \"\u4e00\u842c\u5143\u9214\u7968\"). This figure also shows that the outputs of three IE-Multiplication operators (i.e., \"20,000 \u5143\", \"6,000 \u5143\", and \"1,300 \u5143\") will be fed into the last LFC-Sum to get the final desired result \"27,300 \u5143\" (denoted by the \"Ans(SUM)\" node in the figure).", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 21, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Figure 3. Block Diagram of the proposed MWP Explanation Generator", "sec_num": null }, { "text": "After having given the corresponding content (associated with those nodes within the big circle), we need to generate the corresponding sentence with appropriate function words added. This step corresponds to the Surface Realization phase (Jurafsky & Martin, 2000) in NLG. Currently, since the syntax of the explanation text of our task is not complicated, we use various templates to take into account the pre-specified fillers (\" \") and the slots to be filled (\" \" and \" \") and their order for generating the desired explanation sentence. Figure 4 (c) shows how a sentence is generated from a selected template based on the given Explanation Tree. ", "cite_spans": [ { "start": 239, "end": 264, "text": "(Jurafsky & Martin, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 541, "end": 549, "text": "Figure 4", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Figure 3. Block Diagram of the proposed MWP Explanation Generator", "sec_num": null }, { "text": "The original reasoning chain resulted from the IE is actually a stream of chunks (as shown in Figure 4 (a)), in which the causal chain is implicitly embedded. Therefore, it is not suitable for explaining inference steps. The Explanation Tree Builder is thus adopted to build up the Explanation Tree, which centers on the math operations involved in the inference process, to explicitly express the causal chain implied.", "cite_spans": [], "ref_spans": [ { "start": 94, "end": 102, "text": "Figure 4", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Explanation Tree Builder", "sec_num": "2.1" }, { "text": "The Explanation Tree Builder first receives various facts, as a stream of chunks, from the IE. It then creates the nodes of the Explanation Tree according to the content of those chunks. 
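A minimal data structure for such a tree can be sketched as follows; the class and field names (QuanNode, OpNode, derive) are ours, and the Sample-1 tree is built by hand here rather than converted from a real chunk stream.

```python
class QuanNode:
    def __init__(self, value, unit, source=None):
        self.value, self.unit, self.source = value, unit, source  # source: body sentence id
        self.child_op = None        # the OP_node (if any) that produced this quantity

class OpNode:
    def __init__(self, op, children):
        self.op = op                # e.g. "MUL", "SUM"
        self.children = children    # operand Quan_nodes

def derive(op, children, unit):
    """Attach an OP_node under a new Quan_node holding the derived value."""
    value = (sum if op == "SUM" else lambda xs: xs[0] * xs[1])([c.value for c in children])
    parent = QuanNode(value, unit)
    parent.child_op = OpNode(op, children)
    return parent

# Sample-1: 2 x 10000, 6 x 1000, 13 x 100, then SUM.
m1 = derive("MUL", [QuanNode(2, "疊", "S2"), QuanNode(10000, "元", "S2")], "元")
m2 = derive("MUL", [QuanNode(6, "張", "S2"), QuanNode(1000, "元", "S2")], "元")
m3 = derive("MUL", [QuanNode(13, "張", "S2"), QuanNode(100, "元", "S2")], "元")
answer = derive("SUM", [m1, m2, m3], "元")
print(answer.value, answer.unit)   # 27300 元
```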
After the Explanation Tree is created, it serves as the corresponding reasoning chain for the following process since then.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explanation Tree Builder", "sec_num": "2.1" }, { "text": "With the root node serving as the Answer, which is a Quan_node, the Explanation Tree is interleaved with Quan_node layers and OP_node layers, as shown in Figure 4 (b). Each OP_node has one Quan_node as its parent node, and has at least one Quan_node as it's child node. On the other hand, each Quan_node (except the root node) serves as the input to an OP_node. With the Explanation Tree, the work of discourse planning can be simply done via traversing those OP_nodes, which will be described in the following section.", "cite_spans": [], "ref_spans": [ { "start": 154, "end": 162, "text": "Figure 4", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Explanation Tree Builder", "sec_num": "2.1" }, { "text": "In NLG, the discourse planner selects the content from the knowledge base according to what should be presented in the output text, and then structures them coherently. To facilitate the explanation process, we first convert the given reasoning chain to its corresponding Explanation Tree, as shown at Figure 4(b) to ease the following operations. The Explanation Tree is adopted because its structure allows us to regard the OP as a basis to do sentence segmentation for the deductive steps adopted in MWP. Within the Explanation Tree, the layers of OP nodes are interleaved with the layers of quantity nodes, and the root-node is the quantity node which denotes the desired Answer. After having constructed the Explanation Tree, we need to know how to group the nodes within the tree to make a sentence. As one can imagine, there are various ways to combine different quantities and operators (within the Explanation Tree) into a sentence: you can either explain several operations within one complicated sentence, or explain those operations with several simple sentences. Discourse planner therefore controls the process for generating the discourse structure, which mainly decides how to group various Explanation Tree nodes into different discourse segments. The proposed OP Oriented Algorithm, as shown above, is introduced to organize various Explanation Tree nodes into different groups (each of them will correspond to a sentence to be generated). Basically, it first locates the lowest operation node, and then traverses each operation node from left to right (with the same parent node) and bottom to top. For each operation node found, it will group the related nodes around that operation node into one discourse segment (i.e., one sentence). For each group, it will call the Surface Realizer module to generate the final sentence. It is named \"OP oriented\" because every generated sentence in the explanation text is based on one operator, which serves as a central hub to associate all quantities directly linked with it. Also, the template for building up a sentence is selected based on the associated operator, which will be further introduced in Section 2.3. Figure 6 shows three grouped explanation subtrees within the original explanation tree. 
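A simplified rendering of this OP-oriented grouping is sketched below (it is not Algorithm 1 verbatim, and the tuple encoding of the Explanation Tree is ours); each group it emits corresponds to one explanation sentence, in the same bottom-up, left-to-right order, and the grouped subtrees of Figure 6 discussed next correspond to such groups.

```python
# Self-contained sketch: an Explanation Tree as nested tuples.
# A leaf quantity is ("quan", value); a derived quantity is ("quan", value, (op, [children])).
tree = ("quan", 27300, ("SUM", [
    ("quan", 20000, ("MUL", [("quan", 2), ("quan", 10000)])),
    ("quan", 6000,  ("MUL", [("quan", 6), ("quan", 1000)])),
    ("quan", 1300,  ("MUL", [("quan", 13), ("quan", 100)])),
]))

def op_oriented_groups(node):
    """Group nodes per operator, children before parents and left to right,
    so that each group becomes one explanation sentence (lowest OP first)."""
    groups = []
    if len(node) == 3:                      # this quantity was derived by an operator
        op, children = node[2]
        for child in children:              # recurse left to right
            groups.extend(op_oriented_groups(child))
        operands = [c[1] for c in children]
        groups.append({"op": op, "operands": operands, "result": node[1]})
    return groups

for i, g in enumerate(op_oriented_groups(tree), 1):
    print(i, g)
# 1..3: the three MUL groups, 4: the SUM group -> four sentences for Sample-1
```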
The arrows between SUM node and its children show the sequence of those subtrees to be presented, and the numbers imposed on tree nodes indicate the indexes of the corresponding sentence to be generated.", "cite_spans": [ { "start": 302, "end": 313, "text": "Figure 4(b)", "ref_id": null } ], "ref_spans": [ { "start": 2179, "end": 2187, "text": "Figure 6", "ref_id": "FIGREF10" } ], "eq_spans": [], "section": "Sentence Segmenter (Discourse Planner)", "sec_num": "2.2" }, { "text": "The sentence segmenter module discussed previously only partitions the explanation tree into various Explanation Subtrees. It has no control over how the components within an explanation subtree should be positioned. Also, we frequently need to insert extra functional words (sometimes even verbs) such as \"\u5c31\u662f\"\u3001\"\u5171\u662f\"\u3001\"\u7b49\u540c\u65bc\" (\"are\", \"equal\", \"mean\") and the like to have a fluent sentence. For example, in Sample-1, to explain what \"2 \u758a\u4e00\u842c\u5143\" (2 stacks of 10-thousand-dollar bill) means, we need an extra functional word \"\u5c31\u662f\" (\"are\") (or \"\u5171\u662f\"\u3001\"\u7b49\u540c\u65bc\" (\"equal\", \"mean\") and the like) to make the sentence readable. Furthermore, people prefer to add \"\u6240\u4ee5\" (\"Thus\"), to explicitly hint that the following text is closely related to the answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Function Word Insertion and Ordering Module (Surface Realizer)", "sec_num": "2.3" }, { "text": "Since the syntax for explaining math operation is not complicated, we adopt the template approach to accomplish both tasks mentioned above in the same time. Currently, for each math operator, a corresponding template is manually created, which contains various slots that will be filled with contents from the nodes in Explanation Tree. Figure 6 shows the connection between a template and its associated Explanation Tree for Sample-1. It comprises three kinds of nodes: the answer-node (shown by the rectangle ) which denotes the final answer and is basically a Quan_node; the OP_nodes (shown by the diamond ) which denote associated operators; and the quantity-nodes (shown by the rounded-corner rectangle ) which represent the values extracted by the LFC or inferred by the IE.", "cite_spans": [], "ref_spans": [ { "start": 337, "end": 345, "text": "Figure 6", "ref_id": "FIGREF10" } ], "eq_spans": [], "section": "Function Word Insertion and Ordering Module (Surface Realizer)", "sec_num": "2.3" }, { "text": "Take the last explanation sentence of the above sample 1 as an example, \u6240\u4ee5\uff0c\u5171\u4ed8\u4e86 20000 + 6000 + 1300 = 27300 \u5143 Since its associated operator is \"SUM\", the template of \"SUM\" is first fetched and there are four slots to be filled. The arrow then directs the flow to \u2460 for \"20,000\" to be printed out and then SUM for the \"+\". Next on, the flow is directed to the middle child node, \u2461, and \"6,000\" is therefore outputted as the subsequent component in this sentence, and then it directs back to SUM again to print \"+\". Finally, the flow directs to the most right-hand-side node, \u2462, then goes back to SUM; the \"1,300\" is then popped out accordingly. We don't print out the \"+\" for the SUM this time since we know there's no more child node below the SUM node that hasn't been traversed. 
After all the child nodes are traversed and their contents are copied into the associated slots, the parent node, \u2463, is traversed and the text \"=27,300 \u5143\" is printed out to complete the explanation sentence. Algorithm 2 shows the Function Word Insertion and Ordering algorithm, which illustrates how the surface realizer is implemented. After the list S is initiated at Line 4, the operation type of the OP_node is checked at Line 7 to select a corresponding template, which is assigned to OPtemplate at Line 8 (each kind of operator has its own template). Take Sample-1 for example, the template shown in Figure 6 (a) is selected for the \"SUM\" operator. Following the \"Arrow\" notation mentioned above, contents of the OP_node and its connecting nodes are put into List S at Line 9. Later on, the nodes in List S are filled into the template described above at Line 10, which corresponds to the Benchmark shown in Figure 6(b) . Finally, at Line 12, the slots of OPtemplate are all filled with appropriate contents. It then returns them as an explanation sentence string. Since each question will be processed separately and a reasoning chain will be associated with only one question, there is no restriction for the number of allowable question sentences (as the proposed algorithm only handles one reasoning chain each time).", "cite_spans": [], "ref_spans": [ { "start": 1386, "end": 1394, "text": "Figure 6", "ref_id": "FIGREF10" }, { "start": 1694, "end": 1705, "text": "Figure 6(b)", "ref_id": "FIGREF10" } ], "eq_spans": [], "section": "Explanation Generation for a Math Word Problem Solver 37", "sec_num": null }, { "text": "As described in the previous section, the template adopted is closely related to the associated math operation. However, various templates share a meta-form with some common characteristics:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some Other Associated Templates", "sec_num": "3." }, { "text": "Explanation Generation for a Math Word Problem Solver 39 (1) Each operator generates a sentence.", "cite_spans": [ { "start": 57, "end": 60, "text": "(1)", "ref_id": "BIBREF106" } ], "ref_spans": [], "eq_spans": [], "section": "Some Other Associated Templates", "sec_num": "3." }, { "text": "(2) Each sentence is generated from the operator and the quantities connected to it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some Other Associated Templates", "sec_num": "3." }, { "text": "(3) The operators and the quantities are inserted into the slots specified in the template. 4The instantiated template serves as the corresponding explanation sentence string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some Other Associated Templates", "sec_num": "3." }, { "text": "Apart from the OP_SUM, this section introduces a few other templates associated with OP_MUL, OP_COMMON_DIVISION, and OP_UNIT_TRANS as follows. OP_MUL is related to Sample-1 mentioned above (Figure 7) . OP_COMMON_DIV is associated with Sample-2 ( Figure 8) . Also, Figure 9 shows the template associated with \"OP_UNIT_TRANS\" adopted in Sample-3. [Sample-2] 1 \u500b\u5e73\u5e74\u6709 365 \u5929\uff0c3 \u500b\u5e73\u5e74\u5171\u6709\u5e7e\u5929\uff1f (One common-year (non-leap year) has 365 days. How many days do 3 common-year have?) 
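To make the template idea concrete, the following is a minimal sketch of operator-template realization. The template strings and helper names are illustrative; only the SUM wording follows the Sample-1 explanation sentence shown earlier, and the actual templates for OP_MUL, OP_COMMON_DIV, and OP_UNIT_TRANS (Figures 7-9) are not reproduced here.

```python
# Operator-specific templates (illustrative; only SUM follows Sample-1's wording).
# Slots: {expr} is filled with the operand sequence, {result}/{unit} with the parent quantity.
TEMPLATES = {
    "SUM": "所以，共付了{expr} = {result} {unit}",
    "MUL": "{expr} = {result} {unit}",
}

def realize(op, operands, result, unit):
    """Fill the operator's template: traverse the child quantities in order,
    join them with the operator symbol, then append the parent quantity."""
    symbol = {"SUM": " + ", "MUL": " × "}[op]
    expr = symbol.join(str(v) for v in operands)
    return TEMPLATES[op].format(expr=expr, result=result, unit=unit)

print(realize("SUM", [20000, 6000, 1300], 27300, "元"))
# 所以，共付了20000 + 6000 + 1300 = 27300 元
```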
", "cite_spans": [], "ref_spans": [ { "start": 189, "end": 199, "text": "(Figure 7)", "ref_id": "FIGREF12" }, { "start": 246, "end": 255, "text": "Figure 8)", "ref_id": "FIGREF14" }, { "start": 264, "end": 272, "text": "Figure 9", "ref_id": null } ], "eq_spans": [], "section": "Some Other Associated Templates", "sec_num": "3." }, { "text": "Currently, 11 types of operators are supported. They are shown at Figure 10 . After having manually checked 37 MWP problems with their associated operations specified in Figure 10 , value1,value2) =value FloorDiv(value1,value2)=value CeilDiv(value1,value2)=value Surplus(value1,value2)=value ArgMin(arg,function,condition)=value ArgMax(arg,function,condition)=value UnitTrans(Old-Fact, New-Fact)=value", "cite_spans": [ { "start": 182, "end": 196, "text": "value1,value2)", "ref_id": null } ], "ref_spans": [ { "start": 66, "end": 75, "text": "Figure 10", "ref_id": "FIGREF1" }, { "start": 170, "end": 179, "text": "Figure 10", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Current Status", "sec_num": "4." }, { "text": "Operation Utilities Sum(function[,condition])=value Add(value1,value2)=value Subtract(value1,value2)=value Diff(value1,value2)=value Multiply(", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Current Status", "sec_num": "4." }, { "text": "it is observed that the proposed approach could generate fluent explanation for all of them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10. Supported Operators by EG", "sec_num": null }, { "text": "Earlier reported NLG applications include generating weather reports (Goldberg et al., 1994; Coch, 1998) , instructions (Paris et al., 1998; Wahlster et al., 1993) , encyclopedia-like descriptions (Milosavljevic, 1997; Dale et al., 1998) , letters (Reiter et al., 1999) , and an alternative to machine translation (Hartley & Paris, 1997) which adopts the techniques of connectionist (Ward, 1994) and statistical techniques (Langkilde & Knight, 1998) . However, none of them touched the problem of generating explanation for MWPs.", "cite_spans": [ { "start": 69, "end": 92, "text": "(Goldberg et al., 1994;", "ref_id": "BIBREF2" }, { "start": 93, "end": 104, "text": "Coch, 1998)", "ref_id": null }, { "start": 120, "end": 140, "text": "(Paris et al., 1998;", "ref_id": "BIBREF13" }, { "start": 141, "end": 163, "text": "Wahlster et al., 1993)", "ref_id": "BIBREF18" }, { "start": 197, "end": 218, "text": "(Milosavljevic, 1997;", "ref_id": "BIBREF10" }, { "start": 219, "end": 237, "text": "Dale et al., 1998)", "ref_id": "BIBREF1" }, { "start": 248, "end": 269, "text": "(Reiter et al., 1999)", "ref_id": "BIBREF14" }, { "start": 314, "end": 337, "text": "(Hartley & Paris, 1997)", "ref_id": "BIBREF4" }, { "start": 383, "end": 395, "text": "(Ward, 1994)", "ref_id": "BIBREF19" }, { "start": 423, "end": 449, "text": "(Langkilde & Knight, 1998)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5." }, { "text": "Previous approaches of natural language generation typically consist of a discourse planner that plans the structure of the discourse, and a surface realizer that generates the real sentences (Jurafsky & Martin, 2000) . D. Jurafsky adopted the model of text schemata and rhetorical relation planning for discourse planning. 
Approaches for surface realizer include Systemic Grammar, which is a part of Systemic Functional Linguistic proposed by Halliday (Halliday, 1985) , and Functional Unification Grammar (FUG) by Kay (Kay, 1979) .", "cite_spans": [ { "start": 192, "end": 217, "text": "(Jurafsky & Martin, 2000)", "ref_id": "BIBREF5" }, { "start": 453, "end": 469, "text": "(Halliday, 1985)", "ref_id": "BIBREF3" }, { "start": 520, "end": 531, "text": "(Kay, 1979)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5." }, { "text": "Different from those previous approaches for Discourse Planner (Reiter et al., 1999) , we solved the EG for MWP problem through first buildings the Explanation Tree, which is particularly suitable for representing math based problems. The OP oriented algorithm is then proposed for solving the discourse planning work in MWP. Furthermore, different from the FUG proposed by Kay (Kay, 1979) , the Function Word Insertion and Ordering Module adopts the OP based template for our Surface Realizer.", "cite_spans": [ { "start": 63, "end": 84, "text": "(Reiter et al., 1999)", "ref_id": "BIBREF14" }, { "start": 378, "end": 389, "text": "(Kay, 1979)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5." }, { "text": "Since the EG for MWP differs from that of other NLG applications in that the inference process centers on the mathematical operation, an operator oriented algorithm is required. In the proposed framework, we first introduce the Explanation Tree to explicitly show how the answer of a math problem is acquired. Afterwards, an OP Oriented Algorithm performs sentence segmentation (act as Discourse Planner) for MWP. Lastly, for each operator, a corresponding template is adopted to achieve surface string realization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Our Explanation Generator of MWP solver is able to explain how the answer is obtained in a human comprehensible way, where the related reasoning steps can be systematically explained with fluent natural language. The main contributions of this paper are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Proposing the Explanation Tree for facilitating the discourse planning on MWP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "Proposing an Operator oriented algorithm for structuring output sentence sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Proposing the OP oriented templates for generating final explanation strings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "With the advancement of information and communication technology, the information we obtained is very abundant and multivariate. Especially, in the recent 15 years, many type of the Internet media grow up so that people can get large amount of the information in a short time. These internet media include Wikipedia, blogs and the recently popular social medial are usually the long text and have the complete content. While the short text social media, such as Twitter, become very popular in the recent years. 
The reason is that these short text social media provide a very convenient way to share the people feeling and thinking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Generally, these Internet media deliver the people thinking by using the text. However, the large amount of text on the Internet cause people hard to understand the meaning in a short limit time. To solve the problem, many document summarization technologies have been proposed. Among them, topic models summarize the context in large amount of documents into several topic terms. By reading these topic terms, people will understand the content in a short time. Topic model can be performed by the vector space model or the probability model. In the recent years, the probability models such as Probabilistic Latent Semantic Analysis (pLSA) (Hofmann, 1999) and Latent Dirichlet Allocation (LDA) (Blei et al., 2003) are very popular because the probability models base on the document generation process. The inspirations of the document generation process come from the human written articles. When a person writes an article, he or she will inspire some thinking in mind, then extend these thinking into some related words. Finally, they write down these words to complete an article. Probability topic models simulate the behavior of above document generating process. In the view of the vectorization of the probability topic models, when we have a text corpus, we have known the documents and its words distribution by statistic the word vector. Then, the probability topic models split the document-word matrix into the document-topic and topic-word matrices. The distribution of the document-topic matrix describes that the degree of each document belongs each topic while the topic-word matrix describes the degree of each word belongs each topic. The \"topic\" in these two matrices is the latent factor as the human thinking.", "cite_spans": [ { "start": 642, "end": 657, "text": "(Hofmann, 1999)", "ref_id": "BIBREF20" }, { "start": 696, "end": 715, "text": "(Blei et al., 2003)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In essence, the topic models capture the word co-occurrence information and these highly co-occurrence words are put together to compose a topic (Divya et al., 2013; Mimno et al., 2011) . So, the key to find out high quality topics is that the corpus must contain a large amount of word co-occurrence information and the topic model has the ability to correctly capture the amount of the word co-occurrence. However, the traditional topic models work well in the long text corpus but work poorly in short text corpus. The reason is that the original intention of LDA is designed to model the long text corpus. Exactly, LDA capture the word co-occurrence in document-level (Divya et al., 2013; Yan et al., 2013) , but there are no enough words to well judge the word co-occurrence in document-level in a short text document. Figure 1 is an example which shows the difference of the topic model in between the long text and short text corpus. In the long text corpus, each document provides a lot of word co-occurrence information, so that LDA can well capture these information to discover the high quality topics. 
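As a concrete example of the document-topic and topic-word factorization view described above, the sketch below fits LDA on a toy corpus using scikit-learn; the corpus, topic number, and other parameter values are assumptions chosen purely for illustration.

```python
# Sketch: obtaining the document-topic and topic-word matrices with LDA.
# Uses scikit-learn; the toy corpus and hyper-parameters are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "apple banana fruit market",
    "banana fruit smoothie recipe",
    "stock market trading price",
    "market price stock investor",
]

# Document-word count matrix (bag of words).
X = CountVectorizer().fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)        # document-topic distribution
topic_word = lda.components_            # (unnormalized) topic-word weights

print(doc_topic.shape)   # (4 documents, 2 topics)
print(topic_word.shape)  # (2 topics, vocabulary size)
```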
While in the short text document, there are no enough words in a Word Co-occurrence Augmented Topic Model in Short Text 47 single document to discover the word co-occurrence information.", "cite_spans": [ { "start": 145, "end": 165, "text": "(Divya et al., 2013;", "ref_id": "BIBREF22" }, { "start": 166, "end": 185, "text": "Mimno et al., 2011)", "ref_id": "BIBREF23" }, { "start": 672, "end": 692, "text": "(Divya et al., 2013;", "ref_id": "BIBREF22" }, { "start": 693, "end": 710, "text": "Yan et al., 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 824, "end": 832, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To overcome above problems in short text, many researchers consider a simpler topic model, mixture of unigrams model. Mixture of unigrams model samples topics in global corpus level (Nigam et al., 2000; Zhao et al., 2011) . More specifically, the word co-occurrence in document-level means that the amount of the word co-occurrence relation comes from a single document. On the contrary, the word co-occurrence in corpus-level means that the amount of the word co-occurrence relation comes from a full corpus which contains many documents. Mixture of unigrams overcomes the lack of words in the short text documents. Further, Xiaohui Yan et al. proposed the Bi-term Topic Model (BTM) (Yan et al., With the advancement of information and communication technology, the information we obtained is much abundant and multivariate.", "cite_spans": [ { "start": 182, "end": 202, "text": "(Nigam et al., 2000;", "ref_id": "BIBREF25" }, { "start": 203, "end": 221, "text": "Zhao et al., 2011)", "ref_id": "BIBREF26" }, { "start": 684, "end": 696, "text": "(Yan et al.,", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1. An example of LDA in the long text and short text corpus", "sec_num": null }, { "text": "Especially in the recent years, many types of the Internet media grows up so that people can get large amount of the information in a short time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. An example of LDA in the long text and short text corpus", "sec_num": null }, { "text": "With the advancement of information and communication technology, he information we obtained is much abundant and multivariate. Especially n the recent years, many types of the Internet media grows up so that eople can get large amount of the information in a short time. Cheng et al., 2014) which directly model the word co-occurrence and use the corpus-level bi-term to overcome the lack of the text information problem. A bi-term is an unordered word pair co-occurring in a short text document. The major advantage of BTM is that 1) BTM model the word co-occurrence by using the explicit bi-term, and 2) BTM aggregate these word co-occurrence patterns in the corpus for topic discovering (Yan et al., 2013; Cheng et al., 2014) . BTM abandons the document-level directly. A topic in BTM contains several bi-term and a bi-term crosses many documents. BTM emphasizes that the co-occurrence information comes from all bi-terms in whole corpus. 
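The bi-term construction that BTM pools over the whole corpus can be sketched as follows; the stop-word list and the two example documents are assumptions, but the procedure (all unordered word pairs within one short document, aggregated at corpus level) follows the description above.

```python
# Sketch: extracting corpus-level bi-terms from short documents.
# The stop-word list and documents are illustrative assumptions.
from itertools import combinations
from collections import Counter

STOPWORDS = {"i", "a", "the"}

def biterms(doc):
    """Return all unordered word pairs (bi-terms) of one short document."""
    words = [w for w in doc.lower().split() if w not in STOPWORDS]
    return [tuple(sorted(pair)) for pair in combinations(words, 2)]

corpus = ["I visit apple store", "apple store opens today"]

biterm_counts = Counter()
for doc in corpus:
    biterm_counts.update(biterms(doc))

print(biterm_counts)
# ('apple', 'store') occurs twice: once per document, pooled at corpus level.
```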
However, BTM will make the common words be performed excessively because the frequency of bi-term comes from the whole corpus instead of a short document.", "cite_spans": [ { "start": 272, "end": 291, "text": "Cheng et al., 2014)", "ref_id": "BIBREF27" }, { "start": 691, "end": 709, "text": "(Yan et al., 2013;", "ref_id": "BIBREF24" }, { "start": 710, "end": 729, "text": "Cheng et al., 2014)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1. An example of LDA in the long text and short text corpus", "sec_num": null }, { "text": "In this paper, we solve the frequent bi-term problem in BTM. We propose an approach base on BTM. For the problem in BTM, a simple and intuitive solution is to use pointwise mutual information (PMI) (Church & Hanks, 1990) to decrease the statistical amount of the frequent words in whole corpus. With respect to the frequency of bi-term, the PMI can normalize the score by each single word frequency in the bi-term. Otherwise, the priors in the topic models usually set symmetric. This symmetric priors mean that there is not any preference of words in any specific topic (Wallach et al., 2009) . An intuitive idea is that why not adopt some word co-occurrence information in priors to restrict the generated topics. Base on above two ideas, we propose a novel prior adjustment method, PMI-\u03b2 priors, which first use the PMI to mine the word co-occurrence from the whole corpus. Then, we transform such PMI scores to the priors of BTM. Figure 2 shows the graphical representation of the PMI-\u03b2-BTM.", "cite_spans": [ { "start": 198, "end": 220, "text": "(Church & Hanks, 1990)", "ref_id": "BIBREF28" }, { "start": 571, "end": 593, "text": "(Wallach et al., 2009)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 934, "end": 942, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Figure 2. The graphical representation of the PMI-\u03b2-BTM", "sec_num": null }, { "text": "In summary, the proposed approach enhance the amount of the word co-occurrence and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. The graphical representation of the PMI-\u03b2-BTM", "sec_num": null }, { "text": "w i \uf05a w j \uf066 \uf066 \uf071 ... \uf061 ... w i \uf05a w j \uf062 ... \uf066 ... w i \uf05a w j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. The graphical representation of the PMI-\u03b2-BTM", "sec_num": null }, { "text": "also based on the original topic model. Basing on the original topic model means we did not modify the model itself, thus our methods can easily apply to some other existing BTM based models, to overcome the short text problem without any modification. To test the performance of our two methods completely, we prepare two different types of short text corpus for the experiments. One is the tweet text and another is the news title. The context of news title dataset is regular and formal while the text in tweet usually contain many noise. Experimental results show our PMI-\u03b2 priors method is better than the BTM in both tweet and news title datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. The graphical representation of the PMI-\u03b2-BTM", "sec_num": null }, { "text": "The remaining of this paper shows below. In Section 2, we show the survey of some traditional topic models and the previous works of topic model to overcome the short text. Section 3 shows our proposed PMI-\u03b2 priors and the re-organized document methods. 
The experiment results show in Section 4. Finally, we conclude this research in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. The graphical representation of the PMI-\u03b2-BTM", "sec_num": null }, { "text": "Topic Model is a method to find out the hidden semantic topics from the observed documents in the text corpus. Topic Models have been researched several years. Generally, topic model can be performed by the vector space model or the probability model. The early one of the vector space topic model, Latent Semantic Analysis (LSA) (Landauer et al., 1998) , uses the singular value decomposition (SVD) to find out the latent topic. However, LSA does not model the polysemy well and the cost of SVD is very high (Hofmann, 1999; Blei et al., 2003) . Afterward, Thomas Hofmann proposed the one-document-multi-topics model, probabilistic Latent Semantic Analysis (pLSA) (Hofmann, 1999) . pLSA bases on the document generation process which like the human writing. However, the numerous parameters of pLSA cause the overfitting problem and pLSA does not define the generation of the unknown documents. In 2003, Blei et al. proposed a well-known Latent Dirichlet Allocation (LDA) (Blei et al., 2003) , LDA use the prior probability in Bayes theory to extents pLSA and simplify the parameters estimate process in pLSA. Also, the non-zero priors let LDA have the ability to infer the new documents.", "cite_spans": [ { "start": 330, "end": 353, "text": "(Landauer et al., 1998)", "ref_id": "BIBREF30" }, { "start": 509, "end": 524, "text": "(Hofmann, 1999;", "ref_id": "BIBREF20" }, { "start": 525, "end": 543, "text": "Blei et al., 2003)", "ref_id": "BIBREF21" }, { "start": 664, "end": 679, "text": "(Hofmann, 1999)", "ref_id": "BIBREF20" }, { "start": 972, "end": 991, "text": "(Blei et al., 2003)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "The Survey of the Traditional Topic Models for Normal Text", "sec_num": "2.1" }, { "text": "However, there are some drawbacks in LDA. First, LDA works under the bag-of-word model hypothesis. In the bag-of-word model, each word of the document is no order and independent of others (Wallach, 2006) . The hypothesis compared with the human writing behavior is unreasonable (Divya et al., 2013) . Second, LDA emphasizes the relations between topics are week, but actually, the topics may have hierarchical structure. Third, LDA requires the large number of articles and well-structured long articles to get the high quality topics. Apply LDA on the short text or uncompleted sentences corpus usually get poor results. The fourth drawback is that in spite of the LDA has the concept of the prior probabilities but LDA priors generally set the symmetric values in each prior vector, like <0.1> or <0.01>. The symmetric prior means no bias of each words in the specific topic (Wallach et al., 2009) . 
In this situation, the priors only provide the smooth technology to avoid the zero probability and the model only use the statistical information from the data to discover the hidden topics.", "cite_spans": [ { "start": 189, "end": 204, "text": "(Wallach, 2006)", "ref_id": "BIBREF32" }, { "start": 279, "end": 299, "text": "(Divya et al., 2013)", "ref_id": "BIBREF22" }, { "start": 878, "end": 900, "text": "(Wallach et al., 2009)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "The Survey of the Traditional Topic Models for Normal Text", "sec_num": "2.1" }, { "text": "To overcome above four drawbacks, many researchers propose new modify models. Such as N-gram Topic Model (Wang et al., 2007) and HMM-LDA (Griffiths et al., 2004) provide the context modeling. Wei Li et al. proposed the Pachinko Allocation Model (PAM) (Li & McCallum, 2006) which adds the super topic concept and make the topic have the hierarchical structure. Otherwise, Zhiyuan Chen et al. apply the must-link and cannot-link information to guide the document generation process which words must or not to be put into a topic (Chen & Liu, 2014) .", "cite_spans": [ { "start": 105, "end": 124, "text": "(Wang et al., 2007)", "ref_id": "BIBREF33" }, { "start": 137, "end": 161, "text": "(Griffiths et al., 2004)", "ref_id": "BIBREF34" }, { "start": 251, "end": 272, "text": "(Li & McCallum, 2006)", "ref_id": "BIBREF35" }, { "start": 527, "end": 545, "text": "(Chen & Liu, 2014)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "The Survey of the Traditional Topic Models for Normal Text", "sec_num": "2.1" }, { "text": "With the rise of social media in recent years, topic models have been utilized for social media analysis. For example, some researches apply topic models in social media for event tracking (Lin et al., 2010) , content characterizing (Zhao et al., 2011; Ramage et al., 2010) , and content recommendation (Chen et al., 2010; Phelan et al., 2009) . However, to share people thinking conveniently, the context is usually short. These short text contexts make topic models hard to discover the amount of word co-occurrence. For the short text corpus, there are three directions to overcome the insufficient of the word co-occurrence problem. One is using the external resources to guide the model generation, another is aggregating several short texts into a long text, and the other is improving the model to satisfy the short text properties. For the first direction, Phan et al. (Phan et al., 2008) proposed a framework that adopt the large external resources (such as Wiki and blog) to deal with the data sparsity problem. R.Z. Michal et al. proposed an author topic model (Rosen-Zvi et al., 2004) which adopt the user information and make the model suitable for specific users. Jin et al. proposed the Dual-LDA model (Jin et al., 2011) , it use not only the short text corpus but also the related long text corpus to generate topics, respectively. The generation process use the long text to help the short text modeling. If the quality of the external long text or knowledge base is high, the generated topic quality will be improve. However, we cannot always obtain the related long text to guide short text and the related long text is very domain specific. So, using external resources is not suitable for the general short text dataset. In addition to adopt the long text, Hong et al. 
aggregate the tweets which shared the same words and get better results than the original tweet text (Hong & Davison, 2010 ).", "cite_spans": [ { "start": 189, "end": 207, "text": "(Lin et al., 2010)", "ref_id": "BIBREF37" }, { "start": 233, "end": 252, "text": "(Zhao et al., 2011;", "ref_id": "BIBREF26" }, { "start": 253, "end": 273, "text": "Ramage et al., 2010)", "ref_id": "BIBREF38" }, { "start": 303, "end": 322, "text": "(Chen et al., 2010;", "ref_id": "BIBREF39" }, { "start": 323, "end": 343, "text": "Phelan et al., 2009)", "ref_id": "BIBREF40" }, { "start": 877, "end": 896, "text": "(Phan et al., 2008)", "ref_id": "BIBREF41" }, { "start": 1072, "end": 1096, "text": "(Rosen-Zvi et al., 2004)", "ref_id": "BIBREF42" }, { "start": 1217, "end": 1235, "text": "(Jin et al., 2011)", "ref_id": "BIBREF43" }, { "start": 1891, "end": 1912, "text": "(Hong & Davison, 2010", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Topic Models for Short Text", "sec_num": "2.2" }, { "text": "For the model improvement, Wayne et al. use the mixture of unigrams model to model the tweets topics from whole corpus text (Zhao et al., 2011) . Their experimental results verify that the mixture of unigram model can discover more coherent topics than LDA in the short text corpus. Further, Xiaohui Yan et al. proposed the Bi-term Topic Model (BTM) (Yan et al., 2013; Cheng et al., 2014) which directly model the word co-occurrence and use the corpus level bi-term to overcome the lack of the text information problem. A bi-term is a word pair containing a co-occur relation in this two words. The advantage is that BTM can model the general text without any domain specific external data. Comparing with the mixture of unigram, BTM is a special case of the mixture of unigram. They both model the corpus level topic but BTM generates two words (bi-term) every time the generation process. However, BTM discovers the word co-occurrence just by considering the bi-term frequency. The bi-term frequency will be failed to judge the word co-occurrence when the bi-term frequency is high but one of the frequency of two words in a bi-term is high and another is low.", "cite_spans": [ { "start": 124, "end": 143, "text": "(Zhao et al., 2011)", "ref_id": "BIBREF26" }, { "start": 350, "end": 368, "text": "(Yan et al., 2013;", "ref_id": "BIBREF24" }, { "start": 369, "end": 388, "text": "Cheng et al., 2014)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Topic Models for Short Text", "sec_num": "2.2" }, { "text": "Topic models learn topics base on the amount of the word co-occurrence in the documents. The word co-occurrence is a degree which describes how often the two words appear together. BTM, discovers topics from bi-terms in the whole corpus to overcome the lack of local word co-occurrence information. However, BTM will make the common words be performed excessively because BTM identifies the word co-occurrence information by the bi-term frequency in corpus-level. Thus, we propose a PMI-\u03b2 priors methods on BTM. Our PMI-\u03b2 priors method can adjust the co-occurrence score to prevent the common words problem. Next, we will describe the detail of our method of PMI-\u03b2 priors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "We first describe the detail of BTM. First, we introduce the notation of \"bi-term\". Bi-term is the word pair co-occurring in the short text. 
Any two distinct words in a document construct a bi-term. For example, a document with three terms will generate three bi-term (Yan et al., 2013) :", "cite_spans": [ { "start": 268, "end": 286, "text": "(Yan et al., 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf07b \uf07d 1 2 3 1 2 2 3 1 3 , ,", "eq_num": ", , , , , t" } ], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": ".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "t t t t t t t t \uf0de", "sec_num": null }, { "text": "Note that each bi-term is unordered. For a real case example, we have a document and the context is \"I visit apple store\". Because \"I\" is a stop-word, we remove it. The remaining three terms \"visit\", \"apple\" and \"store\" will generate three bi-terms \"visit apple\", \"apple store\", and \"visit store\". We generate all possible bi-terms for each document and put all bi-terms in the bi-term set B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "t t t t t t t t \uf0de", "sec_num": null }, { "text": "Second, we describe the parameter estimation of the BTM. The aim of the parameter estimation of BTM is to estimate the topic assignment z, the corpus-topic posteriori distribution \uf071 and the topic-word posteriori distribution \uf066. But the Gibbs sampling can integrate \uf071\uf020 and \uf066\uf020 due to use the conjugate priors. Thus, the only one parameter z should be estimate. Clearly, we should assign a suitable topic for each bi-term. The Gibbs sampling equation shows below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "t t t t t t t t \uf0de", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( | , , , ) b P z k \uf071 \uf06a \uf0d8 \uf03d \uf0b5 \uf0d7 z B \u03b1 \u03b2 ,", "eq_num": "(2)" } ], "section": "t t t t t t t t \uf0de", "sec_num": null }, { "text": "where z is the topic assignment, k means the kth topic, B is the bi-term set, \uf061 is the corpus-topic prior distribution and \u03b2 is the topic-word prior distribution. 
The \uf071\uf020 and \uf066\uf020 in Eq.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "t t t t t t t t \uf0de", "sec_num": null }, { "text": "(2) show following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "t t t t t t t t \uf0de", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ", , 1 ( ) ( ) k b k K k b k k n n \uf061 \uf071 \uf061 \uf0d8 \uf0d8 \uf03d \uf02b \uf03d \uf02b \uf0e5 ,", "eq_num": "(3)" } ], "section": "t t t t t t t t \uf0de", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 1 2 2 , ,", "eq_num": ", , 1 1 ( ) ( ) ( ) (" } ], "section": "t t t t t t t t \uf0de", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ") k b k k b k t t t t k b k k b k w w w w V V w w w w t t n n n n \uf062 \uf062 \uf06a \uf062 \uf062 \uf0d8 \uf0d8 \uf0d8 \uf0d8 \uf03d \uf03d \uf02b \uf02b \uf03d \uf0b4 \uf02b \uf02b \uf0e5 \uf0e5 ,", "eq_num": "(4)" } ], "section": "t t t t t t t t \uf0de", "sec_num": null }, { "text": "where V is the number of unique words in the corpus, n k,-b is the statistical count for the document-topic distribution, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "t t t t t t t t \uf0de", "sec_num": null }, { "text": ", t w k b", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "t t t t t t t t \uf0de", "sec_num": null }, { "text": "n \uf0d8 is the statistical count for the document-topic distribution. When the frequency of bi-term is high the two terms in this bi-term tend to be put into the same topic. Otherwise, to overcome the lack of words in a single document BTM abandons the document-level directly. A topic in BTM contains several bi-term and a bi-term crosses many documents. BTM emphasizes that the co-occurrence information comes from all bi-terms in whole corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "t t t t t t t t \uf0de", "sec_num": null }, { "text": "However, just consider the frequency of bi-term in corpus-level will generate the topics which contain too many common words. To solve this problem, we consider the Pointwise Mutual Information (PMI) (Church & Hanks, 1990 ). Since the PMI score not only considers the co-occurrence frequency of the two words, but also normalizes by the single word frequency. Thus, we want to apply PMI score in the original BTM. A suitable way to apply PMI scores is modifying the priors in the BTM. The reason is that the priors modifying will not increase the complexity in the generation model and very intuitive. Clearly, there are two kinds of priors in BTM which are \u03b2-prior and \u03b2-priors. The \u03b2-prior is a corpus-topic bias without the data. While the \u03b2-priors are topic-word biases without the data. Applying the PMI score to the \u03b2-priors is the only one choice because we can adjust the degree of the word co-occurrence by modifying the distributions in the \u03b2-priors. For example, we assume that a topic contains three words \"pen\", \"apple\" and \"banana\". In the symmetric priors, we set <0.1, 0.1, 0.1> which means no bias of these three words, while we can apply <0.1, 0.5, 0.5> to enhance the word co-occurrence of \"apple\" and \"banana\". 
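The following minimal sketch shows how corpus-level PMI scores could be turned into asymmetric topic-word β priors in this spirit, using the shifted NPMI and the 0.1 scaling given in Eqs. (5)-(7) below; the probability values in the example are made up for illustration and are not taken from any dataset in the paper.

```python
# Sketch: turning corpus-level co-occurrence statistics into PMI-based
# beta priors (NPMI shifted into [0, 2] and scaled by the base prior 0.1,
# following Eqs. (5)-(7)).  All probability values are illustrative.
import math

def pmi(p_xy, p_x, p_y):
    return math.log(p_xy / (p_x * p_y))

def npmi_shifted(p_xy, p_x, p_y):
    """Normalized PMI shifted into [0, 2] so it can act as a prior count."""
    return pmi(p_xy, p_x, p_y) / (-math.log(p_xy)) + 1.0

def beta_pmi(p_xy, p_x, p_y, beta_sym=0.1, scale=0.1):
    """Asymmetric beta prior for the word pair (w_x, w_y)."""
    return beta_sym + scale * npmi_shifted(p_xy, p_x, p_y)

# A frequent but uninformative pair (both words very common) ...
print(beta_pmi(p_xy=0.050, p_x=0.30, p_y=0.25))    # ~0.19
# ... versus a rarer pair that almost always co-occurs.
print(beta_pmi(p_xy=0.010, p_x=0.012, p_y=0.011))  # ~0.29
```

Even though the first pair has the higher raw co-occurrence frequency, it receives the smaller prior, which is exactly the down-weighting of common words that motivates the PMI-based adjustment.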
Thus the topic will prefer to put the \"apple\" and \"banana\" together in the topic sampling step. Figure 3 shows our PMI-\u03b2-priors approach. After pre-procession, we first calculate the PMI score of each bi-term as ( , ) PMI( , ) log ( ) ( )", "cite_spans": [ { "start": 200, "end": 221, "text": "(Church & Hanks, 1990", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 1327, "end": 1335, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "t t t t t t t t \uf0de", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x y x y x y p w w w w p w p w \uf03d ,", "eq_num": "(5)" } ], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "Because the priors can view as an additional statistics count of the target probability, the value ordinarily should be greater than or equal to zero. Thus, we adjust the value of NPMI to [0, 2] by adding one as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "PMI( , ) NPMI( , ) 1 log ( , ) x y x y x y w w w w p w w \uf03d \uf02b \uf02d .", "eq_num": "(6)" } ], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "After getting the NPMI scores, we transform these scores to meet the \u03b2-priors. Let \u03b2 SYM is the original symmetric \u03b2-priors and the PMI \u03b2-priors, denote \u03b2 PMI , define as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ", SYM PMI 0.1 NPMI( , ) x y w w x y w w \uf062 \uf062 \uf03d \uf02b \uf0b4 .", "eq_num": "(7)" } ], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "There is a constant value 0.1 in Eq. (7) . This constant value 0.1 prevent the target probability being dominated by the priors. The partial of the word co-occurrence information should still be captured by the original model and the priors provide the additional information to enhance the word co-occurrence in the model. The following shows how we apply PMI-\u03b2 -priors into the BTM. We apply the \u03b2 PMI of w1 and w2 in Eq. 6 ", "cite_spans": [ { "start": 37, "end": 40, "text": "(7)", "ref_id": "BIBREF111" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) ( ) ( ) ( ) k b k b t t t t k b k k b k w w w w w w V V w w w w t t n n n n \uf062 \uf062 \uf06a \uf062 \uf062 \uf0d8 \uf0d8 \uf0d8 \uf0d8 \uf03d \uf03d \uf02b \uf02b \uf03d \uf0b4 \uf02b \uf02b \uf0e5 \uf0e5 .", "eq_num": ", , 1 1" } ], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "Finally, we sample topic assignments by Gibbs sampling (Liu, 1994) approach.", "cite_spans": [ { "start": 55, "end": 66, "text": "(Liu, 1994)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 3. 
The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "How to justly evaluate the quality of the topic model is still a problem. The reason is that the topic model is an unsupervised method. There are no prominent words or labels can directly assign to each topic. Thus, many researchers apply topic model in other applications, such as clustering, classification and information retrieval (Blei et al., 2003; Yan et al., 2013) . In classification task, instead of using the original word vectors to identify the document categories, it use the reduced vectors which generating from the topic model. The topic model plays as a dimensional reduction role and the classification result shows how well the model to represent the original features. Topic model can also look as the document clustering approach by just considering a document assign to which topic(s). In this paper, we evaluate topic models by clustering and classification tasks. Otherwise, to make our experiment more robust, we adopt two different types of short text dataset -Twitter2011 and ETtoday Chinese news title. The properties of these two corpus are different. The text of ETtoday Chinese news title is very regular, while the text of Twitter2011 usually contains emotional words, simplified texts and some unformed words. For example, \"haha\" is the emotional word, and \"agreeeee\" is the unformed word. Table 1 shows the statistics of short text datasets. The number of average words per document is not more than ten words. The number of documents in each class are shown in Figure 4 . The property of both two dataset is skew. The skew dataset may cause the results that the fewer documents are dominated by the larger one. In summary, the challenges of these two datasets are not only the short text problem but also the unbalance category. The top-3 classes in the Twitter2011 dataset are \"#jan25\", \"#superbowl\" and \"#sotu\". And the top-3 classes in the ETtoday News Title dataset are \"entertainment\", \"physical\" and \"political\". ", "cite_spans": [ { "start": 335, "end": 354, "text": "(Blei et al., 2003;", "ref_id": "BIBREF21" }, { "start": 355, "end": 372, "text": "Yan et al., 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 1324, "end": 1331, "text": "Table 1", "ref_id": "TABREF13" }, { "start": 1497, "end": 1505, "text": "Figure 4", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4." }, { "text": "All of the experiments were done on the Intel i7 3.4 GHz CPU and 16G memory PC. All of the pre-process and topic models were written by JAVA code. The parameters \uf061\uf020 priors and the base \u03b2 priors of topic models are all set <0.1>. The number of iterations in Gibbs sampling is set 1,000. To make our results more reliable, we run each experiments 10 times and average these scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "For the clustering experiment, we first get the document-topic posteriori probability distribution \uf066\uf020 and we use the highest probability topic P(z|d) as the cluster assignment for each document in \uf066. For the classification experiment, we divide our dataset into five parts in which four parts for training and one for testing. After training the topic model, we fix the topic-word distribution \uf066\uf020\uf020and then we re-infer document-topic posteriori probability Class ID for ETtoday Dataset distribution \uf071\uf020 of all original short text documents. 
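A minimal sketch of this evaluation pipeline is given below: each document is clustered by its highest-probability topic, and the document-topic distribution is reused as a reduced feature matrix for classification. scikit-learn's LDA and LinearSVC stand in for the topic models and LIBLINEAR used in the paper; the toy corpus, labels, and fold count are illustrative assumptions.

```python
# Sketch of the evaluation pipeline: cluster by the highest-probability topic,
# and classify documents using the reduced document-topic features.
# scikit-learn's LDA and LinearSVC stand in for the topic models and LIBLINEAR.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

docs = ["apple store opens", "visit apple store", "stock price falls",
        "stock market news", "banana fruit salad", "fresh fruit market"]
labels = [0, 0, 1, 1, 2, 2]            # illustrative ground-truth classes

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(X)           # document-topic distribution

# Clustering: assign each document to its highest-probability topic.
clusters = np.argmax(theta, axis=1)

# Classification: use theta as the reduced feature matrix (5-fold in the
# paper; 2 folds here only because the toy corpus is tiny).
scores = cross_val_score(LinearSVC(), theta, labels, cv=2)
print(clusters, scores.mean())
```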
Instead of using the original word vectors to do the classification task, we take this re-inferred posteriori probability distribution \uf071\uf020 as the reduced feature matrix. Finally we use this reduced feature matrix to classify the documents by LIBLINEAR 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "We compare our methods with the previous topic models: 1) LDA, 2) Mixture of unigrams, and 3) BTM. In addition to the above three topic models, we also compare with our PCA-\u03b2 priors methods. We use the principal component analysis (PCA) to discover the whole corpus principal component. Then, we transform the principal component to the topic-word prior distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "In this part, we list three criteria for the clustering experiment and one for classification. In the clustering experiment, let \uf057 = {\uf077 1 , \uf077 2 , ... , \uf077 K } is the output cluster labels, and C = {c 1 , c 2 , ... , c p } is the gold standard labels of the documents. We first describe the three criteria for the clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Criteria", "sec_num": "4.2" }, { "text": "Purity is a simple and transparent measure which perform the accuracy of all cluster assignments as the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7 Purity", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "max Purity( ,C) k j j k c N \uf076 \uf0c7 \uf057 \uf03d \uf0e5 ,", "eq_num": "(9)" } ], "section": "\uf0b7 Purity", "sec_num": null }, { "text": "where N is the total number of documents. Note that the high purity is easy to achieve when the number of clusters is large. In particular, purity is 1 if each document gets its own cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7 Purity", "sec_num": null }, { "text": "NMI score is based on the information theory. Let I(\uf057, C) denotes the mutual information between the output cluster \uf057\uf020 and the gold standard cluster C. The mutual information of NMI is normalized by each entropy denoted H(\uf057\uf029 and H(C). This normalization can avoid the influence of the number of clusters. 
The equation of NMI shows following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7 Normalized Mutual Information (NMI)", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "I( ,C) NMI( ,C) [H( ) H( )] 2 C \uf057 \uf057 \uf03d \uf057 \uf02b ,", "eq_num": "(10)" } ], "section": "\uf0b7 Normalized Mutual Information (NMI)", "sec_num": null }, { "text": "where \uf049\uf028\uf057\uf02c\uf020C\uf029,\uf020\uf048\uf028\uf057\uf029\uf020and H\uf028\uf057\uf029 denote: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7 Normalized Mutual Information (NMI)", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "TP TN RI TP FP FN TN \uf02b \uf03d \uf02b \uf02b \uf02b ,", "eq_num": "(13)" } ], "section": "\uf0b7 Normalized Mutual Information (NMI)", "sec_num": null }, { "text": "where TP, FP, FN, and TN are the true positive count, false positive count, false negative count and true negative count respectively. For the classification experiment, we adopt the accuracy as the measure. The definition of the accuracy is the same as the RI score in Eq. 13, but just change the cluster label to the classification label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7 Normalized Mutual Information (NMI)", "sec_num": null }, { "text": "The Twitter2011 dataset was published in TREC 2011 microblog track 2 . It contains approximately 16 million tweets sampled between January 23rd and February 8th, 2011. It is worth mentioning that there are some semantics tags, called hashtag, in some tweets. The hashtags had been given when the author wrote a tweet. Because these hashtags can identify the semantics of tweets, we use the hashtags as our ground truth for both clustering and classification experiments. However, there are about 10 percentages of all tweets contain hashtags and some hashtags are very rare. Also, there are contains multilingual tweets. To reduce the effect of noise in this dataset, we just extract the English tweets with top-50 frequent hashtags. After tweet extraction, we totally get the 49,461 tweets. Then, we remove the hashtags and stop-words from the context. Finally, we stem all the words in all tweets by the English stemming in the Snowball library. Table 2 shows the clustering results on the Twitter2011 dataset, when we set the number of topic to 50. As expected, BTM is better than Mixture of unigram and LDA got the worst result when we adopt the symmetric priors <0.1>. When apply the PMI-\u03b2 priors, we get the better result than BTM with symmetric priors. Otherwise, our baseline method, PCA-\u03b2, is better than the original LDA because the PCA-\u03b2 prior can make up the lack of the global word co-occurrence information in the original LDA. Figure 5 shows the classification results on the Twitter2011 dataset by using LIBLINEAR classifier. When apply the PMI-\u03b2 priors, we get the better result than BTM with symmetric priors. Table 3 presents the top-10 topic words of the \"job\" topic in the Twitter2011 dataset for LDA, mixture of unigram, BTM and PMI-\u03b2-BTM respectively, when the number of topic is 70. The top-10 words are the 10 highest probability words of the topics. The bold words in this table are the words which highly correlated with the topic by the human judgment. 
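For reference, the three clustering criteria above can be computed as in the sketch below: purity follows Eq. (9) directly, while NMI and the Rand Index are taken from scikit-learn (an assumption; the paper does not name its implementation). The cluster and gold label vectors are illustrative.

```python
# Sketch of the clustering criteria: purity, NMI, and the Rand Index.
# Purity is computed directly from Eq. (9); NMI and RI come from scikit-learn.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score, rand_score

def purity(clusters, gold):
    """Fraction of documents assigned to the majority gold class of their cluster."""
    clusters, gold = np.asarray(clusters), np.asarray(gold)
    total = 0
    for k in np.unique(clusters):
        members = gold[clusters == k]
        total += np.bincount(members).max()
    return total / len(gold)

clusters = [0, 0, 1, 1, 2, 2, 2]   # illustrative cluster assignments
gold     = [0, 0, 1, 2, 2, 2, 1]   # illustrative gold-standard labels

print(purity(clusters, gold))                        # Eq. (9)
# scikit-learn's default arithmetic-mean normalization matches the
# [H(Omega) + H(C)] / 2 denominator of Eq. (10).
print(normalized_mutual_info_score(gold, clusters))  # Eq. (10)
print(rand_score(gold, clusters))                    # Eq. (13)
```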
The topic words in the LDA and mixture of unigram models are almost non-correlated or low-correlated with the topic \"job\", such as \"jay\" and \"emote\". In BTM and PMI-\u03b2-BTM, the model capture the more high-correlated words, such as \"engineer\" and \"management\". ", "cite_spans": [], "ref_spans": [ { "start": 948, "end": 955, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1442, "end": 1450, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 1628, "end": 1635, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Experimental Results for the Twitter2011 Dataset", "sec_num": "4.3" }, { "text": "The ETtoday News Title dataset is collected from the overview list of the ETtoday News website 3 between January 1st and January 31, 2015. There are totally 25 predefined news labels in the dataset. These labels include some classical news category such as \"society news\", \"international news\" and \"political news\", and some special news category such as \"animal and pets\", \"3C\" and \"games\". In both the clustering and the classification experiments, we use these labels as the ground-truth. Because the Chinese text does not contain the break word, we must adopt the additional word breaker in the pre-process step. We adopt the jieba 4 , the Python Chinese word segmentation module, to segment all news title into several words. Figure 6 shows the classification results on the ETtoday News Title dataset. The three original topic model LDA, mixture of unigram, and BTM perform the same order as the results of the Tweet2011 dataset. The PMI-\u03b2 BTM is outperform all other methods. Our PMI-\u03b2-BTM is also suitable to model the regular short text. The top-10 topic words of the \"baseball\" topic of ETtoday news title dataset lists in the Table 4 . Because these words are almost Chinese, we also attach the simple explanation in English. There are many non-related words in the LDA and mixture of unigram, such as \"\u5e74\u7d42\" (Year-end bonuses) and \"\u4e0d\" (no). Especially, we compare the topic words in BTM with in PMI-\u03b2-BTM, the topic words in BTM contain some frequent but low-correlated words with the topic, such as \"\u5e74\" (means year) and \"\u842c\" (means ten thousand). While in the PMI-\u03b2-BTM, this noisy words do not appear. The reason is that the original BTM just consider the simple bi-term frequency and this bi-term frequency make some frequent words be extracted together with other words from the document. Our PMI-\u03b2 priors can decrease the probability of the common words by the word normalization effect in the PMI. ", "cite_spans": [ { "start": 95, "end": 96, "text": "3", "ref_id": "BIBREF107" }, { "start": 636, "end": 637, "text": "4", "ref_id": "BIBREF108" } ], "ref_spans": [ { "start": 731, "end": 739, "text": "Figure 6", "ref_id": "FIGREF10" }, { "start": 1137, "end": 1144, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results for ETtoday News Title Dataset", "sec_num": "4.4" }, { "text": "In this paper, we propose a solution for topic model to enhance the amount of the word co-occurrence relation in the short text corpus. First, we find the BTM identifies the word co-occurrence by considering the bi-term frequency in the corpus-level. BTM will make the The number of topic K LDA Mix BTM PMI-beta BTM common words be performed excessively because the frequency of bi-term comes from the whole corpus instead of a short document. We propose a PMI-\u03b2 priors method to overcome this problem. 
The experimental results show our PMI-\u03b2-BTM get the best results in the regular short news title text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "Moreover, there are two advantages in our methods. We do not need any external data and the proposed two improvement of the word co-occurrence methods are both based on the original topic model and easy to extend. Bases on the original topic model means we did not modify the model itself, thus our methods can easily apply to some other existing BTM based models to overcome the short text problem without any modification. In the future, we can extend some other steps in PMI-priors to deal the further improvement, such as removing the redundant documents by clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "The rapidly increasing availability of multimedia associated with spoken documents on the Internet has prompted automatic spoken document summarization to be an important research subject. Thus far, the majority of existing work has focused on extractive spoken document summarization, which selects salient sentences from an original spoken document according to a target summarization ratio and concatenates them to form a summary concisely, in order to convey the most important theme of the document. On the other hand, there has been a surge of interest in developing representation learning techniques for a wide variety of natural language processing (NLP)-related tasks. However, to our knowledge, they are largely unexplored in the context of extractive spoken document summarization. With the above background, this study explores a novel use of both word and sentence representation techniques for extractive spoken document summarization. In addition, three variants of sentence ranking models building on top of such representation techniques are proposed. Furthermore, extra information cues like the prosodic features extracted from spoken documents, apart from the lexical features, are also employed for boosting the summarization performance. 
A series of experiments conducted on the MATBN broadcast news corpus indeed reveal the performance merits of our proposed summarization methods in relation to several state-of-the-art baselines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "Keywords: Spoken Document, Extractive Summarization, Word Representation, Sentence Representation, Prosodic Feature (Luhn, 1958 )\u3002\u81ea\u52d5\u6458\u8981\u6280\u8853\u53ef\u6982\u5206\u70ba\u7bc0\u9304\u5f0f (Extractive)\u6458\u8981\u4ee5\u53ca\u62bd\u8c61\u5f0f(Abstractive)\u6458\u8981\u3002\u524d\u8005\u4e3b\u8981\u662f\u4f9d\u64da\u7279\u5b9a\u7684\u6458\u8981\u6bd4\u4f8b\uff0c\u5f9e\u539f\u59cb \u7684\u6587\u4ef6\u4e2d\u9078\u53d6\u91cd\u8981\u7684\u8a9e\u53e5\u5b50\u96c6(Sentence Subset)\uff0c\u900f\u904e\u8a72\u8a9e\u53e5\u5b50\u96c6\u7c21\u6f54\u5730\u8868\u793a\u539f\u59cb\u6587\u4ef6\u7684 \u5927\u81f4\u5167\u5bb9\uff1b\u800c\u5f8c\u8005\u662f\u5728\u5b8c\u5168\u7406\u89e3\u6587\u4ef6\u5167\u5bb9\u4e4b\u5f8c\uff0c\u91cd\u65b0\u64b0\u5beb\u7522\u751f\u6458\u8981\u4f86\u4ee3\u8868\u539f\u59cb\u6587\u4ef6\u7684\u5167 \u5bb9\u3002\u96d6\u7136\u62bd\u8c61\u5f0f\u6458\u8981\u662f\u6700\u70ba\u8cbc\u8fd1\u4eba\u5011\u65e5\u5e38\u64b0\u5beb\u6458\u8981\u7684\u5f62\u5f0f\uff0c\u4f46\u5176\u6d89\u53ca\u6df1\u5c64\u7684\u81ea\u7136\u8a9e\u8a00\u8655 \u7406\u80fd\u529b (Mitra et al., 1997) \uff0c\u8f03\u70ba\u56f0\u96e3\u8a31\u591a\uff1b\u76ee\u524d\u5927\u591a\u6578\u7684\u7814\u7a76\u4e3b\u8981\u96c6\u4e2d\u5728\u7bc0\u9304\u5f0f\u6458\u8981\u7684 \u81ea\u52d5\u7522\u751f (Jones, 1999) (Chen et al., 2009) \uff1a ", "cite_spans": [ { "start": 116, "end": 127, "text": "(Luhn, 1958", "ref_id": "BIBREF51" }, { "start": 329, "end": 349, "text": "(Mitra et al., 1997)", "ref_id": "BIBREF55" }, { "start": 383, "end": 396, "text": "(Jones, 1999)", "ref_id": null }, { "start": 397, "end": 416, "text": "(Chen et al., 2009)", "ref_id": "BIBREF80" } ], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "1. 
\u7dd2\u8ad6 \u5de8\u91cf\u8cc7\u6599\u5145\u65a5\u8457\u73fe\u4eca\u7684\u4e16\u754c\uff0c\u5728\u5168\u7403\u8cc7\u8a0a\u7db2(World Wide Web)\u4e2d\u5df2\u5b58\u5728\u6709\u6578\u5341\u5104\u7bc7\u7db2\u9801\uff0c \u4e26\u4e14\u4ee5\u6307\u6578\u7684\u500d\u6578\u6301\u7e8c\u6210\u9577\u8457\u3002\u70ba\u6b64\uff0c\u4eba\u5011\u5fc5\u9808\u4ef0\u8cf4\u53ca\u6642\u6458\u8981\u5404\u985e\u8cc7\u8a0a\u7684\u81ea\u52d5\u5316\u5de5\u5177\uff0c \u4ee5 \u6e1b \u7de9 \u8cc7 \u8a0a \u904e \u8f09 (Information Overload) \u7684 \u554f \u984c \u3002 \u9019 \u4e9b \u8feb \u5207 \u7684 \u9700 \u6c42 \u4fc3 \u4f7f \u4e86 \u81ea \u52d5 \u6458 \u8981 (Automatic Summarization)\u6280\u8853\u7684\u84ec\u52c3\u767c\u5c55", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "| | \u221d | (1) \u5176\u4e2d P(D)\u5c0d\u65bc\u6bcf\u4e00\u8a9e\u53e5\u7686\u76f8\u540c\uff0c\u6545\u53ef\u5ffd\u7565\u3002\u800c\u6211\u5011\u5047\u8a2d\u6bcf\u4e00\u8a9e\u53e5 S \u7684\u4e8b\u524d\u6a5f\u7387 P(S)\u70ba\u4e00 \u500b\u5747\u52fb\u5206\u4f48(Uniform Distribution)\uff0c\u56e0\u6b64 P(S)\u4ea6\u53ef\u5ffd\u7565\u3002\u503c\u5f97\u4e00\u63d0\u7684\u662f\uff0c\u7531\u65bc\u6587\u4ef6\u4e2d\u7684\u8a9e \u53e5\u901a\u5e38\u8f03\u70ba\u7c21\u77ed\uff0c\u4e0d\u5bb9\u6613\u5efa\u7acb\u4e00\u500b\u6e96\u78ba\u7684\u6a21\u578b\u4f86\u5b8c\u6574\u5730\u63cf\u8ff0\u6bcf\u4e00\u8a9e\u53e5\u7684\u5167\u5bb9\u6db5\u610f\u3002\u70ba\u6b64\uff0c \u6709 \u7814 \u7a76 \u5b78 \u8005 \u9678 \u7e8c \u63d0 \u51fa \u5404 \u5f0f \u8f03 \u70ba \u5f37 \u5065 \u6027 \u7684 \u8a9e \u8a00 \u6a21 \u578b \uff0c \u4f8b \u5982 \u95dc \u806f \u6a21 \u578b (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "\u5176\u4e2d c \u70ba\u4e2d\u9593\u8a5e \u7684\u4e0a\u4e0b\u6587\u4e4b\u7a97\u53e3\u5927\u5c0f(Window Size)\uff0cT \u4ee3\u8868\u8a13\u7df4\u8a9e\u6599\u7684\u9577\u5ea6\uff0c\u4e14 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "| , \u2026 , ,", "eq_num": ", \u2026 ," } ], "section": "Abstract", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u5176 \u4e2d c \u70ba \u4e2d \u9593 \u8a5e \u7684 \u4e0a \u4e0b \u6587 \u4e4b \u7a97 \u53e3 \u5927 \u5c0f (Window Size) \uff0c \u800c \u689d \u4ef6 \u6a5f \u7387 (Conditional Probability)\u7d93\u7531\u4e0b\u5f0f\u8a08\u7b97\uff1a \u2022 \u2211 \u2022", "eq_num": "(9)" } ], "section": "Abstract", "sec_num": null }, { "text": "\u5176\u4e2d \u8207 \u5206\u5225\u70ba\u4f4d\u7f6e \u53ca t \u7684\u8a5e\u8868\u793a\u6cd5\u3002\u5728 CBOW \u8207 SG \u7684\u5be6\u4f5c\u4e2d\u7686\u5f15\u5165\u968e\u5c64\u8edf \u5f0f\u6700\u5927\u5316\u6cd5 (Mikolov et al., 2013b; Morin & Bengio, 2005) \u53ca\u8ca0\u4f8b\u63a1\u6a23\u6cd5 (Mikolov et al., 2013b; Mnih & Kavukcuoglu, 2013) ", "cite_spans": [ { "start": 50, "end": 73, "text": "(Mikolov et al., 2013b;", "ref_id": "BIBREF53" }, { "start": 74, "end": 95, "text": "Morin & Bengio, 2005)", "ref_id": "BIBREF57" }, { "start": 103, "end": 126, "text": "(Mikolov et al., 2013b;", "ref_id": "BIBREF53" }, { "start": 127, "end": 152, "text": "Mnih & Kavukcuoglu, 2013)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": 
"EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u7531\u65bc\u5411\u91cf\u7a7a\u9593\u6a21\u578b\u7c21\u55ae\u3001\u76f4\u89c0\u4e14\u6709\u6548\uff0c\u56e0\u6b64\u88ab\u5ee3\u6cdb\u5730\u61c9\u7528\u65bc\u5404\u5f0f\u81ea\u7136\u8a9e\u8a00\u8655\u7406\u7684\u76f8\u95dc\u7814 \u7a76\u3002\u85c9\u52a9\u65bc\u8a5e\u8868\u793a\u6cd5\u6a21\u578b(\u4f8b\u5982 CBOW \u8207 SG)\u6211\u5011\u53ef\u4ee5\u5c07\u6587\u4ef6\u6216\u8a9e\u53e5\u4e2d\u6240\u6709\u8a5e\u6240\u5c0d\u61c9\u7684 \u8a5e\u8868\u793a\u6cd5\u52a0\u7e3d\u5f8c\u53d6\u5e73\u5747\uff0c\u4f5c\u70ba\u8a72\u7bc7\u6587\u4ef6\u6216\u8a9e\u53e5\u7684\u8868\u793a\u6cd5\uff1a \u2211 \u2208 | | , \u2211 \u2208 | | (11) \u5176\u4e2d \u70ba\u8a5e \u7684\u8a5e\u8868\u793a\u6cd5\uff0c \u3001 \u70ba\u4ee3\u8868\u6587\u4ef6 D \u8207\u8a9e\u53e5 S \u7684\u8868\u793a\u6cd5\uff0c|D|\u3001|S|\u70ba\u6587\u4ef6 D \u53ca \u8a9e\u53e5 S \u9577\u5ea6\u3002\u6216\u662f\u76f4\u63a5\u85c9\u7531\u8a9e\u53e5\u8868\u793a\u6cd5\u6a21\u578b\u6c42\u5f97\u6587\u4ef6\u6216\u8a9e\u53e5\u7684\u5411\u91cf\u8868\u793a\u6cd5\uff1a ,", "eq_num": "(12)" } ], "section": "\u9918\u5f26\u76f8\u4f3c\u5ea6(Cosine Similarity)", "sec_num": "4.1" }, { "text": "\u63a5\u8457\uff0c\u900f\u904e\u7dda\u6027\u7d44\u5408(Linear Combination)\u7684\u65b9\u5f0f\uff0c\u53ef\u4ee5\u5f62\u6210\u4e00\u500b\u8907\u5408\u5f0f\u7684\u8a9e\u53e5\u8a9e\u8a00\u6a21\u578b\uff0c \u800c\u6587\u4ef6\u7684\u751f\u6210\u6a5f\u7387\u5c31\u53ef\u4ee5\u7d93\u7531\u4e0b\u5f0f\u8a08\u7b97\uff1a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u9918\u5f26\u76f8\u4f3c\u5ea6(Cosine Similarity)", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P | \u03bb \u2022 | \u2022 \u2208 1 \u03bb \u2022 , \u2208", "eq_num": "(15)" } ], "section": "\u9918\u5f26\u76f8\u4f3c\u5ea6(Cosine Similarity)", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u5176\u4e2d | \u70ba\u4e00\u500b\u6b0a\u91cd\u4fc2\u6578\uff0c\u4ee3\u8868\u8a5e \u51fa\u73fe\u5728\u8a9e\u53e5\u4e2d\u7684\u51fa\u73fe\u7684\u53ef\u80fd\u6027\uff1b\u4e26\u4e14\uff0c\u70ba\u4e86\u89e3\u6c7a \u8cc7\u6599\u7a00\u758f\u7684\u554f\u984c\uff0c\u6211\u5011\u900f\u904e\u80cc\u666f\u8a9e\u8a00\u6a21\u578b \u5c0d\u8a9e\u53e5\u6a21\u578b\u9032\u884c\u6a5f\u7387\u5e73\u6ed1\u5316\u3002\u53e6\u4e00\u65b9\u9762\uff0c \u7576\u4f7f\u7528\u8a9e\u53e5\u8868\u793a\u6cd5\u6642\uff0c\u6211\u5011\u9996\u5148\u70ba\u6bcf\u4e00\u500b\u8a9e\u53e5 S \u5efa\u69cb\u51fa\u4e00\u500b\u4ee5\u8a9e\u53e5\u8868\u793a\u6cd5\u70ba\u57fa\u790e\u7684\u8a9e\u8a00 \u6a21\u578b\uff0c\u7528\u4ee5\u9810\u6e2c\u4e00\u500b\u8a5e \u767c\u751f\u7684\u53ef\u80fd\u6027\uff1a \u2022 \u2211 \u2022 \u2208", "eq_num": "(16)" } ], "section": "\u9918\u5f26\u76f8\u4f3c\u5ea6(Cosine Similarity)", "sec_num": "4.1" }, { "text": "\u5176\u4e2d \u662f\u4ee5 PV-DBOW \u6216 PV-DM \u6240\u6c42\u5f97\u7684\u8a9e\u53e5\u8868\u793a\u6cd5\u3002\u540c\u6a23\u5730\uff0c\u6587\u4ef6\u7684\u751f\u6210\u6a5f\u7387\u5c31\u53ef \u4ee5\u7d93\u7531\u4e0b\u5f0f\u8a08\u7b97\uff1a \u8a08\u7b97\u8a9e\uf906\u4e2d\u6240\u5305\u542b\u505c\u7528\u8a5e\u7684\u6578\u91cf\uff0c\u5982\u4e2d\u6587\u8a5e\u7684\"\u4e86\"\u3001\"\u7684\"\u7b49\u8a5e\uff0c\u4ee5\u53ca\u82f1\u6587\u8a5e\u5982\"a\"\u3001 
\"the\"\u7b49\u8a5e\uff0c\u5373\u4f7f\u51fa\u73fe\u7684\u983b\uf961\u5f88\u9ad8\uff0c\u4f46\u901a\u5e38\uf967\u5177\u6709\u592a\u591a\u8cc7\u8a0a\uff0c\u56e0\u6b64\u5728\u6aa2\uf96a\u904e\u7a0b\u4e2d\u7d93\u5e38\u88ab \uf984\u9664\uff0c\uf967\uf99c\u5165\u641c\u5c0b\u7684\u8003\u616e\u7bc4\u570d\u3002 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u9918\u5f26\u76f8\u4f3c\u5ea6(Cosine Similarity)", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P | \u03bb \u2022 1 \u03bb \u2022 , \u2208", "eq_num": "(17" } ], "section": "\u9918\u5f26\u76f8\u4f3c\u5ea6(Cosine Similarity)", "sec_num": "4.1" }, { "text": "\u6bd4\u5c0d\u7684\u65b9\u5f0f\uff0c\u4e0d\u6703\u7522\u751f\u8a9e\u53e5\u908a\u754c\u5b9a\u7fa9\u7684\u554f\u984c\uff0c\u4e14\u9069\u5408\u7528\u65bc\u591a\u4efd\u4eba\u5de5\u6458\u8981\u7684\u8a55\u4f30\u3002\u6211\u5011\u4f7f \u7528\u4e86\u8f03\u666e\u904d\u7684 ROUGE-1(Unigram)\u3001ROUGE-2(Bigram)\u4ee5\u53ca ROUGE-L(Longest Common Subsequence, LCS)\u5206\u6578\uff0c\u5176\u4e2d ROUGE-1 \u662f\u8a55\u4f30\u81ea\u52d5\u6458\u8981\u7684\u8a0a\u606f\u91cf\uff0cROUGE-2 \u662f\u8a55\u4f30\u81ea \u52d5\u6458\u8981\u7684\u6d41\u66a2\u6027\uff0cROUGE-L \u662f\u6700\u9577\u5171\u540c\u5b57\uf905\u3002ROUGE-N \u662f\u81ea\u52d5\u6458\u8981\u548c\u4eba\u5de5\u6458\u8981\u4e4b\u9593 N \u9023\u8a5e(N-gram)\u7684\u53ec\u56de\u7387\uff0c\u4eba\u5de5\u6a19\u8a18\u7684\u53c3\u8003\u6458\u8981\u70ba\u4e00\u96c6\u5408 R\uff0c\u6545 ROUGE-N \u8a08\u7b97\u516c\u5f0f\u5982\u4e0b(Lin, 2004)\uff1a ROUGE \u2211 \u2211 \u2208 \u2208 \u2211 \u2211 \u2208 \u2208 (18) \u5176\u4e2d sum \u70ba\u4eba\u5de5\u6458\u8981\u96c6\u5408 R \u4e2d\u7684\u4efb\u4e00\u500b\u6458\u8981\uff0cN \u4ee3\u8868\u8a5e\u5f59\u4e32\u4e4b\u9023\u7e8c\u9577\u5ea6\uff0c\u800c \u662f N \u9023\u8a5e\u540c\u6642\u51fa\u73fe\u65bc\u81ea\u52d5\u6458\u8981\u8207\u4eba\u5de5\u6458\u8981\u7684\u6700\u5927\u6578\u91cf\u3002ROUGE-L \u7684\u8a08\u7b97\u65b9\u5f0f\u8207 ROUGE-N \u76f8\u4f3c\uff0c\u4f46\u524d\u8005\u50c5\u8003\u616e\u81ea\u52d5\u6458\u8981\u8207\u53c3\u8003\u6458\u8981\u7684\u6700\u9577\u5171\u540c\u5b57\u4e32\u3002 7. \u5be6\u9a57\u7d50\u679c 7.1 \u57fa\u790e\u6587\u4ef6\u6458\u8981\u4e4b\u5be6\u9a57\u7d50\u679c \u8868 2 \u70ba\u6e2c\u8a66\u96c6\u4e2d\u7684\u6587\u5b57\u6587\u4ef6(TD)\u8207\u8a9e\u97f3\u6587\u4ef6(SD)\u5728 ROUGE-1\u3001ROUGE-2 \u4ee5\u53ca ROUGE-L \u8a55\u4f30\u4e0b\u7684\u6458\u8981\u7d50\u679c\uff1b\u5728\u6b64\u6211\u5011\u9032\u884c\u5404\u5f0f\u7684\u57fa\u790e\u6458\u8981\u65b9\u6cd5\u7684\u6bd4\u8f03\uff0c\u5305\u542b\u524d\u5c0e\u65b9\u6cd5(LEAD)\u3001 \u5411\u91cf\u7a7a\u9593\u6a21\u578b(VSM)\u3001\u6700\u5927\u908a\u969b\u95dc\u806f\u6cd5(MMR)\u3001\u6f5b\u85cf\u8a9e\u610f\u5206\u6790(LSA)\u3001\u55ae\u9023\u8a9e\u8a00\u6a21\u578b(ULM)\u3001 \u95dc\u806f\u6a21\u578b(RM)\u3001Okapi", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u9918\u5f26\u76f8\u4f3c\u5ea6(Cosine Similarity)", "sec_num": "4.1" }, { "text": "The performance of an automatic speech recognition (ASR) system often deteriorates sharply due to the interference from varying environmental noise. As such, the development of effective and efficient robustness techniques has long been a challenging research subject in the ASR community. In this article, we attempt to obtain noise-robust speech features through modulation spectrum processing of the original speech features. 
To this end, we explore the use of nonnegative matrix factorization (NMF) and its extensions on the magnitude modulation spectra of speech features so as to distill the most important and noise-resistant information cues that can benefit the ASR performance. The main contributions include three aspects: 1) we leverage the notion of sparseness to obtain more localized and parts-based representations of the magnitude modulation spectra with fewer basis vectors; 2) the prior knowledge of the similarities among training utterances is taken into account as an additional constraint during the NMF derivation; and 3) the resulting encoding vectors of NMF are further normalized so as to further enhance their robustness of representation. A series of experiments conducted on the Aurora-2 benchmark task demonstrate that our methods can deliver remarkable improvements over the baseline NMF method and achieve performance on par with or better than several widely-used robustness methods. (Sun et al., 2007) \u3001\u5206\u983b\u5f0f\u8abf\u8b8a\u983b\u8b5c\u7d71\u8a08\u6b63\u898f\u5316\u6cd5(Sub-Band Modulation Spectrum Compensation) (Huang et al., 2009) \u8207 \u5176 \u5b83 \u4e00 \u7cfb \u5217 \u8cc7 \u6599 \u5c0e \u5411 (Data-Driven)\u4e4b\u6642\u9593\u5e8f\u5217\u6ffe\u6ce2\u5668\u6cd5 (Xiao et al., 2008; Hermansky & Morgan, 1994) ", "cite_spans": [ { "start": 1418, "end": 1436, "text": "(Sun et al., 2007)", "ref_id": "BIBREF83" }, { "start": 1495, "end": 1515, "text": "(Huang et al., 2009)", "ref_id": "BIBREF75" }, { "start": 1559, "end": 1578, "text": "(Xiao et al., 2008;", "ref_id": "BIBREF86" }, { "start": 1579, "end": 1604, "text": "Hermansky & Morgan, 1994)", "ref_id": "BIBREF72" } ], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "\u9664\u4e86\u8981\u6b63\u898f\u5316\u8abf\u8b8a\u983b\u8b5c\u5f37\u5ea6\u6210\u5206\u4e4b\u5e73\u5747\u503c\u5916\uff0c\u4e5f\u53ef\u540c\u6642\u6b63\u898f\u5316\u5176\u6a19\u6e96\u5dee (Huang et al., 2009 )\u3002\u5047\u8a2d\u7279\u5fb5\u5411\u91cf\u53c3\u6578\u4e4b\u5e73\u5747\u503c\u8207\u8b8a\u7570\u6578\u5728\u4e00\u822c\u74b0\u5883\u4e2d\u5206\u5e03\u7684\u6bd4\u4f8b\u63a5\u8fd1\u4e00\u81f4\u6642\uff0c\u6211\u5011 \u8abf\u8b8a\u983b\u8b5c\u5206\u89e3\u6280\u8853\u65bc\u5f37\u5065\u8a9e\u97f3\u8fa8\u8b58\u4e4b\u7814\u7a76 91", "cite_spans": [ { "start": 32, "end": 51, "text": "(Huang et al., 2009", "ref_id": "BIBREF75" } ], "ref_spans": [], "eq_spans": [], "section": "\u8abf \u8b8a \u983b \u8b5c \u5e73 \u5747 \u8207 \u8b8a \u7570 \u6578 \u6b63 \u898f \u5316 \u6cd5 (Spectral Mean and Variance Normalization, SMVN)", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u53ef\u4ee5\u540c\u6642\u5c0d\u5176\u5e73\u5747\u503c\u548c\u6a19\u6e96\u5dee\u4f86\u9032\u884c\u6b63\u898f\u5316\uff1a | |", "eq_num": "(3)" } ], "section": "\u8abf \u8b8a \u983b \u8b5c \u5e73 \u5747 \u8207 \u8b8a \u7570 \u6578 \u6b63 \u898f \u5316 \u6cd5 (Spectral Mean and Variance Normalization, SMVN)", "sec_num": "2.3" }, { "text": "\u5728\u5f0f 3\u4e2d\uff0c \u8207 \u70ba\u55ae\u4e00\u8a9e\u53e5\u7684\u8abf\u8b8a\u983b\u8b5c\u5f37\u5ea6\u6210\u5206\u4e4b\u5e73\u5747\u503c\u8207\u6a19\u6e96\u5dee\uff1b \u8207 \u70ba\u6240\u6709\u8a13 \u7df4\u8a9e\u53e5\u7684\u8abf\u8b8a\u983b\u8b5c\u5f37\u5ea6\u6210\u5206\u4e4b\u5e73\u5747\u503c\u8207\u6a19\u6e96\u5dee\uff0c 
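A minimal sketch of the modulation-spectrum NMF idea described in this abstract is given below, under our own simplifying assumptions: per-utterance processing, scikit-learn's NMF in place of the authors' update rules, and illustrative dimensionalities.

```python
# Sketch (not the released code): take the FFT of each cepstral-coefficient
# trajectory, factorise the magnitude modulation spectra as V ~= W H, rebuild
# the magnitudes from the factors, and resynthesise the feature stream with
# the original phase.
import numpy as np
from sklearn.decomposition import NMF

def nmf_modulation_processing(feats, n_basis=8):
    """feats: (T, D) matrix of MFCC-like features for one utterance."""
    spec = np.fft.rfft(feats, axis=0)              # modulation spectrum per dimension
    mag, phase = np.abs(spec), np.angle(spec)
    model = NMF(n_components=n_basis, init="nndsvda", max_iter=500)
    W = model.fit_transform(mag)                   # basis vectors
    H = model.components_                          # encoding matrix
    mag_hat = W @ H                                # reconstructed magnitude spectrum
    spec_hat = mag_hat * np.exp(1j * phase)        # keep the original phase
    return np.fft.irfft(spec_hat, n=feats.shape[0], axis=0)
```

The sparseness, similarity-constrained, and encoding-normalisation extensions listed as contributions above would modify the factorisation step itself; the plain NMF call here only fixes the overall data flow.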
\u4fbf\u662f\u66f4\u65b0\u904e\u5f8c\u7684\u8abf\u8b8a\u983b\u8b5c\u5f37\u5ea6\u6210\u5206\u3002", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u8abf \u8b8a \u983b \u8b5c \u5e73 \u5747 \u8207 \u8b8a \u7570 \u6578 \u6b63 \u898f \u5316 \u6cd5 (Spectral Mean and Variance Normalization, SMVN)", "sec_num": "2.3" }, { "text": "\u5229\u7528\u975e\u7dda\u6027\u7684\u8f49\u63db(Nonlinear Transformation)\uff0c\u4e0d\u50c5\u5c07\u8abf\u8b8a\u983b\u8b5c\u5f37\u5ea6\u6210\u5206\u4e4b\u5e73\u5747\u503c\u8207\u6a19\u6e96 \u5dee(\u6216\u8b8a\u7570\u6578)\u4f5c\u6b63\u898f\u5316\uff0c\u800c\u662f\u6574\u9ad4\u4e0a\u4f7f\u5f97\u8a13\u7df4\u8a9e\u53e5\u8207\u6e2c\u8a66\u8a9e\u53e5\u7684\u8abf\u8b8a\u983b\u8b5c\u5f37\u5ea6\u6210\u5206\u8da8\u65bc \u64c1\u6709\u540c\u4e00\u500b\u6a5f\u7387\u5206\u5e03\u51fd\u6578\uff0c\u6b63\u898f\u5316\u5168\u90e8\u968e\u5c64\u7684\u52d5\u5dee (Sun et al., 2007) \uff1a", "cite_spans": [ { "start": 117, "end": 135, "text": "(Sun et al., 2007)", "ref_id": "BIBREF83" } ], "ref_spans": [], "eq_spans": [], "section": "\u8abf\u8b8a\u983b\u8b5c\u7d71\u8a08\u5716\u7b49\u5316\u6cd5(Spectral Histogram Equalization, SHE)", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "| |", "eq_num": "(4)" } ], "section": "\u8abf\u8b8a\u983b\u8b5c\u7d71\u8a08\u5716\u7b49\u5316\u6cd5(Spectral Histogram Equalization, SHE)", "sec_num": "2.4" }, { "text": "\u5728\u5f0f 4 (Hadsell et al., 2006) \uff0c\u610f\u6307\u539f\u672c\u76f8\u9130\u7684\u8cc7\u6599\u5411\u91cf\u7d93\u904e\u964d\u7dad\u6216\u6295\u5f71\u5f8c\u4ecd\u7136\u7dad\u6301\u76f8\u9130\u8fd1\u3002\u8cc7\u6599\u5411\u91cf\u9593 \u7684\u9060\u8fd1\u95dc\u4fc2\uff0c\u6216\u5e7e\u4f55\u7d50\u69cb\u8cc7\u8a0a\u53ef\u4ee5\u7528\u3127\u6b0a\u91cd\u77e9\u9663 E \u8868\u793a\uff0c\u5176\u7dad\u5ea6\u662f\u7b49\u65bc\u8cc7\u6599\u5411\u91cf\u6578\u91cf\u6240 \u5f62\u6210\u7684\u65b9\u9663\u3002\u6700\u5f8c\u5c07\u6b0a\u91cd\u77e9\u9663 E \u7d0d\u5165\u6e1b\u640d\u51fd\u5f0f\u4e2d\uff0c\u505a\u70ba\u7de8\u78bc\u77e9\u9663\u7684\u6b63\u5247\u9805(Regularization ", "cite_spans": [ { "start": 5, "end": 27, "text": "(Hadsell et al., 2006)", "ref_id": "BIBREF71" } ], "ref_spans": [], "eq_spans": [], "section": "\u8abf\u8b8a\u983b\u8b5c\u7d71\u8a08\u5716\u7b49\u5316\u6cd5(Spectral Histogram Equalization, SHE)", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D V||WH \u662f\u85c9\u7531\u6b50\u6c0f\u8ddd\u96e2\u6240\u63d0\u51fa\u7684\u6e1b\u640d\u51fd\u6578\u3002\u7576\u91cd\u5efa\u8a0a\u865f\u039b\u8207\u539f\u59cb\u4fe1\u865f V \u76f8\u7b49\u6642\uff0c\u5247 D V||WH 0\u3002\u53e6\u4e00\u500b\u6e1b\u640d\u51fd\u6578\u5247\u662f\u57fa\u65bc KL \u6563\u5ea6(Kullback-Leibler Divergence)\uff1a D V||WH V ln V WH V WH ,", "eq_num": "(7)" } ], "section": "\u8abf\u8b8a\u983b\u8b5c\u7d71\u8a08\u5716\u7b49\u5316\u6cd5(Spectral Histogram Equalization, SHE)", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u7576\u539f\u59cb\u4fe1\u865f V", "eq_num": "(8)" } ], "section": "\u8abf\u8b8a\u983b\u8b5c\u7d71\u8a08\u5716\u7b49\u5316\u6cd5(Spectral Histogram Equalization, SHE)", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", 
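The two normalisation baselines of Sections 2.3 and 2.4 (SMVN, Eq. (3); SHE, Eq. (4)) can be sketched as follows. The global statistics and the reference sample matrix are assumed to be precomputed from the training utterances, and the quantile-mapping form of histogram equalisation is our simplification rather than the exact implementation evaluated here.

```python
# Minimal sketch of SMVN and SHE applied to the magnitude modulation spectrum
# of one utterance (rows = modulation-frequency bins, columns = feature dims).
import numpy as np

def smvn(mag, global_mean, global_std, eps=1e-8):
    """Spectral mean and variance normalisation (Eq. 3)."""
    utt_mean, utt_std = mag.mean(axis=0), mag.std(axis=0)
    return (mag - utt_mean) / (utt_std + eps) * global_std + global_mean

def she(mag, ref_values):
    """Spectral histogram equalisation (Eq. 4): map each component through its
    empirical CDF onto the training-set distribution given by ref_values."""
    out = np.empty_like(mag)
    for d in range(mag.shape[1]):
        ranks = np.argsort(np.argsort(mag[:, d]))        # empirical CDF ranks
        cdf = (ranks + 0.5) / mag.shape[0]
        out[:, d] = np.quantile(ref_values[:, d], cdf)   # inverse reference CDF
    return out
```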
"ref_id": "EQREF", "raw_str": "Term)\u3002 \u4ee4 , \u2026 , \u70ba\u7de8\u78bc\u77e9\u9663H\u7684\u7b2c j \u884c\uff0c \u53ef\u88ab\u8996\u70ba\u662f\u7b2c \u500b\u8cc7\u6599\u5411\u91cf\u76f8\u5c0d\u65bc\u65b0 \u7684\u57fa\u5e95\u77e9\u9663W\u4e4b\u65b0\u8868\u793a\u3002\u5728\u6b64\u6211\u5011\u8a0e\u8ad6\u8f03\u5e38\u898b\u7684\u6b50\u5f0f\u8ddd\u96e2\uff1a ,", "eq_num": "(16)" } ], "section": "\u8abf\u8b8a\u983b\u8b5c\u7d71\u8a08\u5716\u7b49\u5316\u6cd5(Spectral Histogram Equalization, SHE)", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u6b64\u8ddd\u96e2\u7528\u4f86\u6e2c\u91cf\u76f8\u5c0d\u65bc\u65b0\u7684\u57fa\u5e95\u77e9\u9663W\uff0c\u800c\u5169\u500b\u8cc7\u6599\u5411\u91cf \u8207 \u5728\u4f4e\u7dad\u5ea6\u7a7a\u9593\u4e2d\u8868\u793a\u4e4b \u9593\u7684\u5dee\u7570(Dissimilarity)\uff0c\u8ddd\u96e2\u51fd\u5f0f\u503c\u8d8a\u5927\u4ee3\u8868\u6b64\u5169\u500b\u8cc7\u6599\u5411\u91cf \u8207 \u5f7c\u6b64\u5dee\u7570\u8d8a\u5927\u3002 1 2 , E D E ,", "eq_num": "(17)" } ], "section": "\u8abf\u8b8a\u983b\u8b5c\u7d71\u8a08\u5716\u7b49\u5316\u6cd5(Spectral Histogram Equalization, SHE)", "sec_num": "2.4" }, { "text": "\u672c \u8ad6 \u6587 \u5be6 \u9a57 \u6240 \u63a1 \u7528 \u7684 \u8a9e \u6599 \u5eab \u662f Aurora-2 \uff0c \u5b83 \u662f \u7531 \u6b50 \u6d32 \u96fb \u4fe1 \u6a19 \u6e96 \u5354 \u6703 (European Telecommunications Standards Institute, ETSI)\u6240\u767c\u884c\u7684\u8a9e\u6599\u5eab(Hirsch & Pearce, 2000)\uff0c\u4ee5 \u7f8e\u570b\u6210\u5e74\u4eba\u7684\u8072\u97f3\u4f5c\u70ba\u9304\u97f3\u4f86\u6e90\uff0c\u5167\u5bb9\u662f\u9023\u7e8c\u7684\u82f1\u6587\u6578\u5b57\u7531 0(Zero)\u5230 9(Nine)\u8ddf Oh \u7b49\u767c \u97f3\u5b57\u8a5e\u3002\u8a9e\u6599\u5eab\u5167\u6709\u4e7e\u6de8\u53ca\u542b\u6709\u96dc\u8a0a\u7684\u8a9e\u97f3\uff0c\u96dc\u8a0a\u4e2d\u6709\u516b\u7a2e\u4e0d\u540c\u7684\u52a0\u6210\u6027\u96dc\u8a0a\u8207\u5169\u7a2e\u4e0d \u540c\u7684\u901a\u9053\u6548\u61c9\uff0c\u800c\u901a\u9053\u6548\u61c9\u662f\u4f7f\u7528\u570b\u969b\u96fb\u4fe1\u806f\u5408\u6703(ITU)\u6a19\u6e96\u4e2d\u7684 G.712 \u548c MIRS\u3002\u6839\u64da \u4e0d\u540c\u7684\u96dc\u8a0a\u5e72\u64fe\uff0c\u5206\u6210\u4e09\u500b\u6e2c\u8a66\u96c6\uff1aSet A\u3001Set B \u53ca Set C\u3002Set A \u7684\u8a9e\u97f3\u5206\u5225\u542b\u6709\u5730\u4e0b \u9435(Subway)\u3001\u4eba\u8072(Babble)\u3001\u6c7d\u8eca(Car)\u548c\u5c55\u89bd\u6703\u9928(Exhibition)\u7b49\u56db\u7a2e\u52a0\u6210\u6027\u96dc\u8a0a\u8207 G.712 \u901a\u9053\u6548\u61c9\uff1bSet B \u7684\u8a9e\u97f3\u5247\u5206\u5225\u542b\u6709\u9910\u5ef3(Restaurant)\u3001\u8857\u9053(Street)\u3001\u6a5f\u5834(Airport)\u548c\u706b\u8eca \u7ad9(Train Station)\u7b49\u56db\u7a2e\u52a0\u6210\u6027\u96dc\u8a0a\u8207 G.712 \u7684\u901a\u9053\u6548\u61c9\uff1bSet C \u5206\u5225\u52a0\u5165\u4e86\u5730\u4e0b\u9435 (Subway) \u8207\u8857\u9053(Street)\u5169\u7a2e\u96dc\u8a0a\u8207 MIRS \u901a\u9053\u6548\u61c9\u3002\u800c\u5176\u4e2d\u7684\u8a0a\u566a\u6bd4(SNR)\u5247\u6709\u4e03\u7a2e\uff0c \u70ba Clean\u300120dB\u300115dB\u300110dB\u30015dB\u30010dB \u548c-5dB\uff0c\u4e26\u4e14\u63d0\u4f9b\u4e8c\u7a2e\u8a13\u7df4\u6a21\u5f0f\uff1a\u4e7e\u6de8\u60c5\u5883 \u8a13\u7df4\u6a21\u5f0f(Clean-Condition Training)\u8207\u8907\u5408\u60c5\u5883\u8a13\u7df4\u6a21\u5f0f(Multi-Condition Training)\u3002\u672c\u7814 
\u7a76\u7684\u57fa\u790e\u5be6\u9a57\u7686\u4f7f\u7528\u4e7e\u6de8\u60c5\u5883\u8a13\u7df4\u6a21\u5f0f\uff0c\u6545\u5728\u8072\u5b78\u6a21\u578b\u8a13\u7df4\u6642\u4e26\u6c92\u6709\u4f7f\u7528\u5230\u4efb\u4f55\u52a0\u6210\u6027 \u96dc\u8a0a\u7684\u8cc7\u8a0a\u6216\u5167\u6db5\u3002 4.2 \u5be6\u9a57\u8a2d\u5b9a \u5728 \u672c \u8ad6 \u6587 \u4e2d \u7684 \u57fa \u790e \u5be6 \u9a57 \u662f \u63a1 \u7528 \u6885 \u723e \u5012 \u983b \u8b5c \u4fc2 \u6578 (Mel-frequency Cepstral", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u8abf\u8b8a\u983b\u8b5c\u7d71\u8a08\u5716\u7b49\u5316\u6cd5(Spectral Histogram Equalization, SHE)", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "4.7 \u4e09\u7a2e\u975e\u8ca0\u77e9\u9663\u5206\u89e3\u6cd5\u6539\u9032\u65b9\u6cd5\u4e4b\u7d50\u5408 \u63a5 \u8457 \u6211 \u5011 \u7d50 \u5408 \u975e \u5e73 \u6ed1 \u975e \u8ca0 \u77e9 \u9663 \u5206 \u89e3 \u6cd5 (NSNMF) \u4ee5 \u53ca \u57fa \u65bc \u5716 \u6b63 \u5247 \u5316 \u975e \u8ca0 \u77e9 \u9663 \u5206 \u89e3 \u6cd5", "eq_num": "(" } ], "section": "\u8abf\u8b8a\u983b\u8b5c\u7d71\u8a08\u5716\u7b49\u5316\u6cd5(Spectral Histogram Equalization, SHE)", "sec_num": "2.4" }, { "text": "Traditional way of conducting analyses of human behaviors is through manual observation. For example in couple therapy studies, human raters observe sessions of interaction between distressed couples and manually annotate the behaviors of each spouse using established coding manuals. Clinicians then analyze these annotated behaviors to understand the effectiveness of treatment that each couple receives. However, this manual observation approach is very time consuming, and the subjective nature of the annotation process can result in unreliable annotation. Our work aims at using machine learning approach to automate this process, and by using signal processing technique, we can bring in quantitative evidence of human behavior. Deep learning is the current state-of-art machine learning technique. This paper proposes to use stacked sparse autoencoder (SSAE) to reduce the dimensionality of the acoustic-prosodic features used in order to identify the key higher-level features. Finally, we use logistic regression (LR) to perform classification on recognition of high and low rating of six different codes. The method achieves an overall accuracy of 75% over 6 codes (husband's average accuracy of 74.9%, wife's average accuracy of 75%), compared to the previously-published study of 74.1% (husband's average accuracy of 75%, wife's average accuracy of 73.2%) (Black et al., 2013) , a total improvement of 0.9%. 
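For readers reproducing the baseline front end, an approximate 39-dimensional MFCC (plus delta and delta-delta) extraction can be sketched with librosa as below. This is only an illustration: the Aurora-2 reference setup uses the standard HTK-style configuration, and the file path and parameter values here are assumptions.

```python
# Illustrative MFCC front end at 8 kHz (roughly 32 ms windows, 10 ms hop),
# stacked with first- and second-order deltas to give 39 dimensions per frame.
import numpy as np
import librosa

def mfcc_39(path, sr=8000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    c = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                             n_fft=256, hop_length=80)
    d1 = librosa.feature.delta(c)
    d2 = librosa.feature.delta(c, order=2)
    return np.vstack([c, d1, d2]).T            # (frames, 39)
```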
Our proposed method achieves a higher classification rate by using much fewer number of features (10 times less than the previous work (Black et al., 2013) (O'Brian et al., 1994) \u3002\u4eba\u70ba\u884c\u70ba\u89c0\u5bdf\u76f8\u7576\u7684\u6210\u529f\u7814\u7a76\u5728\u89aa\u5bc6\u95dc\u4fc2 (Karney & Bradbury, 1995) (Gonzaga et al., 2007) \uff0c\u5373\u592b\u59bb\u7684\u884c\u70ba\u662f\u5f71\u97ff\u89aa\u5bc6\u95dc\u4fc2\u7a0b\u5ea6\u7684\u56e0\u7d20\u4e4b\u4e00\u3002\u7136\u800c\u7528\u65bc \u4eba\u70ba\u89c0\u5bdf\u884c\u70ba\u7684\u65b9\u5f0f\u5b58\u5728\u4e00\u4e9b\u56f0\u96e3\uff0c\u4e00\u65b9\u9762\u592a\u6d88\u8017\u6642\u9593\uff0c\u53e6\u4e00\u9762\u4e5f\u6d6a\u8cbb\u6210\u672c\u3002 \u5982\u679c\u80fd\u900f\u904e\u96fb\u8166\u5de5\u7a0b\u7684\u65b9\u5f0f\u4f86\u53d6\u4ee3\u4eba\u70ba\u89c0\u5bdf\u5c07\u5927\u5927\u63d0\u5347\u6548\u7387\uff0c\u900f\u904e\u4f4e\u5c64\u63cf\u8ff0\u6620\u5c04\u9ad8 \u5c64\u63cf\u8ff0\u4f86\u9810\u6e2c\u4eba\u985e\u884c\u70ba (Schuller et al., 2007) \uff0c\u9019\u9805\u7814\u7a76\u9818\u57df\u662f\u6b63\u5728\u4e0d\u65b7\u767c\u5c55\u7684\u4e00\u90e8\u5206\u3002 \u4eba\u985e\u884c\u70ba\u4fe1\u865f\u8655\u7406(Behavioral Signal Processing, BSP)\u76ee\u7684\u5728\u5e6b\u52a9\u9023\u63a5\u4fe1\u865f\u79d1\u5b78\u548c\u884c\u70ba\u8655 \u7406\u7684\u65b9\u6cd5\uff0c\u5efa\u7acb\u5728\u50b3\u7d71\u7684\u4fe1\u865f\u8655\u7406\u7814\u7a76\uff0c\u5982\u8a9e\u97f3\u8b58\u5225\uff0c\u9762\u624b\u90e8\u8ffd\u8e64\u7b49\u7b49\u3002\u76f8\u95dc\u986f\u8457 BSP \u7814\u7a76\u5df2\u767c\u7522\u65bc\u4ee5\u4eba\u70ba\u4e2d\u5fc3\u7684\u63d0\u53d6\u97f3\u983b\uff0c\u8996\u983b\u4fe1\u865f\uff0c\u4f86\u5206\u6790\u5be6\u969b\u4e0a\u4eba\u985e\u884c\u70ba\u6216\u662f\u60c5\u611f\u65b9\u9762 (Burkhardt et al., 2009; Devillers & Campbell, 2011) ", "cite_spans": [ { "start": 1369, "end": 1389, "text": "(Black et al., 2013)", "ref_id": "BIBREF88" }, { "start": 1556, "end": 1576, "text": "(Black et al., 2013)", "ref_id": "BIBREF88" }, { "start": 1577, "end": 1599, "text": "(O'Brian et al., 1994)", "ref_id": "BIBREF99" }, { "start": 1620, "end": 1645, "text": "(Karney & Bradbury, 1995)", "ref_id": "BIBREF98" }, { "start": 1646, "end": 1668, "text": "(Gonzaga et al., 2007)", "ref_id": "BIBREF93" }, { "start": 1779, "end": 1802, "text": "(Schuller et al., 2007)", "ref_id": "BIBREF103" }, { "start": 1962, "end": 1986, "text": "(Burkhardt et al., 2009;", "ref_id": "BIBREF89" }, { "start": 1987, "end": 2014, "text": "Devillers & Campbell, 2011)", "ref_id": "BIBREF92" } ], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "\u5f0f 3 The Author Index contains the primary entry for each item, listed under the first author's name. The primary entry includes the coauthors' names, the title of paper or other item, and its location, specified by the publication volume, number, and inclusive pages. The Subject Index contains entries describing the item under all appropriate subject headings, plus the first author's name, the publication volume, number, and inclusive pages. 
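A compact sketch of the SSAE-plus-logistic-regression pipeline summarised in this abstract is given below. It is written from the description only: toy data, a single sparse hidden layer rather than a full stack, and all hyper-parameters are assumptions, not the authors' settings.

```python
# Minimal numpy sketch: one sparse-autoencoder layer (sigmoid hidden units,
# KL sparsity penalty) compresses the acoustic-prosodic features; logistic
# regression then classifies high vs. low ratings of one behaviour code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sparse_autoencoder(X, n_hidden=32, rho=0.05, beta=3.0,
                             lr=0.1, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        A = sigmoid(X @ W1 + b1)                   # hidden codes
        Xhat = A @ W2 + b2                         # linear reconstruction
        rho_hat = A.mean(axis=0)
        # Gradients of 0.5*MSE + beta*KL(rho || rho_hat)
        d2 = (Xhat - X) / n
        sparse = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat)) / n
        d1 = (d2 @ W2.T + sparse) * A * (1 - A)
        W2 -= lr * A.T @ d2;  b2 -= lr * d2.sum(axis=0)
        W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(axis=0)
    return W1, b1

# Toy usage: 200 "speaker turns" x 60 acoustic-prosodic features, binary label.
X = np.random.default_rng(1).random((200, 60))
y = (X[:, 0] > 0.5).astype(int)
W1, b1 = train_sparse_autoencoder(X)
codes = sigmoid(X @ W1 + b1)                       # reduced representation
clf = LogisticRegression(max_iter=1000).fit(codes, y)
print("training accuracy:", clf.score(codes, y))
```

In the stacked setting, each additional layer would be trained the same way on the previous layer's codes before the classifier is fitted.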
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "The Error Analysis of \"Le\"ased on \"Chinese Learner Written Corpus\"; Tung, T.-Y., 20 1 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Teaching", "sec_num": null }, { "text": "The subscript \"p\" in \"n1 p \" indicates that \"n1 p \" is a pseudo nonterminal derived from the nonterminal \"n1\", which has four terminals \"2361\", \"\u679d\", \"\u7d05\" and \"\u7b46\". More details about pseudo nonterminal will be given at Section 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "However, the \"Algebra\" solution type in this case is useless to LFC because the body text has already mentioned how to solve it, and the LFC actually does not need STC to tell it how to solve the problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Prefixes \"IE-\" and \"LFC-\" denote that those operators are issued by IE and LFC, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://trec.nist.gov/data/tweets/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.ettoday.net/news/news-list.htm 4 https://github.com/fxsjy/jieba", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Prof. Wen-Lian Hsu for suggesting this research topic and making the original elementary school math corpus available to us, and Prof. Keh-Jiann Chen for providing the resources and supporting this project. Besides, our thanks should be extended to Dr. Yu-Ming Hsieh and Dr. Ming-Hong Bai for implementing the syntactic parser and the semantic composer, respectively. Also, we would like to thank Prof. Chin-Hui Lee for suggesting the solution type. Last, our thanks should also go to Ms. Su-Chu Lin for manually annotating the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null }, { "text": "Computational Linguistics and Chinese Language Processing Vol. 20, No. 2, December 2015, pp. 65- Walker, 1994; Robertson et al., 1996) Ducharme, R., Vincent, P., & Jauvin, C. (2003) ", "cite_spans": [ { "start": 14, "end": 96, "text": "Linguistics and Chinese Language Processing Vol. 20, No. 2, December 2015, pp. 65-", "ref_id": null }, { "start": 111, "end": 134, "text": "Robertson et al., 1996)", "ref_id": "BIBREF61" }, { "start": 135, "end": 181, "text": "Ducharme, R., Vincent, P., & Jauvin, C. (2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "Please send application to:The ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "To Register\uff1a", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Interactive generation and knowledge administration in MultiM\u00e9t\u00e9o", "authors": [ { "first": "J", "middle": [ "F" ], "last": "Allen", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Ninth International Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "300--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allen, J. F. (2014). Learning a Lexicon for Broad-Coverage Semantic Parsing. In the References Coch, J. (1998). Interactive generation and knowledge administration in MultiM\u00e9t\u00e9o. 
In Proceedings of the Ninth International Workshop on Natural Language Generation, 300-303.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Integrating natural language generation and hypertext to produce dynamic documents", "authors": [ { "first": "R", "middle": [], "last": "Dale", "suffix": "" }, { "first": "J", "middle": [], "last": "Oberlander", "suffix": "" }, { "first": "M", "middle": [], "last": "Milosavljevic", "suffix": "" }, { "first": "A", "middle": [], "last": "Knott", "suffix": "" } ], "year": 1998, "venue": "Interacting with Computers", "volume": "11", "issue": "2", "pages": "109--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dale, R., Oberlander, J., Milosavljevic, M., & Knott, A. (1998). Integrating natural language generation and hypertext to produce dynamic documents. Interacting with Computers, 11(2), 109-135.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Using natural-language processing to produce weather forecasts", "authors": [ { "first": "E", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "N", "middle": [], "last": "Driedger", "suffix": "" }, { "first": "R", "middle": [], "last": "Kittredge", "suffix": "" } ], "year": 1994, "venue": "IEEE Expert: Intelligent Systems and Their Applications", "volume": "9", "issue": "", "pages": "45--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goldberg, E., Driedger, N., & Kittredge, R. (1994). Using natural-language processing to produce weather forecasts. IEEE Expert: Intelligent Systems and Their Applications, 9(2), 45-53.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An Introduction to Functional Grammar", "authors": [ { "first": "M", "middle": [ "A K" ], "last": "Halliday", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Halliday, M. A. K. (1985). An Introduction to Functional Grammar. London, England: Edward Arnold.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Multilingual document production: From support for translating to support for authoring", "authors": [ { "first": "A", "middle": [], "last": "Hartley", "suffix": "" }, { "first": "C", "middle": [], "last": "Paris", "suffix": "" } ], "year": 1997, "venue": "Machine Translation", "volume": "12", "issue": "1", "pages": "109--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hartley, A., & Paris, C. (1997). Multilingual document production: From support for translating to support for authoring. Machine Translation, 12(1), 109-128.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Speech and Language Processing", "authors": [ { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Martin", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jurafsky, D., & Martin, J. H. (2000). Speech and Language Processing. New Jersey: Prentice Hall.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Functional Grammar", "authors": [ { "first": "M", "middle": [], "last": "Kay", "suffix": "" } ], "year": 1979, "venue": "BLS-79", "volume": "", "issue": "", "pages": "142--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kay, M. (1979). Functional Grammar. 
In BLS-79, Berkeley, CA, 142-158.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The practical value of n-grams in generation", "authors": [ { "first": "I", "middle": [], "last": "Langkilde", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Ninth International Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "248--255", "other_ids": {}, "num": null, "urls": [], "raw_text": "Langkilde, I., & Knight, K. (1998). The practical value of n-grams in generation. In Proceedings of the Ninth International Workshop on Natural Language Generation, Niagara-on-the-Lake, Ontario, Canada, 248-255.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Designing a Tag-Based Statistical Math Word Problem Solver with Reasoning and Explanation", "authors": [ { "first": "Y", "middle": [ "C" ], "last": "Lin", "suffix": "" }, { "first": "C", "middle": [ "C" ], "last": "Liang", "suffix": "" }, { "first": "K", "middle": [ "Y" ], "last": "Hsu", "suffix": "" }, { "first": "C", "middle": [ "T" ], "last": "Huang", "suffix": "" }, { "first": "S", "middle": [ "Y" ], "last": "Miao", "suffix": "" }, { "first": "W", "middle": [ "Y" ], "last": "Ma", "suffix": "" }, { "first": "L", "middle": [ "W" ], "last": "Ku", "suffix": "" }, { "first": "C", "middle": [ "J" ], "last": "Liau", "suffix": "" }, { "first": "K", "middle": [ "Y" ], "last": "Su", "suffix": "" } ], "year": 2015, "venue": "International Journal of Computational Linguistics and Chinese Language Processing (IJCLCLP)", "volume": "20", "issue": "2", "pages": "1--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, Y. C., Liang, C. C., Hsu, K. Y., Huang, C. T., Miao, S. Y., Ma, W. Y., Ku, L. W., Liau, C. J., & Su, K. Y. (2015). Designing a Tag-Based Statistical Math Word Problem Solver with Reasoning and Explanation. International Journal of Computational Linguistics and Chinese Language Processing (IJCLCLP), 20(2), 1-26.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Big Data -A Revolution That Will Transform How We Live, Work, and Think", "authors": [ { "first": "V", "middle": [], "last": "Mayer-Sch\u00f6nberger", "suffix": "" }, { "first": "K", "middle": [], "last": "Cukier", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mayer-Sch\u00f6nberger, V., & Cukier, K. (2013). Big Data -A Revolution That Will Transform How We Live, Work, and Think. Houghton Mifflin Harcourt Publishing Company.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Content selection in comparison generation", "authors": [ { "first": "M", "middle": [], "last": "Milosavljevic", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 6th European Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "72--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Milosavljevic, M. (1997). Content selection in comparison generation. 
In Proceedings of the 6th European Workshop on Natural Language Generation, Duisburg, Germany, 72-81.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A review of methods for automatic understanding of natural language mathematical problems", "authors": [ { "first": "A", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "U", "middle": [], "last": "Garain", "suffix": "" } ], "year": 2008, "venue": "Artif Intell Rev", "volume": "29", "issue": "2", "pages": "93--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mukherjee, A., & Garain, U. (2008). A review of methods for automatic understanding of natural language mathematical problems. Artif Intell Rev, 29(2), 93-122.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "DRAFTER: An interactive support tool for writing multilingual instructions", "authors": [ { "first": "C", "middle": [], "last": "Paris", "suffix": "" }, { "first": "K", "middle": [], "last": "Vander Linden", "suffix": "" } ], "year": 1996, "venue": "IEEE Computer", "volume": "29", "issue": "7", "pages": "49--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paris, C., & Vander Linden, K. (1996). DRAFTER: An interactive support tool for writing multilingual instructions. IEEE Computer, 29(7), 49-56.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatic document creation from software specifications", "authors": [ { "first": "C", "middle": [], "last": "Paris", "suffix": "" }, { "first": "K", "middle": [], "last": "Vander Linden", "suffix": "" }, { "first": "S", "middle": [], "last": "Lu", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 3rd Australian Document Computing Symposium (ADCS-98)", "volume": "", "issue": "", "pages": "26--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paris, C., Vander Linden, K., & Lu, S. (1998). Automatic document creation from software specifications. In Proceedings of the 3rd Australian Document Computing Symposium (ADCS-98), 26-31.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Types of knowledge required to personalise smoking cessation letters", "authors": [ { "first": "E", "middle": [], "last": "Reiter", "suffix": "" }, { "first": "R", "middle": [], "last": "Robertson", "suffix": "" }, { "first": "L", "middle": [], "last": "Osman", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Joint European Conference on Artificial Intelligence in Medicine and Medical Decision Making", "volume": "", "issue": "", "pages": "389--399", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reiter, E., Robertson, R., & Osman, L. (1999). Types of knowledge required to personalise smoking cessation letters. In Proceedings of the Joint European Conference on Artificial Intelligence in Medicine and Medical Decision Making. Springer-Verlag, 389-399.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Artificial Intelligence : A Modern Approach", "authors": [ { "first": "S", "middle": [ "J" ], "last": "Russell", "suffix": "" }, { "first": "P", "middle": [], "last": "Norvig", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Russell, S. J. & Norvig, P. (2009). 
Artificial Intelligence : A Modern Approach(3rd Edition), Prentice Hall.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The DARPA Machine Reading Program -Encouraging Linguistic and Reasoning Research with a Series of Reading Tasks", "authors": [ { "first": "S", "middle": [], "last": "Strassel", "suffix": "" }, { "first": "D", "middle": [], "last": "Adams", "suffix": "" }, { "first": "H", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "J", "middle": [], "last": "Herr", "suffix": "" }, { "first": "R", "middle": [], "last": "Keesing", "suffix": "" }, { "first": "D", "middle": [], "last": "Oblinger", "suffix": "" }, { "first": "H", "middle": [], "last": "Simpson", "suffix": "" }, { "first": "R", "middle": [], "last": "Schrag", "suffix": "" }, { "first": "J", "middle": [], "last": "Wright", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Strassel, S., Adams, D., Goldberg, H., Herr, J., Keesing, R., Oblinger, D., Simpson, H., Schrag, R., & Wright, J. (2010). The DARPA Machine Reading Program -Encouraging Linguistic and Reasoning Research with a Series of Reading Tasks. LREC 2010.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Plan based Integration of Natural Language and Graphics Generation", "authors": [ { "first": "W", "middle": [], "last": "Wahlster", "suffix": "" }, { "first": "E", "middle": [], "last": "Andr\u00e9", "suffix": "" }, { "first": "W", "middle": [], "last": "Finkler", "suffix": "" }, { "first": "H.-J", "middle": [], "last": "Profitlich", "suffix": "" }, { "first": "T", "middle": [], "last": "Rist", "suffix": "" } ], "year": 1993, "venue": "Artificial Intelligence", "volume": "63", "issue": "", "pages": "387--428", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wahlster, W., Andr\u00e9, E., Finkler, W., Profitlich, H.-J., & Rist, T. (1993). Plan based Integration of Natural Language and Graphics Generation. Artificial Intelligence, 63(1993) 387-428.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A Connectionist Language Generator", "authors": [ { "first": "N", "middle": [], "last": "Ward", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ward, N. (1994). A Connectionist Language Generator. New Jersey: Ablex Publishing Corporation.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Probabilistic latent semantic analysis", "authors": [ { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence", "volume": "", "issue": "", "pages": "289--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hofmann, T. (1999). Probabilistic latent semantic analysis. In Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence, 289-296.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Latent dirichlet allocation. the", "authors": [ { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of machine Learning research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent dirichlet allocation. 
the Journal of machine Learning research, 3, 993-1022.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A Survey on Topic Modeling", "authors": [ { "first": "M", "middle": [], "last": "Divya", "suffix": "" }, { "first": "K", "middle": [], "last": "Thendral", "suffix": "" }, { "first": "S", "middle": [], "last": "Chitrakala", "suffix": "" } ], "year": 2013, "venue": "International Journal of Recent Advances in Engineering & Technology (IJRAET)", "volume": "1", "issue": "", "pages": "57--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Divya, M., Thendral, K., & Chitrakala, S. (2013). A Survey on Topic Modeling. International Journal of Recent Advances in Engineering & Technology (IJRAET), 1, 57-61.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Optimizing semantic coherence in topic models", "authors": [ { "first": "D", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "H", "middle": [ "M" ], "last": "Wallach", "suffix": "" }, { "first": "E", "middle": [], "last": "Talley", "suffix": "" }, { "first": "M", "middle": [], "last": "Leenders", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "262--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mimno, D., Wallach, H. M., Talley, E., Leenders, M., & McCallum, A. (2011). Optimizing semantic coherence in topic models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 262-272.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A biterm topic model for short texts", "authors": [ { "first": "X", "middle": [], "last": "Yan", "suffix": "" }, { "first": "J", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Y", "middle": [], "last": "Lan", "suffix": "" }, { "first": "X", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 22nd international conference on World Wide Web", "volume": "", "issue": "", "pages": "1445--1456", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yan, X., Guo, J., Lan, Y., & Cheng, X. (2013). A biterm topic model for short texts. In Proceedings of the 22nd international conference on World Wide Web, Rio de Janeiro, Brazil, 1445-1456.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Text classification from labeled and unlabeled documents using EM", "authors": [ { "first": "K", "middle": [], "last": "Nigam", "suffix": "" }, { "first": "A", "middle": [ "K" ], "last": "Mccallum", "suffix": "" }, { "first": "S", "middle": [], "last": "Thrun", "suffix": "" }, { "first": "T", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2000, "venue": "Machine learning", "volume": "39", "issue": "2", "pages": "103--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nigam, K., McCallum, A. K., Thrun, S., & Mitchell, T. (2000). Text classification from labeled and unlabeled documents using EM. 
Machine learning, 39(2), 103-134.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Comparing twitter and traditional media using topic models", "authors": [ { "first": "W", "middle": [ "X" ], "last": "Zhao", "suffix": "" }, { "first": "J", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "J", "middle": [], "last": "Weng", "suffix": "" }, { "first": "J", "middle": [], "last": "He", "suffix": "" }, { "first": "E.-P", "middle": [], "last": "Lim", "suffix": "" }, { "first": "H", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2011, "venue": "Advances in Information Retrieval", "volume": "", "issue": "", "pages": "338--349", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhao, W. X., Jiang, J., Weng, J., He, J., Lim, E.-P., Yan, H., et al. (2011). Comparing twitter and traditional media using topic models. In Advances in Information Retrieval. ed: Springer, 338-349.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "BTM: Topic Modeling over Short Texts. Knowledge and Data Engineering", "authors": [ { "first": "X", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "X", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Y", "middle": [], "last": "Lan", "suffix": "" }, { "first": "J", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2014, "venue": "IEEE Transactions on", "volume": "26", "issue": "12", "pages": "2928--2941", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cheng, X., Yan, X., Lan, Y., & Guo, J. (2014). BTM: Topic Modeling over Short Texts. Knowledge and Data Engineering, IEEE Transactions on, 26(12), 2928-2941.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Word association norms, mutual information, and lexicography. Computational linguistics", "authors": [ { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" }, { "first": "P", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1990, "venue": "", "volume": "16", "issue": "", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Church, K. W., & Hanks, P. (1990). Word association norms, mutual information, and lexicography. Computational linguistics, 16(1), 22-29.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Rethinking LDA: Why priors matter", "authors": [ { "first": "H", "middle": [ "M" ], "last": "Wallach", "suffix": "" }, { "first": "D", "middle": [], "last": "Minmo", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2009, "venue": "Advances in Neural Information Processing Systems", "volume": "22", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wallach, H. M., Minmo, D., & McCallum, A. (2009). Rethinking LDA: Why priors matter. In Advances in Neural Information Processing Systems 22 (NIPS 2009).", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "An introduction to latent semantic analysis", "authors": [ { "first": "T", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "P", "middle": [ "W" ], "last": "Foltz", "suffix": "" }, { "first": "D", "middle": [], "last": "Laham", "suffix": "" } ], "year": 1998, "venue": "Discourse processes", "volume": "25", "issue": "", "pages": "259--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Landauer, T. K., Foltz, P. W., & Laham, D. (1998). An introduction to latent semantic analysis. 
Discourse processes, 25(2&3), 259-284.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Topic modeling: beyond bag-of-words", "authors": [ { "first": "H", "middle": [ "M" ], "last": "Wallach", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 23rd international conference on Machine learning", "volume": "", "issue": "", "pages": "977--984", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wallach, H. M. (2006). Topic modeling: beyond bag-of-words. In Proceedings of the 23rd international conference on Machine learning, 977-984.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Topical n-grams: Phrase and topic discovery, with an application to information retrieval", "authors": [ { "first": "X", "middle": [], "last": "Wang", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "X", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2007, "venue": "Seventh IEEE International Conference on Data Mining", "volume": "", "issue": "", "pages": "697--702", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, X., McCallum, A., & Wei, X. (2007). Topical n-grams: Phrase and topic discovery, with an application to information retrieval. In Seventh IEEE International Conference on Data Mining (ICDM 2007), 697-702.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Integrating topics and syntax", "authors": [ { "first": "T", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "M", "middle": [], "last": "Steyvers", "suffix": "" }, { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "J", "middle": [ "B" ], "last": "Tenenbaum", "suffix": "" } ], "year": 2004, "venue": "Advances in neural information processing systems", "volume": "17", "issue": "", "pages": "537--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Griffiths, T. L., Steyvers, M., Blei, D. M., & Tenenbaum, J. B. (2004). Integrating topics and syntax. In Advances in neural information processing systems 17, 537-544.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Pachinko allocation: DAG-structured mixture models of topic correlations", "authors": [ { "first": "W", "middle": [], "last": "Li", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 23rd international conference on Machine learning", "volume": "", "issue": "", "pages": "577--584", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, W. & McCallum, A. (2006). Pachinko allocation: DAG-structured mixture models of topic correlations. In Proceedings of the 23rd international conference on Machine learning, 577-584.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Mining topics in documents: standing on the shoulders of big data", "authors": [ { "first": "Z", "middle": [], "last": "Chen", "suffix": "" }, { "first": "B", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "1116--1125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Z. & Liu, B. (2014). Mining topics in documents: standing on the shoulders of big data. 
In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, New York, New York, USA, 1116-1125.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "PET: a statistical model for popular events tracking in social communities", "authors": [ { "first": "C", "middle": [ "X" ], "last": "Lin", "suffix": "" }, { "first": "B", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Q", "middle": [], "last": "Mei", "suffix": "" }, { "first": "J", "middle": [], "last": "Han", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "929--938", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, C. X., Zhao, B., Mei, Q., & Han, J. (2010). PET: a statistical model for popular events tracking in social communities. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, 929-938.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Characterizing Microblogs with Topic Models", "authors": [ { "first": "D", "middle": [], "last": "Ramage", "suffix": "" }, { "first": "S", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "D", "middle": [ "J" ], "last": "Liebling", "suffix": "" } ], "year": 2010, "venue": "Fourth International AAAI Conference on Weblogs and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramage, D., Dumais, S. T., & Liebling, D. J. (2010). Characterizing Microblogs with Topic Models. In Fourth International AAAI Conference on Weblogs and Social Media.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Short and tweet: experiments on recommending content from information streams", "authors": [ { "first": "J", "middle": [], "last": "Chen", "suffix": "" }, { "first": "R", "middle": [], "last": "Nairn", "suffix": "" }, { "first": "L", "middle": [], "last": "Nelson", "suffix": "" }, { "first": "M", "middle": [], "last": "Bernstein", "suffix": "" }, { "first": "E", "middle": [], "last": "Chi", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "1185--1194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, J., Nairn, R., Nelson, L., Bernstein, M., & Chi, E. (2010). Short and tweet: experiments on recommending content from information streams. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1185-1194.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Using twitter to recommend real-time topical news", "authors": [ { "first": "O", "middle": [], "last": "Phelan", "suffix": "" }, { "first": "K", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "B", "middle": [], "last": "Smyth", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the third ACM conference on Recommender systems", "volume": "", "issue": "", "pages": "385--388", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phelan, O., McCarthy, K., & Smyth, B. (2009). Using twitter to recommend real-time topical news. 
In Proceedings of the third ACM conference on Recommender systems, 385-388.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Learning to classify short and sparse text & web with hidden topics from large-scale data collections", "authors": [ { "first": "X.-H", "middle": [], "last": "Phan", "suffix": "" }, { "first": "L.-M", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "S", "middle": [], "last": "Horiguchi", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 17th international conference on World Wide Web", "volume": "", "issue": "", "pages": "91--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phan, X.-H., Nguyen, L.-M., & Horiguchi, S. (2008). Learning to classify short and sparse text & web with hidden topics from large-scale data collections. In Proceedings of the 17th international conference on World Wide Web, 91-100.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "The author-topic model for authors and documents", "authors": [ { "first": "M", "middle": [], "last": "Rosen-Zvi", "suffix": "" }, { "first": "T", "middle": [], "last": "Griffiths", "suffix": "" }, { "first": "M", "middle": [], "last": "Steyvers", "suffix": "" }, { "first": "P", "middle": [], "last": "Smyth", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th conference on Uncertainty in artificial intelligence", "volume": "", "issue": "", "pages": "487--494", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosen-Zvi, M., Griffiths, T., Steyvers, M., & Smyth, P. (2004). The author-topic model for authors and documents. In Proceedings of the 20th conference on Uncertainty in artificial intelligence, 487-494.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Transferring topical knowledge from auxiliary long texts for short text clustering", "authors": [ { "first": "O", "middle": [], "last": "Jin", "suffix": "" }, { "first": "N", "middle": [ "N" ], "last": "Liu", "suffix": "" }, { "first": "K", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Y", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Q", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 20th ACM international conference on Information and knowledge management", "volume": "", "issue": "", "pages": "775--784", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin, O., Liu, N. N., Zhao, K., Yu, Y., & Yang, Q. (2011). Transferring topical knowledge from auxiliary long texts for short text clustering. In Proceedings of the 20th ACM international conference on Information and knowledge management, 775-784.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Empirical study of topic modeling in twitter", "authors": [ { "first": "L", "middle": [], "last": "Hong", "suffix": "" }, { "first": "B", "middle": [ "D" ], "last": "Davison", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the First Workshop on Social Media Analytics", "volume": "", "issue": "", "pages": "80--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hong, L. & Davison, B. D. (2010). Empirical study of topic modeling in twitter. 
In Proceedings of the First Workshop on Social Media Analytics, 80-88.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "The collapsed Gibbs sampler in Bayesian computations with applications to a gene regulation problem", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Liu", "suffix": "" } ], "year": 1994, "venue": "Journal of the American Statistical Association", "volume": "89", "issue": "427", "pages": "958--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, J. S. (1994). The collapsed Gibbs sampler in Bayesian computations with applications to a gene regulation problem. Journal of the American Statistical Association, 89(427), 958-966.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Objective criteria for the evaluation of clustering methods", "authors": [ { "first": "W", "middle": [ "M" ], "last": "Rand", "suffix": "" } ], "year": 1971, "venue": "Journal of the American Statistical association", "volume": "66", "issue": "336", "pages": "846--850", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rand, W. M. (1971). Objective criteria for the evaluation of clustering methods. Journal of the American Statistical association, 66(336), 846-850.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora", "authors": [ { "first": "D", "middle": [], "last": "Ramage", "suffix": "" }, { "first": "D", "middle": [], "last": "Hall", "suffix": "" }, { "first": "R", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "1", "issue": "", "pages": "248--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramage, D., Hall, D., Nallapati, R., & Manning, C. D. (2009). Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, 1, 248-256.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "Q", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Le, Q. V., & Mikolov, T. (2014). Distributed representations of sentences and documents. In Proceedings of the International Conference on Machine Learning.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "ROUGE: a package for automatic evaluation of summaries", "authors": [ { "first": "C", "middle": [ "Y" ], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Workshop on Text Summarization Branches Out", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, C. Y. (2004). ROUGE: a package for automatic evaluation of summaries. 
In Proceedings of the Workshop on Text Summarization Branches Out.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "The automatic creation of literature abstracts", "authors": [ { "first": "H", "middle": [ "P" ], "last": "Luhn", "suffix": "" } ], "year": 1958, "venue": "IBM Journal of Research and Development", "volume": "2", "issue": "2", "pages": "159--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luhn, H. P. (1958). The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2), 159-165.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the International Conference on Learning Representations", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013a). Efficient estimation of word representations in vector space. In Proceedings of the International Conference on Learning Representations, 1-12.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the International Conference on Learning Representations", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G., & Dean, J. (2013b). Distributed representations of words and phrases and their compositionality. In Proceedings of the International Conference on Learning Representations, 1-9.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Contextual correlates of semantic similarity", "authors": [ { "first": "G", "middle": [], "last": "Miller", "suffix": "" }, { "first": "W", "middle": [], "last": "Charles", "suffix": "" } ], "year": 1991, "venue": "Language and Cognitive Processes", "volume": "6", "issue": "1", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller, G., & Charles, W. (1991). Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1), 1-28.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Automatic text summarization by paragraph extraction", "authors": [ { "first": "M", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "A", "middle": [], "last": "Singhal", "suffix": "" }, { "first": "C", "middle": [], "last": "Buckley", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the ACL/EACL Workshop on Intelligent Scalable Text Summarization", "volume": "", "issue": "", "pages": "39--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitra, M., Singhal, A., & Buckley, C. (1997). Automatic text summarization by paragraph extraction. 
In Proceedings of the ACL/EACL Workshop on Intelligent Scalable Text Summarization, 39-46.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Learning word embeddings efficiently with noise-contrastive estimation", "authors": [ { "first": "A", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "K", "middle": [], "last": "Kavukcuoglu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2265--2273", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mnih, A., & Kavukcuoglu, K. (2013). Learning word embeddings efficiently with noise-contrastive estimation. In Proceedings of the Annual Conference on Neural Information Processing Systems, 2265-2273.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Hierarchical probabilistic neural network language model", "authors": [ { "first": "F", "middle": [], "last": "Morin", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "246--252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morin, F., & Bengio, Y. (2005). Hierarchical probabilistic neural network language model. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, 246-252.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "A language modeling approach to information retrieval", "authors": [ { "first": "J", "middle": [ "M" ], "last": "Ponte", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "275--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ponte, J. M., & Croft, W. B. (1998). A language modeling approach to information retrieval. In Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval, 275-281.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Learning word representation considering proximity and ambiguity", "authors": [ { "first": "L", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Y", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Z", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Y", "middle": [], "last": "Rui", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1572--1578", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiu, L., Cao,Y., Nie, Z., & Rui, Y. (2014). Learning word representation considering proximity and ambiguity. In Proceedings of the AAAI Conference on Artificial Intelligence, 1572-1578.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Relevance weighting of search terms", "authors": [ { "first": "S", "middle": [ "E" ], "last": "Robertson", "suffix": "" }, { "first": "K", "middle": [ "S" ], "last": "Jones", "suffix": "" } ], "year": 1976, "venue": "Journal of the American Society for Information Science", "volume": "27", "issue": "3", "pages": "129--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robertson, S. E., & Jones, K. S. (1976). Relevance weighting of search terms. 
Journal of the American Society for Information Science, 27(3), 129-146.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Okapi at TREC-4", "authors": [ { "first": "S", "middle": [ "E" ], "last": "Robertson", "suffix": "" }, { "first": "S", "middle": [], "last": "Walker", "suffix": "" }, { "first": "K", "middle": [ "S" ], "last": "Jones", "suffix": "" }, { "first": "M", "middle": [], "last": "Hancock-Beaulieu", "suffix": "" }, { "first": "M", "middle": [], "last": "Gatford", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Fourth Text Retrieval Conference", "volume": "", "issue": "", "pages": "73--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robertson, S. E., Walker, S., Jones, K. S., Hancock-Beaulieu, M., & Gatford, M. (1996). Okapi at TREC-4. In Proceedings of the Fourth Text Retrieval Conference, 73-97.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Some simple effective approximations to the 2-poisson model for probabilistic weighted retrieval", "authors": [ { "first": "S", "middle": [ "E" ], "last": "Robertson", "suffix": "" }, { "first": "S", "middle": [], "last": "Walker", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "232--241", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robertson, S. E., & Walker, S. (1994). Some simple effective approximations to the 2-poisson model for probabilistic weighted retrieval. In Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval, 232-241.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Graph-of-word and TW-IDF: New approach to Ad hoc IR", "authors": [ { "first": "F", "middle": [], "last": "Rousseau", "suffix": "" }, { "first": "M", "middle": [], "last": "Vazirgiannis", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the International Conference on Conference on Information, Knowledge Management", "volume": "85", "issue": "", "pages": "59--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rousseau, F., & Vazirgiannis, M. (2013). Graph-of-word and TW-IDF: New approach to Ad hoc IR. In Proceedings of the International Conference on Conference on Information, Knowledge Management, 59-68. \u7bc0\u9304\u5f0f\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\u4f7f\u7528\u8868\u793a\u6cd5\u5b78\u7fd2\u6280\u8853 85", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Computer evaluation of indexing and text processing", "authors": [ { "first": "G", "middle": [], "last": "Salton", "suffix": "" }, { "first": "M", "middle": [ "E" ], "last": "Lesk", "suffix": "" } ], "year": 1968, "venue": "Journal of the ACM", "volume": "15", "issue": "1", "pages": "8--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salton, G., & Lesk, M. E. (1968). Computer evaluation of indexing and text processing. Journal of the ACM, 15(1), 8-36.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Multi-document summarization using cluster-based link analysis", "authors": [ { "first": "X", "middle": [], "last": "Wan", "suffix": "" }, { "first": "J", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "299--306", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wan, X., & Yang, J. (2008). 
Multi-document summarization using cluster-based link analysis. In Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval, 299-306.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "MATBN: a Mandarin Chinese broadcast news corpus", "authors": [ { "first": "H.-M", "middle": [], "last": "Wang", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J.-W", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "S.-S", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2005, "venue": "Journal of Computational Linguistics and Chinese Language Processing", "volume": "10", "issue": "2", "pages": "219--236", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, H.-M., Chen, B., Kuo, J.-W., & Cheng, S.-S. (2005). MATBN: a Mandarin Chinese broadcast news corpus. Journal of Computational Linguistics and Chinese Language Processing, 10(2), 219-236.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "Graph regularized nonnegative matrix factorization for data representation", "authors": [ { "first": "D", "middle": [], "last": "Cai", "suffix": "" }, { "first": "X", "middle": [], "last": "He", "suffix": "" }, { "first": "J", "middle": [], "last": "Han", "suffix": "" }, { "first": "T", "middle": [ "S" ], "last": "Huang", "suffix": "" } ], "year": 2011, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "33", "issue": "8", "pages": "1548--1560", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cai, D., He, X., Han, J., & Huang, T. S. (2011). Graph regularized nonnegative matrix factorization for data representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8), 1548-1560.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "Modulation spectrum factorization for robust speech recognition", "authors": [ { "first": "W.-Y", "middle": [], "last": "Chu", "suffix": "" }, { "first": "J.-W", "middle": [], "last": "Hung", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the APSIPA Annual Summit and Conference", "volume": "", "issue": "", "pages": "18--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chu, W.-Y., Hung, J.-W., & Chen, B. (2011). Modulation spectrum factorization for robust speech recognition. In Proceedings of the APSIPA Annual Summit and Conference, 18-21.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Cepstral analysis techniques for automatic speaker verification", "authors": [ { "first": "S", "middle": [], "last": "Furui", "suffix": "" } ], "year": 1981, "venue": "IEEE Transactions on Acoustic, Speech and Signal Processing", "volume": "29", "issue": "2", "pages": "254--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Furui, S. (1981). Cepstral analysis techniques for automatic speaker verification. IEEE Transactions on Acoustic, Speech and Signal Processing, 29(2), 254-272.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "On the origins of speech intelligibility in the real world", "authors": [ { "first": "S", "middle": [], "last": "Greenberg", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the ESCA-NATO Tutorial and Research Workshop on Robust Speech Recognition for Unknown Communication Channels", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greenberg, S. (1997). On the origins of speech intelligibility in the real world. 
In Proceedings of the ESCA-NATO Tutorial and Research Workshop on Robust Speech Recognition for Unknown Communication Channels.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "Dimensionality reduction by learning an invariant mapping", "authors": [ { "first": "R", "middle": [], "last": "Hadsell", "suffix": "" }, { "first": "S", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Y", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "1735--1742", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hadsell, R., Chopra, S., & LeCun Y. (2006). Dimensionality reduction by learning an invariant mapping. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1735-1742.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "RASTA processing of speech", "authors": [ { "first": "H", "middle": [], "last": "Hermansky", "suffix": "" }, { "first": "N", "middle": [], "last": "Morgan", "suffix": "" } ], "year": 1994, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "2", "issue": "4", "pages": "578--589", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hermansky, H., & Morgan, N. (1994). RASTA processing of speech. IEEE Transactions on Speech and Audio Processing, 2(4), 578-589.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Should Recognizers Have Ears? Speech Communication", "authors": [ { "first": "H", "middle": [], "last": "Hermansky", "suffix": "" } ], "year": 1998, "venue": "", "volume": "25", "issue": "", "pages": "3--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hermansky, H. (1998). Should Recognizers Have Ears? Speech Communication, 25(1-3), 3-27.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "The AURORA experimental framework for the performance evaluations of speech recognition systems under noisy conditions", "authors": [ { "first": "H", "middle": [ "G" ], "last": "Hirsch", "suffix": "" }, { "first": "D", "middle": [], "last": "Pearce", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the ISCA ITRW ASR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hirsch, H. G., & Pearce, D. (2000). The AURORA experimental framework for the performance evaluations of speech recognition systems under noisy conditions. In Proceedings of the ISCA ITRW ASR.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "A study of sub-band modulation spectrum compensation for robust speech recognition", "authors": [ { "first": "S.-Y", "middle": [], "last": "Huang", "suffix": "" }, { "first": "W.-H", "middle": [], "last": "Tu", "suffix": "" }, { "first": "J.-W", "middle": [], "last": "Hung", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the ROCLING XXI: Conference on Computational Linguistics and Speech Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, S.-Y., Tu, W.-H., & Hung, J.-W. (2009). A study of sub-band modulation spectrum compensation for robust speech recognition. 
In Proceedings of the ROCLING XXI: Conference on Computational Linguistics and Speech Processing.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "On the importance of various modulation frequencies for speech recognition", "authors": [ { "first": "N", "middle": [], "last": "Kanedera", "suffix": "" }, { "first": "T", "middle": [], "last": "Arai", "suffix": "" }, { "first": "H", "middle": [], "last": "Hermansky", "suffix": "" }, { "first": "M", "middle": [], "last": "Pavel", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the European Conference on Speech Communication and Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kanedera, N., Arai, T., Hermansky, H., & Pavel, M. (1997). On the importance of various modulation frequencies for speech recognition. In Proceedings of the European Conference on Speech Communication and Technology.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "Effective modulation spectrum factorization for robust speech recognition", "authors": [ { "first": "Y.-C", "middle": [], "last": "Kao", "suffix": "" }, { "first": "Y.-T", "middle": [], "last": "Wang", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Annual Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "2724--2728", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kao, Y.-C., Wang, Y.-T., & Chen, B. (2014). Effective modulation spectrum factorization for robust speech recognition. In Proceedings of the Annual Conference of the International Speech Communication Association, 2724-2728.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "Learning the parts of objects by non-negative matrix factorization", "authors": [ { "first": "D", "middle": [ "D" ], "last": "Lee", "suffix": "" }, { "first": "H", "middle": [ "S" ], "last": "Seung", "suffix": "" } ], "year": 1999, "venue": "Nature", "volume": "401", "issue": "", "pages": "788--791", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, D. D., & Seung, H. S. (1999). Learning the parts of objects by non-negative matrix factorization. Nature, 401, 788-791.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "Algorithms for Non-negative Matrix Factorization", "authors": [ { "first": "D", "middle": [ "D" ], "last": "Lee", "suffix": "" }, { "first": "H", "middle": [ "S" ], "last": "Seung", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "556--562", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, D. D., & Seung, H. S. (2000). Algorithms for Non-negative Matrix Factorization. In Proceedings of the Annual Conference on Neural Information Processing Systems, 556-562.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "Exploring the use of speech features and their corresponding distribution characteristics for robust speech recognition", "authors": [ { "first": "S.-H", "middle": [], "last": "Lin", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Y.-M", "middle": [], "last": "Yeh", "suffix": "" } ], "year": 2009, "venue": "IEEE Transactions on Audio, Speech and Language Processing", "volume": "17", "issue": "1", "pages": "84--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, S.-H., Chen, B., & Yeh, Y.-M. (2009). 
Exploring the use of speech features and their corresponding distribution characteristics for robust speech recognition. IEEE Transactions on Audio, Speech and Language Processing, 17(1), 84-94.", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "Evaluation of a noise-robust DSR front-end on Aurora databases", "authors": [ { "first": "D", "middle": [], "last": "Macho", "suffix": "" }, { "first": "L", "middle": [], "last": "Mauuary", "suffix": "" }, { "first": "B", "middle": [], "last": "No\u00e9", "suffix": "" }, { "first": "Y", "middle": [ "M" ], "last": "Cheng", "suffix": "" }, { "first": "D", "middle": [], "last": "Ealey", "suffix": "" }, { "first": "D", "middle": [], "last": "Jouvet", "suffix": "" }, { "first": "H", "middle": [], "last": "Kelleher", "suffix": "" }, { "first": "D", "middle": [], "last": "Pearce", "suffix": "" }, { "first": "F", "middle": [], "last": "Saadoun", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Annual Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Macho, D., Mauuary, L., No\u00e9, B., Cheng, Y. M., Ealey, D., Jouvet, D., Kelleher, H., Pearce, D., & Saadoun, F. (2002). Evaluation of a noise-robust DSR front-end on Aurora databases. In Proceedings of the Annual Conference of the International Speech Communication Association.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "Nonsmooth nonnegtive matrix facotorization (nsNMF)", "authors": [ { "first": "A", "middle": [], "last": "Pascual-Montano", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Carazo", "suffix": "" }, { "first": "K", "middle": [], "last": "Kochi", "suffix": "" }, { "first": "D", "middle": [], "last": "Lehmann", "suffix": "" }, { "first": "R", "middle": [ "D" ], "last": "Marqui", "suffix": "" } ], "year": 2006, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "28", "issue": "3", "pages": "403--415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascual-Montano, A., Carazo, J. M., Kochi, K., Lehmann, D., & Pascual-Marqui, R. D. (2006). Nonsmooth nonnegtive matrix facotorization (nsNMF). IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(3), 403-415.", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "Modulation Spectrum Equalization for robust Speech Recognition", "authors": [ { "first": "L.-C", "middle": [], "last": "Sun", "suffix": "" }, { "first": "C.-W", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "L.-S", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2007, "venue": "Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, L.-C., Hsu, C.-W., & Lee, L.-S. (2007). Modulation Spectrum Equalization for robust Speech Recognition. 
In Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding.", "links": null }, "BIBREF84": { "ref_id": "b84", "title": "Histogram equalization of speech representation for robust speech recognition", "authors": [ { "first": "A", "middle": [ "D L" ], "last": "Torre", "suffix": "" }, { "first": "A", "middle": [ "M J" ], "last": "Peinado", "suffix": "" }, { "first": "C", "middle": [], "last": "Segura", "suffix": "" }, { "first": "J", "middle": [ "L" ], "last": "Perez-Cordoba", "suffix": "" }, { "first": "M", "middle": [ "C" ], "last": "Benitez", "suffix": "" }, { "first": "A", "middle": [ "J" ], "last": "Rubio", "suffix": "" } ], "year": 2005, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "13", "issue": "3", "pages": "355--366", "other_ids": {}, "num": null, "urls": [], "raw_text": "Torre, A. D. L., Peinado, A. M. J., Segura, C., Perez-Cordoba, J. L., Benitez, M. C., & Rubio, A. J. (2005). Histogram equalization of speech representation for robust speech recognition. IEEE Transactions on Speech and Audio Processing, 13(3), 355-366.", "links": null }, "BIBREF85": { "ref_id": "b85", "title": "Segmental feature vector normalization for noise robust speech recognition", "authors": [ { "first": "A", "middle": [], "last": "Vikki", "suffix": "" }, { "first": "K", "middle": [], "last": "Laurila", "suffix": "" } ], "year": 1998, "venue": "Speech Communication", "volume": "25", "issue": "", "pages": "133--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vikki, A., & Laurila, K. (1998), Segmental feature vector normalization for noise robust speech recognition. Speech Communication, 25, 133-147.", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "Normalization of the speech modulation spectra for robust speech recognition", "authors": [ { "first": "X", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "E", "middle": [ "S" ], "last": "Chng", "suffix": "" }, { "first": "H", "middle": [], "last": "Li", "suffix": "" } ], "year": 2008, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "16", "issue": "8", "pages": "1662--1674", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiao, X., Chng, E. S., & Li, H. (2008). Normalization of the speech modulation spectra for robust speech recognition. IEEE Transactions on Speech and Audio Processing, 16(8), 1662-1674.", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "Scalable training of L 1-regularized log-linear Models", "authors": [ { "first": "G", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "J", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 24th international conference on Machine learning", "volume": "", "issue": "", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew, G., & Gao, J. (2007). Scalable training of L 1-regularized log-linear Models. In Proceedings of the 24th international conference on Machine learning. 
ACM, 33-40.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "Toward automating a human behavioral coding system for married couples' interactions using speech acoustic features", "authors": [ { "first": "M", "middle": [], "last": "Black", "suffix": "" }, { "first": "A", "middle": [], "last": "Katsamanis", "suffix": "" }, { "first": "B", "middle": [], "last": "Baucom", "suffix": "" }, { "first": "C", "middle": [], "last": "Lee", "suffix": "" }, { "first": "A", "middle": [], "last": "Lammert", "suffix": "" }, { "first": "A", "middle": [], "last": "Christensen", "suffix": "" }, { "first": "P", "middle": [], "last": "Georgiou", "suffix": "" }, { "first": "S", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2013, "venue": "Speech Communication", "volume": "55", "issue": "1", "pages": "1--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Black, M., Katsamanis, A., Baucom, B., Lee, C., Lammert, A., Christensen, A., Georgiou, P., & Narayanan, S. (2013). Toward automating a human behavioral coding system for married couples' interactions using speech acoustic features. Speech Communication, 55(1), 1-21.", "links": null }, "BIBREF89": { "ref_id": "b89", "title": "Detecting real life anger", "authors": [ { "first": "F", "middle": [], "last": "Burkhardt", "suffix": "" }, { "first": "T", "middle": [], "last": "Polzehl", "suffix": "" }, { "first": "J", "middle": [], "last": "Stegmann", "suffix": "" }, { "first": "F", "middle": [], "last": "Metze", "suffix": "" }, { "first": "R", "middle": [], "last": "Huber", "suffix": "" } ], "year": 2009, "venue": "Proc. IEEE Int'l Conf. Acous., Speech, and Signal Processing", "volume": "", "issue": "", "pages": "4761--4764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burkhardt, F., Polzehl, T., Stegmann, J., Metze, F., & Huber, R. (2009). Detecting real life anger. In Proc. IEEE Int'l Conf. Acous., Speech, and Signal Processing, 4761-4764.", "links": null }, "BIBREF90": { "ref_id": "b90", "title": "Couple and individual adjustment for 2 years following a randomized clinical trial comparing traditional versus integrative behavioral couple therapy", "authors": [ { "first": "A", "middle": [], "last": "Christensen", "suffix": "" }, { "first": "D", "middle": [ "C" ], "last": "Atkins", "suffix": "" }, { "first": "J", "middle": [], "last": "Yi", "suffix": "" }, { "first": "D", "middle": [ "H" ], "last": "Baucom", "suffix": "" }, { "first": "W", "middle": [ "H" ], "last": "George", "suffix": "" } ], "year": 2004, "venue": "J. Consult. Clin. Psychol", "volume": "72", "issue": "", "pages": "176--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christensen, A., Atkins, D.C., Yi, J., Baucom, D.H., & George, W.H. (2004). Couple and individual adjustment for 2 years following a randomized clinical trial comparing traditional versus integrative behavioral couple therapy. J. Consult. Clin. Psychol, 72, 176-191.", "links": null }, "BIBREF91": { "ref_id": "b91", "title": "Integrative behavioral couple therapy", "authors": [ { "first": "A", "middle": [], "last": "Christensen", "suffix": "" }, { "first": "N", "middle": [ "S" ], "last": "Jacobson", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Babcock", "suffix": "" } ], "year": 1995, "venue": "Clinical Handbook of Marital Therapy", "volume": "", "issue": "", "pages": "31--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christensen, A., Jacobson, N.S., & Babcock, J.C. (1995). Integrative behavioral couple therapy. In: Jacobsen, N.S., Gurman, A.S. 
(Eds.), Clinical Handbook of Marital Therapy, second ed. Guilford Press, New York, 31-64.", "links": null }, "BIBREF92": { "ref_id": "b92", "title": "Special issue of computer speech and language on affective speech in real-life interactions", "authors": [ { "first": "L", "middle": [], "last": "Devillers", "suffix": "" }, { "first": "N", "middle": [], "last": "Campbell", "suffix": "" } ], "year": 2011, "venue": "Comput. Speech Lang", "volume": "25", "issue": "", "pages": "1--3", "other_ids": {}, "num": null, "urls": [], "raw_text": "Devillers, L., & Campbell, N. (2011). Special issue of computer speech and language on affective speech in real-life interactions. Comput. Speech Lang., 25, 1-3.", "links": null }, "BIBREF93": { "ref_id": "b93", "title": "Similarity, convergence, and relationship satisfaction in dating and married couples", "authors": [ { "first": "G", "middle": [ "C" ], "last": "Gonzaga", "suffix": "" }, { "first": "B", "middle": [], "last": "Campos", "suffix": "" }, { "first": "T", "middle": [], "last": "Bradbury", "suffix": "" } ], "year": 2007, "venue": "J.Personal. Soc. Psychol", "volume": "93", "issue": "", "pages": "34--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gonzaga, G.C., Campos, B., & Bradbury, T. (2007). Similarity, convergence, and relationship satisfaction in dating and married couples. J.Personal. Soc. Psychol., 93, 34-48.", "links": null }, "BIBREF94": { "ref_id": "b94", "title": "Couples interaction rating system 2 (CIRS2)., University of California", "authors": [ { "first": "C", "middle": [], "last": "Heavey", "suffix": "" }, { "first": "D", "middle": [], "last": "Gill", "suffix": "" }, { "first": "A", "middle": [], "last": "Christensen", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heavey, C., Gill, D., & Christensen, A. (2002). Couples interaction rating system 2 (CIRS2)., University of California, Los Angeles. Los Angeles, CA, USA.", "links": null }, "BIBREF95": { "ref_id": "b95", "title": "Reducing the Dimensionality of Data with Neural Networks", "authors": [ { "first": "G", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2006, "venue": "Science", "volume": "", "issue": "5786", "pages": "504--507", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hinton, G. (2006). Reducing the Dimensionality of Data with Neural Networks. Science, 313(5786), 504-507.", "links": null }, "BIBREF96": { "ref_id": "b96", "title": "Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups", "authors": [ { "first": "G", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "L", "middle": [], "last": "Deng", "suffix": "" }, { "first": "D", "middle": [], "last": "Yu", "suffix": "" }, { "first": "G", "middle": [], "last": "Dahl", "suffix": "" }, { "first": "A", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "N", "middle": [], "last": "Jaitly", "suffix": "" } ], "year": 2012, "venue": "IEEE Signal Process. Mag", "volume": "29", "issue": "6", "pages": "82--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hinton, G., Deng, L., Yu, D., Dahl,G., Mohamed, A., Jaitly, N., et al. (2012). Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups. IEEE Signal Process. 
Mag., 29(6), 82-97.", "links": null }, "BIBREF97": { "ref_id": "b97", "title": "Couples interaction study: Social support interaction rating system", "authors": [ { "first": "J", "middle": [], "last": "Jones", "suffix": "" }, { "first": "A", "middle": [], "last": "Christensen", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jones, J., & Christensen, A. (1998). Couples interaction study: Social support interaction rating system. University of California, Los Angeles. Los Angeles, CA, USA.", "links": null }, "BIBREF98": { "ref_id": "b98", "title": "The longitudinal course of marital quality and stability: A review of theory, methods, and research", "authors": [ { "first": "B", "middle": [ "R" ], "last": "Karney", "suffix": "" }, { "first": "T", "middle": [ "N" ], "last": "Bradbury", "suffix": "" } ], "year": 1995, "venue": "Psychol. Bull", "volume": "118", "issue": "", "pages": "3--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karney, B.R., & Bradbury, T.N. (1995). The longitudinal course of marital quality and stability: A review of theory, methods, and research. Psychol. Bull, 118, 3-34.", "links": null }, "BIBREF99": { "ref_id": "b99", "title": "Reliability and diagnostic efficacy of parent's reports regarding children's exposure to martial aggression", "authors": [ { "first": "M", "middle": [], "last": "O'brian", "suffix": "" }, { "first": "R", "middle": [ "S" ], "last": "John", "suffix": "" }, { "first": "G", "middle": [], "last": "Margolin", "suffix": "" }, { "first": "O", "middle": [], "last": "Erel", "suffix": "" } ], "year": 1994, "venue": "Violence and Victims", "volume": "9", "issue": "1", "pages": "45--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "O'Brian, M.,John, R.S., Margolin, G., & Erel, O. (1994). Reliability and diagnostic efficacy of parent's reports regarding children's exposure to martial aggression. Violence and Victims, 9(1), 45-62.", "links": null }, "BIBREF100": { "ref_id": "b100", "title": "Distributed machine learning and sparse representations", "authors": [ { "first": "O", "middle": [], "last": "Obst", "suffix": "" } ], "year": 2014, "venue": "Neurocomputing", "volume": "124", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Obst, O. (2014). Distributed machine learning and sparse representations. Neurocomputing, 124, 1.", "links": null }, "BIBREF101": { "ref_id": "b101", "title": "Machine recognition of Hand written Characters using neural networks", "authors": [ { "first": "Y", "middle": [], "last": "Perwej", "suffix": "" }, { "first": "A", "middle": [], "last": "Chaturvedi", "suffix": "" } ], "year": 2011, "venue": "International Journal of Computer Applications", "volume": "14", "issue": "2", "pages": "6--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Perwej, Y., & Chaturvedi, A. (2011). Machine recognition of Hand written Characters using neural networks. International Journal of Computer Applications, 14(2), 6-9.", "links": null }, "BIBREF102": { "ref_id": "b102", "title": "The layer-wise method and the backpropagation hybrid approach to learning a feedforward neural network", "authors": [ { "first": "N", "middle": [], "last": "Rubanov", "suffix": "" } ], "year": 2000, "venue": "IEEE Trans. Neural Netw", "volume": "11", "issue": "2", "pages": "295--305", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rubanov, N. (2000). 
The layer-wise method and the backpropagation hybrid approach to learning a feedforward neural network. IEEE Trans. Neural Netw., 11(2), 295-305.", "links": null }, "BIBREF103": { "ref_id": "b103", "title": "The relevance of feature type for automatic classification of emotional user states: Low level descriptors and functionals", "authors": [ { "first": "B", "middle": [], "last": "Schuller", "suffix": "" }, { "first": "A", "middle": [], "last": "Batliner", "suffix": "" }, { "first": "D", "middle": [], "last": "Seppi", "suffix": "" }, { "first": "S", "middle": [], "last": "Steidl", "suffix": "" }, { "first": "T", "middle": [], "last": "Vogt", "suffix": "" }, { "first": "J", "middle": [], "last": "Wagner", "suffix": "" } ], "year": 2007, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "2253--2256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schuller, B., Batliner, A., Seppi, D., Steidl, S., Vogt, T., Wagner, J., et al. (2007). The relevance of feature type for automatic classification of emotional user states: Low level descriptors and functionals. In Proc. Interspeech, Antwerp, Belgium, 2253-2256.", "links": null }, "BIBREF104": { "ref_id": "b104", "title": "Comparison of Regularization Methods for ImageNet Classification with Deep Convolutional Neural Networks. AASRI Procedia", "authors": [ { "first": "E", "middle": [], "last": "Smirnov", "suffix": "" }, { "first": "D", "middle": [], "last": "Timoshenko", "suffix": "" }, { "first": "S", "middle": [], "last": "Andrianov", "suffix": "" } ], "year": 2014, "venue": "", "volume": "6", "issue": "", "pages": "89--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smirnov, E., Timoshenko, D., & Andrianov, S. (2014). Comparison of Regularization Methods for ImageNet Classification with Deep Convolutional Neural Networks. AASRI Procedia, 6, 89-94.", "links": null }, "BIBREF105": { "ref_id": "b105", "title": "The individuals listed below are reviewers of this journal during the year of 2015. The IJCLCLP Editorial Board extends its gratitude to these volunteers for their important contributions to this publication, to our association, and to the profession", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "The individuals listed below are reviewers of this journal during the year of 2015. The IJCLCLP Editorial Board extends its gratitude to these volunteers for their important contributions to this publication, to our association, and to the profession. Guo-Wei Bian", "links": null }, "BIBREF106": { "ref_id": "b106", "title": "Holding the Republic of China Computational Linguistics Conference (ROCLING) annually. 2. Facilitating and promoting academic research, seminars, training, discussions, comparative evaluations and other activities related to computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holding the Republic of China Computational Linguistics Conference (ROCLING) annually. 2. 
Facilitating and promoting academic research, seminars, training, discussions, comparative evaluations and other activities related to computational linguistics.", "links": null }, "BIBREF107": { "ref_id": "b107", "title": "Collecting information and materials on recent developments in the field of computational linguistics, domestically and internationally", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collecting information and materials on recent developments in the field of computational linguistics, domestically and internationally.", "links": null }, "BIBREF108": { "ref_id": "b108", "title": "Publishing pertinent journals, proceedings and newsletters", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Publishing pertinent journals, proceedings and newsletters.", "links": null }, "BIBREF109": { "ref_id": "b109", "title": "Setting of the Chinese-language technical terminology and symbols related to computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Setting of the Chinese-language technical terminology and symbols related to computational linguistics.", "links": null }, "BIBREF110": { "ref_id": "b110", "title": "Maintaining contact with international computational linguistics academic organizations", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maintaining contact with international computational linguistics academic organizations.", "links": null }, "BIBREF111": { "ref_id": "b111", "title": "Dealing with various other matters related to the development of computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dealing with various other matters related to the development of computational linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "Extractive Spoken Document Summarization with Representation Learning Techniques].................................................................................\u2026.", "uris": null }, "FIGREF1": { "type_str": "figure", "num": null, "text": "The block diagram of the proposed Math Word Problem Solver.", "uris": null }, "FIGREF2": { "type_str": "figure", "num": null, "text": "A simple problem and its essential corresponding logic forms.", "uris": null }, "FIGREF3": { "type_str": "figure", "num": null, "text": "An example for deriving new facts.", "uris": null }, "FIGREF4": { "type_str": "figure", "num": null, "text": ")=theme({lose|\u5931\u53bb}); lose\u2192theme({give| \u7d66})=possession({lose|\u5931\u53bb}); obtain\u2192theme({give| \u7d66})=possession({obtain|\u5f97\u5230}); obtain\u2192target({give| \u7d66})=theme({obtain|\u5f97\u5230}); receive\u2192target({give| \u7d66})=agent({receive|\u6536\u53d7}); receive\u2192theme({give| \u7d66})=possession({receive|\u6536\u53d7})", "uris": null }, "FIGREF5": { "type_str": "figure", "num": null, "text": "The conflation events of the verb \"give (\u7d66)\".", "uris": null }, "FIGREF6": { "type_str": "figure", "num": null, "text": "(a) Math Word Problem Solver Diagram (b) Problem Resolution Diagram The block diagram of the proposed Math Word Problem Solver.", "uris": 
null }, "FIGREF7": { "type_str": "figure", "num": null, "text": "a). Facts Generation Figure 4(b). Reasoning Chain (represented as an and EG Tree Builder Explanation Tree for illustration) Figure 4(c). Function Word Insertion & Ordering Module, serving as the Surface Realizer. It shows how surface realization is done with pre-specified function words (circled by ellipses) and extracted slot-fillers (enclosed by diamond for operator, and rectangle for quantities).", "uris": null }, "FIGREF8": { "type_str": "figure", "num": null, "text": "(a) Facts Generated from the Body Text. (b) The associated Reasoning Chain, where \"G#\" shows the facts grouped within the same sentence. (c) Explanation texts generated by the TG for this example (labeled as G1~G4). Except those ellipses which symbolize pre-specified function words, other shapes denote extracted slot-fillers. Furthermore, Diamond symbolizes OP_node while Rectangle symbolizes Quan_node.", "uris": null }, "FIGREF9": { "type_str": "figure", "num": null, "text": "Explanation Tree for Discourse Planning, where S2 means that those facts are from the 2 nd body sentence.", "uris": null }, "FIGREF10": { "type_str": "figure", "num": null, "text": "a). Surface Realizer -OP_SUM template Figure 6(b). Benchmark for the output of Surface Realizer", "uris": null }, "FIGREF11": { "type_str": "figure", "num": null, "text": "The template for OP_SUM (\"SUM\" inFigure 6 (a), and the explanation sentences for Sample-1)", "uris": null }, "FIGREF12": { "type_str": "figure", "num": null, "text": "Figure 7(a)", "uris": null }, "FIGREF13": { "type_str": "figure", "num": null, "text": "The template for OP_MUL (\"MUL\" inFigure 7 (a)) and the explanation sentences for Sample-1.", "uris": null }, "FIGREF14": { "type_str": "figure", "num": null, "text": "The template for OP_COMMON_DIV (\"CMN_DIV\" in Figure 8 (a)) and the explanation sentences for Sample-2. Explanation Generation for a Math Word Problem Solver 41 [Sample-3] \u4e00\u8258\u8f2a\u8239 20 \u5206\u9418\u53ef\u4ee5\u884c\u99db 25 \u516c\u91cc\uff0c2.5 \u5c0f\u6642\u53ef\u4ee5\u884c\u99db\u591a\u5c11\u516c\u91cc\uff1f (A ship can travel 25 km in 20 minutes. How many kilometers can it travel for 2", "uris": null }, "FIGREF15": { "type_str": "figure", "num": null, "text": "The number of documents in each class", "uris": null }, "FIGREF17": { "type_str": "figure", "num": null, "text": "The Classification Results on Twitter2011 dataset", "uris": null }, "FIGREF19": { "type_str": "figure", "num": null, "text": "The Classification Results on ETtoday dataset Table 4. 
The top-10 topic words of the \"baseball\" topic in ETtoday News Title dataset Top-10 Topic words LDA \u4e2d\u8077 (baseball game in Taiwan), \u6708 (month), \u842c (ten thousand), \u5e74 (year), \u5927 (big), \u5143 (dollars), \u5433\u8a8c\u63da (a politician), \u81fa\u5317 (Taipei), \u81fa\u7063 (Taiwan), \u5e74\u7d42 (Year-end bonuses) Mix \u4e2d\u8077, \u65e5 (day), \u81fa\u7063, \u5927, \u82f1\u96c4 (hero), \u806f\u76df (league baseball), \u4e16\u754c (world), \u68d2\u7403 (baseball), \u4e0d (no), \u6311\u6230 (challenge) BTM \u4e2d \u8077 , \u7fa9 \u5927 (a baseball team), \u5144 \u5f1f (a baseball team), MLB, \u7d71 \u4e00 (a baseball team), \u5e74, \u6843\u733f (a baseball team), \u842c, \u7345 (a baseball team), \u4eba (human) PMI-\uf062-BTM \u4e2d \u8077 , MLB, \u5144 \u5f1f , \u65e5 \u8077 (baseball game in Japan), \u68d2 \u7403 , \u6843 \u733f , \u5148 \u767c (Starting Pitcher), \u7e3d\u51a0\u8ecd (champion), \u9673\u5049\u6bb7 (a Taiwanese professional baseball pitcher), \u7d71\u4e00 (a baseball team)", "uris": null }, "FIGREF21": { "type_str": "figure", "num": null, "text": "\uff0c\u4ee5\u589e\u9032\u8a13\u7df4\u904e\u7a0b\u4e2d\u53c3\u6578\u4f30\u6e2c\u7684\u6548\u80fd\u3002 3.2 \u8a9e\u53e5\u8868\u793a\u6cd5(Sentence Representation) \u96d6\u7136\u8a5e\u8868\u793a\u6cd5\u5df2\u88ab\u5ee3\u6cdb\u4f7f\u7528\uff0c\u4f46\u8a31\u591a\u81ea\u7136\u8a9e\u8a00\u8655\u7406\u7684\u76f8\u95dc\u4efb\u52d9\u6240\u9700\u8981\u7684\u662f\u8a9e\u53e5\u7684\u8868\u793a\u6cd5\u3002 \u5ef6\u7e8c\u8a5e\u8868\u793a\u6cd5\u7684\u57fa\u672c\u6a21\u578b\u67b6\u69cb\u8207\u7cbe\u795e\uff0c\u5b78\u8005 Le \u8207 Mikolov \u63d0\u51fa\u5169\u7a2e\u5b78\u7fd2\u8a9e\u53e5\u8868\u793a\u6cd5\u7684\u6a21 \u578b\uff0c\u5206\u5225\u662f\u5206\u6563\u5f0f\u5132\u5b58\u6a21\u578b\u8207\u5206\u6563\u5f0f\u8a5e\u888b\u6a21\u578b(Le & Mikolov, 2014)\u3002 A. \u5206\u6563\u5f0f\u5132\u5b58\u6a21\u578b(Distributed Memory Model of Paragraph Vector, PV-DM) \u5206\u6563\u5f0f\u5132\u5b58\u6a21\u578b(PV-DM)\u985e\u4f3c\u65bc\u9023\u7e8c\u578b\u8a5e\u888b\u6a21\u578b\u3002PV-DM \u540c\u6a23\u4ee5\u6700\u5927\u5316\u76ee\u6a19\u4e2d\u9593\u8a5e\u8f38\u51fa \u7684\u6a5f\u7387\u70ba\u76ee\u6a19\uff0c\u5176\u4e3b\u8981\u5dee\u7570\u70ba\uff1a(1)\u8a13\u7df4\u904e\u7a0b\u4e2d\u65bc\u8f38\u5165\u5c64(Input Layer)\u5f15\u5165\u4e00\u500b\u6bb5\u843d\u7de8\u865f (Paragraph ID)\uff0c\u4ea6\u5373\u8a13\u7df4\u8a9e\u6599\u4e2d\u6bcf\u4e00\u8a9e\u53e5\u7686\u6709\u4e00\u500b\u552f\u4e00\u7684\u6bb5\u843d\u7de8\u865f\u3002\u6bb5\u843d\u7de8\u865f\u8207\u4e00\u822c\u7684", "uris": null }, "FIGREF22": { "type_str": "figure", "num": null, "text": "Yao-Ting Sung, and Jia-Fei Hong. Automatically Detecting Syntactic Errors in Sentences Writing by Learners of Chinese as a Liang, Kuang-Yi Hsu, Chien-Tsung Huang, Shen-Yun Miao, Wei-Yun Ma, Lun-Wei Ku, Churn-Jung Liau and Keh-Yih Su. Designing a Tag-Based Statistical Math Word Problem Solver with Reasoning and Explanation; 20(2): 1-26 see Huang, ChienTsung, 20(2): 27-Ya-Ming Shen, and Chia-Hou Wu. Cross-Linguistic Error Types of Misused Chinese Based on Learners' Corpora; 20(1Howard Hao-Jan Chen, and Hui-Mei Yang. 
The Error Analysis of \"Le\" Based on \"Chinese Learner Written Corpus\"; 20(1A Study on Chinese Spelling Check Using Confusion Sets and N-gram Statistics; Lin, C.-J., 20(1): 23-48 Chinese Spelling Corretion HANSpeller: A Unified Framework for Chinese Spelling Correction; Xiong, J., 20(1): 1-22", "uris": null }, "TABREF0": { "text": "\u5c0f\u8c6a \u6709 62 \u5f35 \u8cbc\u7d19 \uff0c \u54e5\u54e5 \u518d \u7d66 \u4ed6 56 \u5f35 \uff0c \u5c0f\u8c6a \u73fe\u5728 \u5171 \u6709 \u5e7e\u5f35 \u8cbc\u7d19 \uff1f (Xiaohao had 64 stickers, and his brother gave him 56 more. How many stickers does Xiahao", "num": null, "type_str": "table", "html": null, "content": "
have now?)
\u5c0f\u8c6a\u6709 62 \u5f35\u8cbc\u7d19\uff0c\u54e5\u54e5\u518d\u7d66\u4ed6 56 \u5f35\uff0c\u5c0f\u8c6a\u73fe\u5728\u5171\u6709\u5e7e\u5f35\u8cbc\u7d19\uff1f
{\u6709(2):
theme={[x1]\u5c0f\u8c6a(1)},
range={\u8cbc\u7d19
" }, "TABREF2": { "text": "lists the utilities provided by the IE. The first one, as we have just described, returns the sum of the values of FOL function instances which can be unified with the function argument and satisfy the condition argument. The Addition utility simply returns the value of \"value 1 +value 2 \", where value i is either a constant number, or an FOL function value, or a value returned by a utility. Likewise, Subtraction and Multiplication utilities return", "num": null, "type_str": "table", "html": null, "content": "
" }, "TABREF3": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
Sum(function, condition) = value          CommonDiv(value1, value2) = value
Addition(value1, value2) = value          FloorDiv(value1, value2) = value
Subtraction(value1, value2) = value       CeilDiv(value1, value2) = value
Difference(value1, value2) = value        Surplus(value1, value2) = value
Multiplication(value1, value2) = value
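As a rough illustration of how utilities of this kind could be realized, the following Python sketch shows a Sum utility that aggregates the values of matching FOL function instances, with the arithmetic utilities layered beside it. The fact representation, the simplified matching test, and the exact semantics of Difference, CommonDiv, and Surplus are assumptions for illustration, not the solver's actual code.

import math
from typing import Callable, Dict, List

Fact = Dict[str, object]   # assumed shape, e.g. {"pred": "有", "value": 62}

def Sum(facts: List[Fact], pred: str, condition: Callable[[Fact], bool]) -> float:
    # Sum(function, condition): add the values of all function instances that
    # unify with the queried predicate and satisfy the condition argument.
    return sum(f["value"] for f in facts if f["pred"] == pred and condition(f))

def Addition(v1, v2): return v1 + v2
def Subtraction(v1, v2): return v1 - v2
def Multiplication(v1, v2): return v1 * v2
def Difference(v1, v2): return abs(v1 - v2)          # assumed: absolute difference
def CommonDiv(v1, v2): return math.gcd(int(v1), int(v2))  # assumed: greatest common divisor
def FloorDiv(v1, v2): return v1 // v2
def CeilDiv(v1, v2): return -(-v1 // v2)              # ceiling division
def Surplus(v1, v2): return v1 % v2                   # assumed: remainder of division

# Example mirroring the sticker problem above (62 + 56 = 118):
facts = [{"pred": "有", "value": 62}, {"pred": "有", "value": 56}]
total = Sum(facts, "有", lambda f: True)               # 118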
Solving MWPs may require deriving new facts according to common sense or domain knowledge. In
" }, "TABREF6": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
MWP corpus statistics:
Training Set    20,093 problems
Develop Set      1,700 problems
Test Set         1,700 problems
Total           23,493 problems

Average length per problem:
Body        27 Chinese chars.    18.2 Chinese words
Question    9.4 Chinese chars.    6.8 Chinese words
" }, "TABREF11": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
[Illustration: topic distributions inferred for a general document versus a short text. A long passage ("Twitch Plays Pokémon is a social experiment and channel on the video-streaming website Twitch, consisting of a crowdsourced attempt to play Game Freak's and Nintendo's Pokémon video games by parsing commands sent by users through the channel's chat room...") yields a coherent topic distribution over words such as stream, channel, video, game, and crowdsource, whereas a short tweet ("David @GuysWithPride This is an apple. HAAA") yields a sparse, noisy distribution over words such as apple, banana, fruit, haaa, hi, and noooo.]
" }, "TABREF13": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
PropertyTwitter2011ETtoday News title
The number of documents49,46117,814
The number of domains5025
The number of distinct words30,42131,217
Avg. words per document5.929.25
" }, "TABREF14": { "text": "RI penalizes both true positive and true negative decisions during clustering. If two documents are both in the same class and the same cluster, or both in different classes and different clusters, this decision is correct. For other cases, the decision is false. The equation of RI shows following:", "num": null, "type_str": "table", "html": null, "content": "
I(\Omega, C) = \sum_{k}\sum_{j} P(\omega_k \cap c_j) \log \frac{P(\omega_k \cap c_j)}{P(\omega_k)\,P(c_j)}  (11)
H(\Omega) = -\sum_{k} P(\omega_k) \log P(\omega_k)  (12)
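For concreteness, the following Python sketch computes purity, the NMI built from Eqs. (11)-(12), and the Rand Index discussed next, given gold class labels and predicted cluster labels. It is a minimal illustration under the standard definitions, not the paper's evaluation code; in particular, the normalization 2I / (H(Omega) + H(C)) is a common choice and may differ from the one used here.

import math
from collections import Counter
from itertools import combinations

def purity(classes, clusters):
    # Fraction of documents that fall into the majority gold class of their cluster.
    correct = 0
    for k in set(clusters):
        members = [c for c, w in zip(classes, clusters) if w == k]
        correct += Counter(members).most_common(1)[0][1]
    return correct / len(classes)

def nmi(classes, clusters):
    # I(Omega, C) and H(.) as in Eqs. (11)-(12); normalized by the mean of the two entropies.
    n = len(classes)
    p_w, p_c = Counter(clusters), Counter(classes)
    p_wc = Counter(zip(clusters, classes))
    mi = sum((nwc / n) * math.log((nwc / n) / ((p_w[w] / n) * (p_c[c] / n)))
             for (w, c), nwc in p_wc.items())
    h_w = -sum((v / n) * math.log(v / n) for v in p_w.values())
    h_c = -sum((v / n) * math.log(v / n) for v in p_c.values())
    return 2 * mi / (h_w + h_c)

def rand_index(classes, clusters):
    # Pairwise decisions: a pair counts as correct when both partitions group it the same way.
    pairs = list(combinations(range(len(classes)), 2))
    correct = sum((classes[i] == classes[j]) == (clusters[i] == clusters[j]) for i, j in pairs)
    return correct / len(pairs)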
• Rand Index
Rand Index (RI) (Rand, 1971) considers the clustering result as a set of pairwise decisions. More clearly,
" }, "TABREF15": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
Model   β prior    Purity    NMI       RI
LDA     <0.100>    0.4174    0.3217    0.9127
LDA     PCA-β      0.4348    0.3325    0.9266
Mix     <0.100>    0.4217    0.3358    0.8687
Mix     PCA-β      0.3748    0.3305    0.7550
BTM     <0.100>    0.4318    0.3429    0.9092
BTM     PCA-β      0.4367    0.4000    0.8665
BTM     PMI-β      0.4427    0.3927    0.9284
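Since the best-performing prior above is named after pointwise mutual information, a minimal sketch of corpus-level PMI between co-occurring words is given below for reference. This is generic PMI code over document-level co-occurrence counts, under the assumption that such statistics feed the PMI-β prior; it is not the authors' prior-construction procedure.

import math
from collections import Counter
from itertools import combinations

def pmi_scores(documents):
    # documents: list of token lists (e.g. word-segmented short texts or news titles).
    n_docs = len(documents)
    word_df = Counter(w for doc in documents for w in set(doc))
    pair_df = Counter(p for doc in documents for p in combinations(sorted(set(doc)), 2))
    pmi = {}
    for (w1, w2), c in pair_df.items():
        # PMI(w1, w2) = log P(w1, w2) / (P(w1) P(w2)), estimated from document frequencies.
        pmi[(w1, w2)] = math.log((c / n_docs) / ((word_df[w1] / n_docs) * (word_df[w2] / n_docs)))
    return pmi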
" }, "TABREF16": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
LDAjob, house, jay, steal, material, burglary, construct, park, pick, ur
Mixjob, robbery, material, construct, steal, warehouse, emote, feel, woman, does
" }, "TABREF23": { "text": "SD \u7684\u5be6\u9a57\u4e2d\uff0cBM25 \u53cd\u800c\u8d85\u8d8a RM \u6210\u70ba\u6240\u6709\u6a21\u578b\u4e2d\u6700\u4f73\u7684\u6458\u8981\u65b9\u6cd5\uff0c\u6211\u5011\u8a8d\u70ba \u9019\u53ef\u80fd\u662f\u56e0\u70ba RM \u4e2d\u6240\u4f7f\u7528\u7684\u8a9e\u53e5\u6a21\u578b\u53d7\u5230\u8a9e\u97f3\u8fa8\u8b58\u932f\u8aa4\u7684\u5f71\u97ff\uff0c\u56e0\u6b64\u964d\u4f4e\u5c0b\u627e\u6709\u6548\u7684 \u865b\u64ec\u95dc\u806f\u6587\u4ef6(Pseudo Relevant Documents)\u7684\u80fd\u529b\u3002\u6b64\u5916\uff0cTW-IDF \u8207 MRW \u7684\u6458\u8981\u6548\u80fd \u7686\u8f03 LSA \u53ca MMR \u5dee\uff0c\u6211\u5011\u8a8d\u70ba\u4ea6\u662f\u53d7\u5230\u8a9e\u97f3\u8fa8\u8b58\u932f\u8aa4\u7684\u5f71\u97ff\uff0c\u56e0\u4e00\u500b\u8a5e\u6216\u662f\u4e00\u500b\u8a9e\u53e5 \u7684\u91cd\u8981\u6027\u5206\u6578\u662f\u4f86\u81ea\u9130\u8fd1\u5176\u5b83\u8a5e\u6216\u662f\u8a9e\u53e5\u7684\u8ca2\u737b\u3002\u800c LEAD \u7121\u8ad6\u5728 TD \u6216\u662f SD \u4e0a\uff0c\u76f8 \u8f03\u65bc\u5176\u5b83\u6a21\u578b\u7686\u5f97\u5230\u8f03\u5dee\u7684\u6548\u679c\uff0c\u4e3b\u8981\u539f\u56e0\u662f LEAD \u50c5\u9069\u7528\u65bc\u7279\u6b8a\u6587\u4ef6\u7d50\u69cb\uff0c\u56e0\u6b64\u82e5\u6458 \u8981\u6587\u4ef6\u4e0d\u5177\u6709\u67d0\u7a2e\u7279\u6b8a\u7684\u7d50\u69cb\uff0c\u5176\u6458\u8981\u6548\u80fd\u5c31\u6703\u6709\u6240\u4fb7\u9650\u3002 \u3002\u5728 TD \u5be6\u9a57\u4e2d\uff0cCBOW \u6458\u8981\u6548\u80fd\u8f03 BM25 \u5dee\uff0c\u800c SG \u672a\u9054\u5230 MRW \u7684\u6c34\u5e73\u3002 \u5728 SD \u5be6\u9a57\u4e2d\uff0c\u4ecd\u7136\u4ee5 BM25 \u7684\u6458\u8981\u6548\u679c\u70ba\u4f73\u3002", "num": null, "type_str": "table", "html": null, "content": "
Best Match 25(BM25)\u3001\u8a5e\u6b0a\u91cd-\u9006\u5411\u6587\u4ef6\u983b\u7387(TW-IDF)\u4ee5\u53ca\u99ac\u53ef\u592b \u96a8\u6a5f\u6f2b\u6b65(MRW)\u3002\u9996\u5148\u5728 TD \u7684\u5be6\u9a57\u4e2d\uff0cRM \u7684\u6458\u8981\u6548\u679c\u662f\u6240\u6709\u6a21\u578b\u4e2d\u6700\u4f73\u7684\uff0c\u8868\u793a\u4f7f \u7528\u984d\u5916\u7684\u95dc\u806f\u6587\u4ef6\u53ef\u4ee5\u6709\u6548\u5730\u5f4c\u88dc\u8a9e\u53e5\u5167\u5bb9\u7684\uf967\u8db3\uff0c\u63d0\u9ad8\u8a9e\u53e5\u7684\u4f30\u6e2c\u80fd\u529b\u3002\u5176\u6b21\u70ba BM25\uff0c \u6211\u5011\u8a8d\u70ba\u5728\u6587\u4ef6\u6458\u8981\u7684\u554f\u984c\u4e2d\uff0c\u8a5e\u5f59\u7684\u983b\u7387(TF)\u3001\u53cd\u6587\u4ef6\u983b\u7387(IDF)\u4ee5\u53ca\u6587\u4ef6\u9577\u5ea6\u7684\u6b63\u898f \u5316(Normalized)\u662f\u91cd\u8981\u4e14\u4e0d\u53ef\u6216\u7f3a\u7684\u7279\u5fb5\u8cc7\u8a0a\u3002ULM \u7121\u8ad6\u5728 TD \u6216\u662f SD \u4e0a\u7684\u6458\u8981\u6210\u6548\u7686 \u65bd\u51f1\u6587 \u7b49 \u512a\u65bc\u5716\u8ad6\u5f0f\u6a21\u578b TW-IDF \u8207 MRW\u3002TW-IDF \u5728\u8a08\u7b97\u8a5e\u983b(TF)\u6642\uff0c\u591a\u8003\u616e\u4e86\u4e0a\u4e0b\u6587(Context) \u7684\u8cc7\u8a0a\uff0c\u800c MRW \u5728\u8a08\u7b97\u91cd\u8981\u8a9e\u53e5\u6642\uff0c\u9664\u4e86\u4f7f\u7528\u5176\u5b83\u8a9e\u53e5\u7684\u5206\u6578\u4e4b\u5916\uff0c\u4ea6\u8003\u616e\u5230\u8a9e\u53e5\u5f7c \u6b64\u4e4b\u9593\u7684\u76f8\u95dc\u5ea6\u4f5c\u70ba\u6b0a\u91cd\u4f86\u8abf\u6574\uff0c\u56e0\u6b64\u5169\u8005\u6548\u679c\u7686\u6703\u8f03\u50c5\u8003\u616e\u8a5e\u983b\u7684 VSM \u70ba\u4f73\u3002MMR \u5728\u9032\u884c\u8a9e\u53e5\u9078\u53d6\u6642\u591a\u8003\u616e\u4e86\u5197\u9918\u8cc7\u8a0a\uff0c\u56e0\u6b64\u6458\u8981\u6548\u679c\u8f03 VSM \u4f73\u3002 \u8868 2. \u57fa\u790e\u5be6\u9a57\u65bc\u6587\u5b57\u6587\u4ef6\u8207\u8a9e\u97f3\u6587\u4ef6\u4e4b\u6458\u8981\u7d50\u679c \u6587\u5b57\u6587\u4ef6(TD) \u8a9e\u97f3\u6587\u4ef6(SD) \u65b9\u6cd5 ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L LEAD 0.312 0.196 0.278 0.254 0.117 0.220 VSM 0.347 0.228 0.290 0.343 0.189 0.288 MMR 0.365 0.242 0.316 0.360 0.206 0.309 LSA 0.362 0.233 0.316 0.345 0.201 0.301 ULM 0.411 0.299 0.362 0.364 0.218 0.313 RM 0.458 0.345 0.408 0.384 0.236 0.330 BM25 0.422 0.317 0.380 0.394 0.251 0.341 TW-IDF 0.374 0.260 0.317 0.322 0.164 0.270 MRW 0.415 0.296 0.357 0.339 0.194 0.289 LSA \u5728\u6f5b\u85cf\u8a9e\u610f\u7a7a\u9593\u8a08\u7b97\u6587\u4ef6\u8207\u8a9e\u53e5\u7684\u9918\u5f26\u76f8\u4f3c\u5ea6\uff0c\u5176\u7d50\u679c\u4ea6\u986f\u793a\u8f03 VSM \u70ba\u4f73\u3002 \u800c VSM \u6bcf\u500b\u8a5e\u5f59\u6240\u69cb\u6210\u7684\u5411\u91cf\u7dad\u5ea6\u7686\u70ba\u7368\u7acb\uff0c\u56e0\u6b64\u7121\u6cd5\u5f97\u77e5\u51fa\u6587\u4ef6\u4e2d\u8a5e\u5f59\u4e4b\u9593\u7684\u95dc\u806f \u6027\uff0c\u4f7f\u5f97\u9032\u884c\u6587\u4ef6\u76f8\u4f3c\u5ea6\u7684\u6bd4\u5c0d\u6642\u53ef\u80fd\u9020\u6210\u8aa4\u5224\u7684\u60c5\u6cc1\u3002 \u5728 7.2 \u8a5e\u8868\u793a\u6cd5\u8207\u8a9e\u53e5\u8868\u793a\u6cd5\u65bc\u7bc0\u9304\u5f0f\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\u4e4b\u5be6\u9a57\u7d50\u679c \u5728\u6b64\u6211\u5011\u5229\u7528\u76ee\u524d\u5169\u7a2e\u6700\u5148\u9032\u7684\u8a5e\u8868\u793a\u6cd5\u2500\u9023\u7e8c\u578b\u8a5e\u888b\u6a21\u578b(CBOW)\u548c\u8df3\u8e8d\u5f0f\u6a21\u578b(SG)\uff0c \u8207\u6700\u5148\u9032\u7684\u5169\u7a2e\u8a9e\u53e5\u8868\u793a\u6cd5\u2500\u5206\u6563\u5f0f\u5132\u5b58\u6a21\u578b(PV-DM) \u548c\u5206\u6563\u5f0f\u8a5e\u888b\u6a21\u578b(PV-DBOW) 
\u4e4b\u6280\u8853\u4f86\u5f9e\u4e8b\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\uff1b\u5be6\u9a57\u5171\u5206\u4e09\u7d44\u4f86\u9032\u884c\uff0c\u5206\u5225\u7d50\u5408\u65bc\u9918\u5f26\u76f8\u4f3c\u5ea6(Cosine \u95dc\u806f\u7279\u5fb5 0.389 0.254 0.332 0.355 0.200 0.300 \u8868\u793a\u6cd5\u2500\u9023\u7e8c\u578b\u8a5e\u888b\u6a21\u578b(CBOW)\u548c\u8df3\u8e8d\u5f0f\u6a21\u578b(SG)\uff0c\u4ee5\u53ca\u5169\u7a2e\u8a9e\u53e5\u8868\u793a\u6cd5\u2500\u5206\u6563\u5f0f\u5132 \u4e4b\u4f9d\u64da\u3002 \u8a5e\u5f59\u7279\u5fb5 0.362 0.237 0.311 0.298 0.176 0.266 \u8981\u8207\u62bd\u8c61\u5f0f\u6458\u8981\uff0c\u672c\uf941\u6587\u65e8\u5728\u63a2\u8a0e\u7bc0\u9304\u5f0f\u4e2d\u6587\u5ee3\u64ad\u65b0\u805e\u6587\u4ef6\u6458\u8981\u65b9\u6cd5\u3002\u6211\u5011\u63d0\u51fa\u5169\u7a2e\u8a5e Similarity)\u3001\u99ac\u53ef\u592b\u96a8\u6a5f\u6f2b\u6b65(MRW)\u4ee5\u53ca\u6587\u4ef6\u76f8\u4f3c\u5ea6\u91cf\u503c(DLM)\u7684\u65b9\u6cd5\u4f5c\u70ba\u6311\u9078\u6458\u8981\u8a9e\u53e5 \u7bc0\u9304\u5f0f\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\u4f7f\u7528\u8868\u793a\u6cd5\u5b78\u7fd2\u6280\u8853 79 \u8868 3. \u8a5e\u8868\u793a\u6cd5\u7d50\u5408\u65bc\u9918\u5f26\u76f8\u4f3c\u5ea6\u4e4b\u6458\u8981\u7d50\u679c \u6587\u5b57\u6587\u4ef6(TD) \u8a9e\u97f3\u6587\u4ef6(SD) \u65b9\u6cd5 ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L CBOW 0.402 0.280 0.349 0.377 0.228 0.327 SG 0.401 0.265 0.347 0.361 0.214 0.312 \u9996\u5148\uff0c\u6211\u5011\u5c07\u8a5e\u8868\u793a\u6cd5\u7d50\u5408\u65bc\u9918\u5f26\u76f8\u4f3c\u5ea6(Cosine Similarity)\u4f5c\u70ba\u9078\u53d6\u6458\u8981\u8a9e\u53e5\u7684\u65b9 \u6cd5\uff0c\u5176\u7d50\u679c\u793a\u65bc\u8868 3\u3002\u5f9e\u5be6\u9a57\u7d50\u679c\u4e2d\u89c0\u5bdf\u5230\uff0c\u7531\u65bc\u9019\u5169\u7a2e\u8a5e\u8868\u793a\u6cd5\u5404\u6709\u8457\u4e0d\u540c\u7684\u6a21\u578b\u7d50 \u69cb\u8207\u5b78\u7fd2\u65b9\u5f0f\uff0c\u56e0\u6b64\u5728\u6587\u5b57\u6587\u4ef6(TD)\u6216\u662f\u8a9e\u97f3\u6587\u4ef6(SD)\u4e2d\uff0c\u8a72\u5169\u7a2e\u6a21\u578b\u7684\u6458\u8981\u6210\u6548\u6709\u7a0d \u5fae\u7684\u5dee\u7570\u3002\u6839\u64da TD \u7684\u7d50\u679c\u986f\u793a\uff0cCBOW \u7684\u6458\u8981\u6548\u80fd\u8f03 SG \u4f73\uff0c\u5728 SD \u4e2d\u4ecd\u4fdd\u6301\u76f8\u540c\u7684 \u60c5\u6cc1\u3002\u5118\u7ba1\u8a72\u5169\u7a2e\u8a5e\u8868\u793a\u6cd5\u7686\u512a\u65bc\u5411\u91cf\u7a7a\u9593\u6a21\u578b(VSM)\u8207\u6f5b\u85cf\u8a9e\u610f\u5206\u6790(LSA)\uff0c\u537b\u50c5\u9054\u5230 \u8a5e\u6b0a\u91cd-\u9006\u5411\u6587\u4ef6\u983b\u7387(TW-IDF)\u5dee\u4e0d\u591a\u7684\u6c34\u5e73\uff0c\u800c\u4e14\u5728 SD \u7684\u60c5\u6cc1\u4e0b\u7684\u8868\u73fe SG \u4e0d\u53ca\u55ae\u9023 \u8a9e\u8a00\u6a21\u578b(ULM)(\u8868 2)\u3002 \u8868 4. 
\u8a9e\u53e5\u8868\u793a\u6cd5\u7d50\u5408\u65bc\u9918\u5f26\u76f8\u4f3c\u5ea6\u4e4b\u6458\u8981\u7d50\u679c \u6587\u5b57\u6587\u4ef6(TD) \u8a9e\u97f3\u6587\u4ef6(SD) \u65b9\u6cd5 ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L PV-DM 0.429 0.313 0.382 0.387 0.236 0.335 PV-DBOW 0.398 0.277 0.348 0.368 0.227 0.329 \u540c\u6a23\u5730\uff0c\u6211\u5011\u5c07\u8a9e\u53e5\u8868\u793a\u6cd5\u7d50\u5408\u65bc\u9918\u5f26\u76f8\u4f3c\u5ea6\u4f5c\u70ba\u9078\u53d6\u6458\u8981\u8a9e\u53e5\u7684\u65b9\u6cd5\uff0c\u5176\u7d50\u679c\u793a \u65bc\u8868 4\u3002\u5728 TD \u7684\u7d50\u679c\u4e2d\uff0cPV-DM \u8207 PV-DBOW \u8a72\u5169\u7a2e\u8a9e\u53e5\u8868\u793a\u6cd5\u7684\u6458\u8981\u6548\u679c\u5206\u5225\u8d85\u8d8a CBOW \u53ca SG \u8a5e\u8868\u793a\u6cd5\u6a21\u578b(\u8868 3) \u3002PV-DM \u6458\u8981\u6210\u6548\u8f03\u50b3\u7d71\u7684\u99ac\u53ef\u592b\u96a8\u6a5f\u6f2b\u6b65(MRW)\u4f73\uff0c \u4f46\u8f03 BM25 \u5dee\u3002\u800c\u5728 SD \u7684\u7d50\u679c\u4e2d\uff0c\u5169\u7a2e\u8a9e\u53e5\u8868\u793a\u6cd5\u7684\u6458\u8981\u6210\u6548\u6bd4\u8d77\u8a5e\u8868\u793a\u6cd5\u6c92\u6709\u592a\u5927 \u7684\u9032\u6b65\uff0c\u6211\u5011\u8a8d\u70ba\u8a9e\u53e5\u8868\u793a\u6cd5\u642d\u914d\u9918\u5f26\u76f8\u4f3c\u5ea6\u9078\u53d6\u8a9e\u53e5\u7684\u65b9\u5f0f\u4ea6\u53d7\u8a9e\u97f3\u8fa8\u8b58\u7684\u5f71\u97ff\u3002 \u8868 5. \u8a5e\u8868\u793a\u6cd5\u7d50\u5408\u65bc\u99ac\u53ef\u592b\u96a8\u6a5f\u6f2b\u6b65\u4e4b\u6458\u8981\u7d50\u679c \u6587\u5b57\u6587\u4ef6(TD) \u8a9e\u97f3\u6587\u4ef6(SD) \u65b9\u6cd5 ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L CBOW 0.436 0.310 0.384 0.393 0.246 0.346 SG 0.316 0.283 0.351 0.372 0.233 0.325 \u5728\u7b2c\u4e8c\u7d44\u5be6\u9a57\u4e2d\uff0c\u6211\u5011\u5c07\u8a5e\u8868\u793a\u6cd5\u7d50\u5408\u99ac\u53ef\u592b\u96a8\u6a5f\u6f2b\u6b65(MRW)\u4ee5\u5c0d\u8a9e\u53e5\u9032\u884c\u9078\u53d6\uff0c \u5176\u7d50\u679c\u5448\u73fe\u5728\u8868 5\u3002\u5f9e\u7d50\u679c\u4e2d\u53ef\u4ee5\u89c0\u5bdf\u5230\uff0c\u7121\u8ad6\u5728 TD \u6216\u662f SD \u4e0a\uff0c\u76f8\u8f03\u65bc\u540c\u6a23\u4ee5\u8a5e\u8868\u793a \u6cd5\u7684\u6280\u8853\u7d50\u5408\u9918\u5f26\u76f8\u4f3c\u5ea6\u7684\u65b9\u6cd5\uff0c\u4f7f\u7528\u8a72\u65b9\u6cd5\u6311\u9078\u8a9e\u53e5\u7684\u6458\u8981\u6210\u6548\u7686\u512a\u65bc\u4ee5\u9918\u5f26\u76f8\u4f3c\u5ea6 \u8868 6. 
\u8a9e\u53e5\u8868\u793a\u6cd5\u7d50\u5408\u65bc\u99ac\u53ef\u592b\u96a8\u6a5f\u6f2b\u6b65\u4e4b\u6458\u8981\u7d50\u679c \u6587\u5b57\u6587\u4ef6(TD) \u8a9e\u97f3\u6587\u4ef6(SD) PV-DM 0.446 0.343 0.400 0.395 0.253 0.347 PV-DBOW 0.451 0.336 0.398 0.387 0.243 0.337 \u540c\u6a23\u5730\uff0c\u6211\u5011\u4ee5\u8a9e\u53e5\u8868\u793a\u6cd5\u7d50\u5408\u99ac\u53ef\u592b\u96a8\u6a5f\u6f2b\u6b65(MRW)\u5c0d\u8a9e\u53e5\u9032\u884c\u9078\u53d6\uff0c\u5176\u7d50\u679c\u5c55 \u793a\u65bc\u8868 6\u3002\u5f9e\u7d50\u679c\u4e2d\u767c\u73fe\u5230\uff0c\u7121\u8ad6\u5728 TD \u6216\u662f SD \u4e0a\uff0c\u8a72\u65b9\u6cd5\u7684\u6458\u8981\u6210\u6548\uff0c\u986f\u8457\u5730\u512a\u8d8a\u4ee5 \u8a5e\u3001\u8a9e\u53e5\u8868\u793a\u6cd5\u7d50\u5408\u65bc\u9918\u5f26\u76f8\u4f3c\u5ea6(\u8868 3 \u548c 4)\u4e4b\u9078\u53d6\u6458\u8981\u8a9e\u53e5\u65b9\u6cd5\uff0c\u4ea6\u8d85\u8d8a\u4ee5\u8a5e\u8868\u793a\u6cd5\u7d50 \u5408\u65bc\u99ac\u53ef\u592b\u96a8\u6a5f\u6f2b\u6b65\u7684\u65b9\u5f0f(\u8868 5)\u3002\u5728 TD \u5be6\u9a57\u4e2d\uff0c\u5118\u7ba1\u8a72\u5169\u7a2e\u8a5e\u8868\u793a\u6cd5\u7684\u6458\u8981\u6210\u6548\u8f03 BM25 \u4f73\uff0c\u4f46\u7686\u4e0d\u53ca\u95dc\u806f\u6a21\u578b(RM)\u3002\u7136\u800c\u65bc SD \u5be6\u9a57\u4e2d\uff0cPV-DM \u7684\u6458\u8981\u6210\u6548\u8d85\u8d8a\u6240\u6709\u7684 \u50b3\u7d71\u6587\u4ef6\u6458\u8981\u6a21\u578b\u3002 \u8868 7. \u8a5e\u8868\u793a\u6cd5\u7d50\u5408\u65bc\u6587\u4ef6\u76f8\u4f3c\u5ea6\u91cf\u503c\u4e4b\u6458\u8981\u7d50\u679c \u6587\u5b57\u6587\u4ef6(TD) \u8a9e\u97f3\u6587\u4ef6(SD) \u65b9\u6cd5 ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L CBOW 0.444 0.329 0.386 0.372 0.221 0.314 SG 0.436 0.323 0.385 0.343 0.197 0.295 \u5728\u6700\u5f8c\u4e00\u7d44\u5be6\u9a57\u4e2d\uff0c\u6211\u5011\u63a2\u8a0e\u4ee5\u8a5e\u8868\u793a\u6cd5\u7d50\u5408\u65bc\u6587\u4ef6\u76f8\u4f3c\u5ea6\u91cf\u503c(DLM)\u5c0d\u8a9e\u53e5\u9032\u884c \u9078\u53d6\uff0c\u5176\u7d50\u679c\u5c55\u793a\u65bc\u8868 7\u3002\u6211\u5011\u5c07\u7d50\u679c\u8207\u540c\u6a23\u4ee5\u8a5e\u8868\u793a\u6cd5\u7d50\u5408\u9918\u5f26\u76f8\u4f3c\u5ea6(\u8868 3)\u4ee5\u53ca\u99ac\u53ef \u592b\u96a8\u6a5f\u6f2b\u6b65\u7684\u65b9\u6cd5(\u8868 5)\u9032\u884c\u6bd4\u8f03\u3002\u5f9e TD \u5be6\u9a57\u7d50\u679c\u4e2d\u53ef\u4ee5\u89c0\u5bdf\u5230\uff0c\u6587\u4ef6\u76f8\u4f3c\u5ea6\u91cf\u503c\u5145\u5206 \u5730\u904b\u7528\u8a5e\u8868\u793a\u6cd5\u65bc\u6587\u4ef6\u6458\u8981\uff0c\u8868\u73fe\u986f\u7136\u8f03\u4f73\u3002\u6211\u5011\u4ea6\u6ce8\u610f\u5230 SG \u7684\u6458\u8981\u6210\u6548\u5e7e\u4e4e\u63a5\u8fd1 CBOW\u3002\u7136\u800c\u65bc TD \u8207 SD \u7684\u5be6\u9a57\u4e2d\uff0c\u8a72\u5169\u7a2e\u8a5e\u8868\u793a\u6cd5\u7686\u4ecd\u4e0d\u53ca RM \u7684\u6458\u8981\u6210\u6548\u3002 \u8868 8. 
\u8a9e\u53e5\u8868\u793a\u6cd5\u7d50\u5408\u65bc\u6587\u4ef6\u76f8\u4f3c\u5ea6\u91cf\u503c\u4e4b\u6458\u8981\u7d50\u679c \u6587\u5b57\u6587\u4ef6(TD) \u8a9e\u97f3\u6587\u4ef6(SD) \u65b9\u6cd5 ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L PV-DM 0.480 0.375 0.430 0.384 0.240 0.333 PV-DBOW 0.433 0.323 0.384 0.364 0.236 0.321 \u540c\u6a23\u5730\uff0c\u6211\u5011\u4ee5\u8a9e\u53e5\u8868\u793a\u6cd5\u65bc\u6587\u4ef6\u76f8\u4f3c\u5ea6\u91cf\u503c\u5c0d\u8a9e\u53e5\u9032\u884c\u9078\u53d6\uff0c\u5176\u7d50\u679c\u986f\u793a\u5728\u8868 8\u3002 \u5f9e TD \u7684\u5be6\u9a57\u7d50\u679c\u4e2d\u53ef\u4ee5\u89c0\u5bdf\u5230\uff0cPV-DM \u7684\u6458\u8981\u6548\u80fd\u986f\u8457\u5730\u512a\u65bc\u8868 2 \u4e2d\u6240\u6709\u7684\u50b3\u7d71\u6587\u4ef6 \u6458\u8981\u6a21\u578b\uff0c\u4ea6\u662f\u6240\u6709\u8868\u793a\u6cd5\u4e2d\u5177\u6700\u4f73\u6458\u8981\u6548\u80fd\u4e4b\u6a21\u578b\u3002\u6211\u5011\u4ea6\u89c0\u5bdf\u5230 PV-DBOW \u8207\u8868 7 \u4e2d\u7684\u8a5e\u8868\u793a\u6cd5 SG \u6709\u8457\u76f8\u540c\u7684\u6458\u8981\u6210\u6548\u3002\u7136\u800c\u65bc SD \u4e2d\uff0c\u8a72\u5169\u7a2e\u8a9e\u53e5\u8868\u793a\u6cd5\u50c5\u9054\u5230 RM \u7684 \u6c34\u5e73\uff0c\u4f46\u7686\u4ecd\u4e0d\u53ca BM25\u3002 \u7bc0\u9304\u5f0f\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\u4f7f\u7528\u8868\u793a\u6cd5\u5b78\u7fd2\u6280\u8853 81 7.3 \u5229\u7528\u8072\u5b78\u7279\u5fb5\u7d50\u5408\u652f\u6301\u5411\u91cf\u6a5f\u65bc\u6587\u4ef6\u6458\u8981 \u672c\u8ad6\u6587\u6240\u4f7f\u7528\u7684\u8a9e\u97f3\u8a9e\u6599\u662f\u7d93\u7531\u4eba\u5de5\u5207\u97f3\uff0c\u4e0d\u6703\u6709\u8a9e\u97f3\u908a\u754c\u932f\u8aa4\u7684\u554f\u984c\uff0c\u50c5\u9808\u8003\u91cf\u8a9e\u97f3 \u754c\uff0c\u800c\u62bd\u53d6\u51fa\u7684\u97fb\u5f8b\u7279\u5fb5\u4ea6\u6703\u662f\u4e00\u81f4\u3002\u672c\u8ad6\u6587\u7e3d\u5171\u4f7f\u7528 12 \u7a2e\u4e0d\u540c\u7684\u6458\u8981\u7279\u5fb5\u4f5c\u70ba\u652f\u6301\u5411 \u91cf\u6a5f(Support Vector Machine, SVM)\u7684\u8f38\u5165\uff0c\u53ef\u6982\u7565\u5206\u6210\u4e09\u5927\u985e\uff0c\u5206\u5225\u70ba\u8a5e\u5f59\u7279\u5fb5(Lexical Features)\u3001\u97fb\u5f8b\u7279\u5fb5(Prosodic Features)\u4ee5\u53ca\u95dc\u806f\u7279\u5fb5(Relevance Features)\uff0c\u8a73\u7d30\u7684\u7279\u5fb5\u8cc7 \u8a0a\u5982\u8868 9 \u6240\u793a\u3002 \u8868 9. 
\u5be6\u9a57\u63a1\u7528\u4e4b\u5404\u5f0f\u7279\u5fb5 \u97fb\u5f8b\u7279\u5fb5(Prosodic Features) \u97f3\u9ad8(Pitch):\u6700\u5927\u3001\u6700\u5c0f\u3001\u5e73\u5747\u3001\u5dee\u503c \u80fd\u91cf(Energy):\u6700\u5927\u3001\u6700\u5c0f\u3001\u5e73\u5747\u3001\u5dee\u503c \u97f3\u6846\u9577\u5ea6(Duration):\u6700\u5927\u3001\u6700\u5c0f\u3001\u5e73\u5747\u3001\u5dee\u503c \u5171\u632f\u5cf0(Formant):\u6700\u5927\u3001\u6700\u5c0f\u3001\u5e73\u5747\u3001\u5dee\u503c \u983b\u8b5c\u5cf0\u503c(Peak):\u6700\u5927\u3001\u6700\u5c0f\u3001\u5e73\u5747\u3001\u5dee\u503c \u8a5e\u5f59\u7279\u5fb5(Lexical Features) \u5c08\u6709\u540d\u8a5e\u500b\u6578(Named Entity) \u505c\u7528\u8a5e\u500b\u6578(Stop Word) \u4e8c\u9023\u8a9e\u8a00\u6a21\u578b\u5206\u6578(Bigram) \u6b63\u898f\u5316\u4e8c\u9023\u8a9e\u8a00\u6a21\u578b\u5206\u6578(Normalized Bigram) \u95dc\u806f\u7279\u5fb5(Relevance Features) \u5411\u91cf\u7a7a\u9593\u6a21\u578b\u5206\u6578(VSM) \u99ac\u53ef\u592b\u96a8\u6a5f\u6f2b\u6b65\u5206\u6578(MRW) \u8a9e\u8a00\u6a21\u578b\u5206\u6578(LM) \u7531\u8868 10 \u4e2d\u5f97\u5230\uff0c\u7121\u8ad6\u5728\u6587\u5b57\u6587\u4ef6(TD)\u6216\u662f\u8a9e\u97f3\u6587\u4ef6(SD)\u4e2d\uff0c\u97fb\u5f8b\u7279\u5fb5(Prosodic Features)\u76f8\u5c0d\u65bc\u5176\u5b83\u5169\u7a2e\u7279\u5fb5\u7522\u751f\u8f03\u70ba\u986f\u8457\u7684\u6458\u8981\u6548\u80fd\uff0c\u56e0\u6b64\u97fb\u5f8b\u7279\u5fb5\u6bd4\u8d77\u5176\u5b83\u5169\u7a2e\u7279 \u5fb5\u66f4\u80fd\u5920\u5224\u65b7\u6458\u8981\u8a9e\u53e5\u7684\u91cd\u8981\u8cc7\u8a0a\u3002\u5728 TD \u5be6\u9a57\u4e2d\uff0c\u8a5e\u5f59\u7279\u5fb5(Lexical Features)\u5728\u9019\u4e09\u7a2e \u6458\u8981\u7279\u5fb5\u4e2d\u7684\u8868\u73fe\u6700\u5dee\uff0c\u5176\u539f\u56e0\u53ef\u80fd\u662f\u8a72\u7279\u5fb5\u63cf\u8ff0\u7684\u662f\u8868\u6dfa(Shallow)\u8a9e\u53e5\u6027\u8cea\uff0c\u5305\u542b\u5c08 \u6709\u540d\u8a5e\u7684\u6578\u91cf\u3001\u505c\u7528\u8a5e\u7684\u6578\u91cf\u4ee5\u53ca\u8a9e\u53e5\u7684\u6d41\u66a2\u6027\uff0c\u6c92\u6709\u8003\u616e\u8a9e\u53e5\u7684\u8a9e\u610f\u5167\u5bb9\uff0c\u56e0\u6b64\u55ae\u6191 \u8a72\u7279\u5fb5\u7121\u6cd5\u9078\u53d6\u51fa\u8f03\u6b63\u78ba\u7684\u6458\u8981\u8a9e\u53e5\u3002\u6b64\u5916\uff0c\u95dc\u806f\u7279\u5fb5(Relevance Features)\u6bd4\u8d77\u8a5e\u5f59\u7279\u5fb5 \u6709\u8f03\u597d\u7684\u6458\u8981\u6210\u6548\u3002\u5728 SD \u5be6\u9a57\u4e2d\u5f97\u5230\u7684\u7d50\u8ad6\uff0c\u8207 TD \u7684\u7d50\u8ad6\u5177\u4e00\u81f4\u6027\uff0c\u4f46\u95dc\u806f\u7279\u5fb5\u8207 \u97fb\u5f8b\u7279\u5fb5\u4e4b\u9593\u6548\u679c\u5dee\u7570\u8f03\u7121 TD \u4f86\u5f97\u986f\u8457\u3002 \u8868 10. 
\u55ae\u985e\u7279\u5fb5\u4e4b\u6458\u8981\u7d50\u679c \u6587\u5b57\u6587\u4ef6(TD) \u8a9e\u97f3\u6587\u4ef6(SD) ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L \u97fb\u5f8b\u7279\u5fb5 0.452 0.349 0.409 0.363 0.219 0.322 \u65bd\u51f1\u6587 \u7b49 \u6211\u5011\u9032\u884c\u4f7f\u7528\u6240\u6709\u6458\u8981\u7279\u5fb5\u65bc\u652f\u6301\u5411\u91cf\u6a5f\u5668(Support Vector Machine, SVM)\u4e4b\u5be6\u9a57\uff0c \u5176\u7d50\u679c\u793a\u65bc\u8868 11\u3002\u5f9e\u5be6\u9a57\u7d50\u679c\u4e2d\u53ef\u4ee5\u767c\u73fe\uff0c\u7121\u8ad6\u65bc TD \u6216\u662f SD \u4e2d\uff0c\u7d93\u904e\u5404\u7a2e\u9762\u5411\u7684\u8003 \u8981\u6548\u80fd\u7684\u5f71\u97ff\u3002\u56e0\u6b64\u6211\u5011\u5c07\u95dc\u806f\u7279\u5fb5\u4e2d\u7684\u5411\u91cf\u7a7a\u9593\u6a21\u578b(VSM)\u3001\u99ac\u53ef\u592b\u96a8\u6a5f\u6f2b\u6b65(MRW) \u4ee5\u53ca\u55ae\u9023\u8a9e\u8a00\u6a21\u578b(ULM)\u7684\u5206\u6578\uff0c\u4ee5\u8a5e\u8868\u793a\u6cd5\u6a21\u578b\u6458\u8981\u4e4b\u5206\u6578\u4f5c\u70ba\u66ff\u63db\uff0c\u5206\u5225\u6839\u64da\u65bc\u8868 3\u30015 \u548c 7 \u4e2d\u6700\u4f73\u7684\u6458\u8981\u8868\u73fe\uff0c\u5f9e\u5404\u8868\u4e2d\u53ef\u4ee5\u767c\u73fe CBOW \u7684\u6458\u8981\u6548\u679c\u59cb\u7d42\u6700\u4f73\u3002 \u8868 11. \u7d50\u5408\u6240\u6709\u7279\u5fb5\u4e4b\u6458\u8981\u7d50\u679c \u6587\u5b57\u6587\u4ef6(TD) \u8a9e\u97f3\u6587\u4ef6(SD) \u65b9\u6cd5 ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L \u6240\u6709\u7279\u5fb5 0.484 0.384 0.440 0.387 0.247 0.348 \u540c\u6a23\u5730\u7d50\u5408\u6240\u6709\u7279\u5fb5\u4e00\u4f75\u505a\u70ba\u652f\u6301\u5411\u91cf\u6a5f\u7684\u8f38\u5165\uff0c\u5176\u6458\u8981\u6548\u80fd\u5982\u8868 12 \u6240\u793a\u3002\u5f9e\u5be6 \u9a57\u7d50\u679c\u4e2d\u767c\u73fe\u5230\uff0c\u7121\u8ad6\u5728 TD \u6216\u662f SD \u4e2d\uff0c\u4ee5\u8a5e\u8868\u793a\u6cd5\u6a21\u578b\u4f5c\u70ba\u95dc\u806f\u7279\u5fb5\uff0c\u7686\u4f7f\u5f97\u6458\u8981 \u6210\u6548\u975e\u5e38\u986f\u8457\uff0c\u5c24\u5176\u5728 TD \u4e2d\u7684\u5be6\u9a57\u7d50\u679c\uff0c\u7522\u751f\u6700\u4f73\u4e4b\u6458\u8981\u6210\u6548\u3002 \u8868 12. \u4ee5\u8a5e\u8868\u793a\u6cd5\u6a21\u578b\u6458\u8981\u5206\u6578\u70ba\u95dc\u806f\u7279\u5fb5\u4e4b\u6458\u8981\u7d50\u679c \u6587\u5b57\u6587\u4ef6(TD) \u8a9e\u97f3\u6587\u4ef6(SD) \u65b9\u6cd5 ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L \u6240\u6709\u7279\u5fb5 0.497 0.406 0.451 0.396 0.254 0.353 \u6211\u5011\u4ea6\u8003\u616e\u8a9e\u53e5\u8868\u793a\u6cd5\u6a21\u578b\u5206\u6578\u5c0d\u6458\u8981\u6548\u80fd\u7684\u5f71\u97ff\u3002\u540c\u6a23\u5c07\u95dc\u806f\u7279\u5fb5\u4e2d\u7684\u6a21\u578b\u5206\u6578 \u66ff\u63db\u70ba\u8a9e\u53e5\u8868\u793a\u6cd5\u6a21\u578b\u6458\u8981\u4e4b\u5206\u6578\uff0c\u5206\u5225\u6839\u64da\u65bc\u8868 4\u30016 \u548c 8 \u4e2d\u6700\u4f73\u7684\u6458\u8981\u8868\u73fe\uff0c\u5f9e\u5404\u8868 \u4e2d\u7684\u7d50\u679c\u53ef\u89c0\u5bdf\u5230 PV-DM \u7684\u6458\u8981\u6548\u679c\u59cb\u7d42\u6700\u4f73\uff1b\u5176\u6458\u8981\u6210\u6548\u5982\u8868 13 \u6240\u793a\u3002\u5f9e TD \u7684\u5be6 \u9a57\u7d50\u679c\u4e2d\u53ef\u4ee5\u89c0\u5bdf\u5230\uff0c\u4f7f\u7528\u8a9e\u53e5\u8868\u793a\u6cd5\u6a21\u578b\u5206\u6578\u4f5c\u70ba\u7279\u5fb5\u4e4b\u6458\u8981\u6210\u6548\u8f03\u4f7f\u7528\u8a5e\u8868\u793a\u6cd5\u4f86 \u5f97\u5dee(\u8868 12)\u3002\u7136\u800c\u5728 SD \u4e2d\uff0c\u7d50\u5408\u4ee5\u8a9e\u53e5\u8868\u793a\u6cd5\u6a21\u578b\u5206\u6578\u4f5c\u70ba\u95dc\u806f\u7279\u5fb5\u53ef\u4ee5\u9054\u5230\u6700\u4f73\u4e4b \u6458\u8981\u6548\u679c\u3002 \u8868 13. 
\u4ee5\u8a9e\u53e5\u8868\u793a\u6cd5\u6a21\u578b\u6458\u8981\u5206\u6578\u70ba\u95dc\u806f\u7279\u5fb5\u4e4b\u6458\u8981\u7d50\u679c \u6587\u5b57\u6587\u4ef6(TD) \u8a9e\u97f3\u6587\u4ef6(SD) \u65b9\u6cd5 ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L \u6240\u6709\u7279\u5fb5 0.487 0.393 0.446 0.385 0.255 0.350 8. \u7d50\u8ad6\u8207\u672a\u4f86\u5c55\u671b \u904e\u53bb\u5728\u81ea\u52d5\u6587\u4ef6\u6458\u8981\u7684\u7814\u7a76\u4e3b\u8981\u4ecd\u8457\u91cd\u65bc\u6587\u5b57\u6587\u4ef6\u6458\u8981\uff0c\u76f4\u5230 1990 \uf98e\u5f8c\u671f\uff0c\u7531\u65bc\u5f71\u97f3\u591a \u5a92\u9ad4\u6280\u8853\u7684\u9032\u6b65\u8207\u6210\u719f\uff0c\u624d\u9010\u6f38\u958b\u59cb\u6709\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\u7684\u7814\u7a76\u3002\u6587\u4ef6\u6458\u8981\u53ef\u5206\u70ba\u7bc0\u9304\u5f0f\u6458 \u5f35\u5ead\u8c6a \u7b49 \u80fd\u986f\u8457\u5730\u964d\u4f4e\u8a9e\u97f3\u8fa8\u8b58\u932f\u8aa4\u7387\u3002\u6b64\u5916\uff0c\u6211\u5011\u4e5f\u5617\u8a66\u5c07\u6240\u63d0\u51fa\u7684\u6539\u9032\u65b9\u6cd5\u8207\u4e00\u4e9b \u77e5\u540d\u7684\u7279\u5fb5\u5f37\u5065\u6280\u8853\u505a\u6bd4\u8f03\u548c\u7d50\u5408\uff0c\u4ee5\u9a57\u8b49\u9019\u4e9b\u6539\u9032\u65b9\u6cd5\u4e4b\u5be6\u7528\u6027\u3002 \u7684\u65b9\u5f0f(\u8868 3)\u65bd\u51f1\u6587 \u7b49 \u65b9\u6cd5 ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L \u8fa8\u8b58\u932f\u8aa4\u65bc\u6587\u4ef6\u6458\u8981\u7684\u5f71\u97ff\uff0c\u56e0\u6b64\u6587\u5b57\u6587\u4ef6(TD)\u8207\u8a9e\u97f3\u6587\u4ef6(SD)\u5169\u8005\u6703\u6709\u76f8\u540c\u7684\u8a9e\u97f3\u908a \u91cf\u5f8c\uff0c\u78ba\u5be6\u53ef\u4ee5\u7372\u5f97\u8f03\u597d\u7684\u6458\u8981\u6210\u6548\u3002\u63a5\u8457\u9032\u884c\u63a2\u8a0e\u95dc\u806f\u7279\u5fb5\u4e2d\u4f7f\u7528\u5176\u5b83\u6a21\u578b\u5206\u6578\u5c0d\u6458 \u95dc\u9375\u8a5e\uff1a\u8a9e\u97f3\u8fa8\u8b58\u3001\u96dc\u8a0a\u3001\u5f37\u5065\u6027\u3001\u8abf\u8b8a\u983b\u8b5c\u3001\u975e\u8ca0\u77e9\u9663\u5206\u89e3
" }, "TABREF25": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
90\u5f35\u5ead\u8c6a \u7b49
\uff0c02(1)
\u5176\u4e2d\uff0cn \u8207 k \u4f9d\u5e8f\u70ba\u97f3\u6846\u7d22\u5f15\u8207\u8abf\u8b8a\u983b\u7387\u7d22\u5f15\uff0cDFT \u70ba\u96e2\u6563\u5085\u7acb\u8449\u8f49\u63db(Discrete Fourier
Transform, DFT)\uff0cX[k]\u4ee3\u8868\u8a9e\u97f3\u7279\u5fb5\u6642\u9593\u5e8f\u5217 x[n]\u7684\u8abf\u8b8a\u983b\u8b5c\u3002\u7531\u5f0f(1)\u53ef\u770b\u51fa\u8abf\u8b8a\u983b\u8b5c
\u53ef\u4ee5\u88ab\u7528\u4f86\u5ee3\u6cdb\u5730\u5206\u6790\u8a9e\u53e5\u4e2d\u8a9e\u97f3\u7279\u5fb5\u96a8\u6642\u9593\u8b8a\u5316\u7684\u8cc7\u8a0a\u3002\u800c X[k]\u983b\u8b5c\u5e8f\u5217\u53ef\u8996\u70ba\u4e00\u7a2e
\u5c0d\u65bc\u539f\u59cb\u8a9e\u97f3\u8a0a\u865f\u4f5c\u964d\u4f4e\u53d6\u6a23(Down-Sampling)\u5f8c\u7684\u8abf\u8b8a\u8a0a\u865f(\u7531\u8a0a\u865f\u53d6\u6a23\u7387\u8f49\u81f3\u97f3\u6846\u53d6
\u6a23\u7387)\uff0c\u6b64\u5e8f\u5217\u5373\u70ba\u6240\u5c6c\u8a9e\u97f3\u7279\u5fb5\u6642\u9593\u5e8f\u5217\u4e4b\u8abf\u8b8a\u983b\u8b5c(Modulation Spectrum)\u3002\u7531\u5f0f(1)\u53ef
\u77e5\uff0c\u8abf\u8b8a\u983b\u8b5c X[k]\u4e4b\u6700\u9ad8\u983b\u7387\u8207\u7279\u5fb5\u5e8f\u5217 x[n]\u4e4b\u53d6\u6a23\u983b\u7387(\u97f3\u6846\u53d6\u6a23\u7387)\u6709\u95dc\u3002\u4f8b\u5982\uff0c\u5728\u4e00
\u822c\u8a2d\u5b9a\u4e0b\uff0c\u82e5\u97f3\u6846\u53d6\u6a23\u7387\u70ba 100 Hz\uff0c\u5247\u6700\u9ad8\u8abf\u8b8a\u983b\u7387\u70ba 50 Hz\u3002
\u904e\u53bb\u5df2\u6709\u4e0d\u5c11\u5b78\u8005\u7814\u7a76\u8a9e\u97f3\u7279\u5fb5\u4e4b\u8abf\u8b8a\u983b\u8b5c\u7684\u7279\u6027\uff0c\u767c\u73fe\u4e86\u8abf\u8b8a\u983b\u8b5c\u4e2d\u7684\u4f4e\u983b\u6210\u5206
\u662f\u6bd4\u9ad8\u983b\u6210\u5206\u9084\u8981\u91cd\u8981\u7684\u7279\u6027(Kanedera et al., 1997)\u3002\u540c\u6642\uff0c\u8abf\u8b8a\u983b\u8b5c\u4e4b\u4f4e\u983b\u6210\u5206(\u7d04 1Hz
\u81f3 16Hz)\u5c0d\u65bc\u8a9e\u97f3\u8fa8\u8b58\u6b63\u78ba\u7387\u4e5f\u6709\u5bc6\u5207\u7684\u95dc\u4fc2\uff0c\u6f5b\u85cf\u6709\u91cd\u8981\u7684\u8a9e\u610f\u8cc7\u8a0a\u3002\u5176\u4e2d\uff0c\u6700\u91cd\u8981
\u7684\u662f\u4f4d\u65bc 4 Hz \u9644\u8fd1\uff0c\u6709\u5b78\u8005\u6307\u51fa\uff0c4 Hz \u662f\u4eba\u8033\u807d\u89ba\u6700\u70ba\u654f\u611f\u4e4b\u8abf\u8b8a\u983b\u7387(Hermansky, 1998)\uff1b
\u53e6\u6709\u5b78\u8005\u4e5f\u8a8d\u70ba\uff0c4 Hz \u70ba\u4eba\u985e\u5927\u8166\u76ae\u5c64\u611f\u77e5\u4e4b\u91cd\u8981\u8abf\u8b8a\u983b\u7387(Greenberg, 1997)\u3002\u7576\u8a9e\u97f3\u8a0a
\u865f\u53d7\u5230\u96dc\u8a0a\u5f71\u97ff\u6642\uff0c\u5176\u8a9e\u97f3\u7279\u5fb5\u6642\u9593\u5e8f\u5217\u6703\u53d7\u5230\u5f71\u97ff\u800c\u5931\u771f\uff0c\u53ca\u5176\u8abf\u8b8a\u983b\u8b5c\u4e5f\u6703\u8ddf\u8457\u53d7
\u5230\u727d\u9023\u3002\u5f88\u591a\u5b78\u8005\u63d0\u51fa\u4f5c\u7528\u5728\u8abf\u8b8a\u983b\u8b5c\u7684\u6b63\u898f\u5316\u6cd5\uff0c\u4ee5\u6539\u5584\u8abf\u8b8a\u983b\u8b5c\u53d7\u5230\u96dc\u8a0a\u5e72\u64fe\u7684\u5f71
\u97ff\u3002\u56e0\u6b64\uff0c\u6211\u5011\u53ef\u5c07\u8a31\u591a\u767c\u5c55\u5728\u8a9e\u97f3\u7279\u5fb5\u6642\u9593\u5e8f\u5217\u7684\u6b63\u898f\u5316\u6cd5\u61c9\u7528\u5728\u8abf\u8b8a\u983b\u8b5c\u4f7f\u5176\u6b63\u898f
\u5316\uff1b\u800c\u6b63\u898f\u5316\u7684\u5c0d\u8c61\u662f\u5c0d\u5176\u8abf\u8b8a\u983b\u8b5c\u5f37\u5ea6(Magnitude)\u6210\u5206|X[k]|\u4f86\u9032\u884c\u8655\u7406\uff0c\u4e26\u4fdd\u6301\u5176\u76f8
\u4f4d\u89d2\u4e0d\u8b8a\u03b8[k]=\u2220X[k]\u7684\u90e8\u5206\u3002\u63a5\u8457\uff0c\u7d93\u8655\u7406\u5f8c\u88ab\u66f4\u65b0\u7684\u5f37\u5ea6\u6210\u5206\u6703\u8207\u539f\u59cb\u76f8\u4f4d\u6210\u5206\u7d50\u5408\uff0c
\u518d\u85c9\u7531\u53cd\u5085\u7acb\u8449\u8f49\u63db(Inverse Discrete Fourier Transform, IDFT)\u4f86\u6c42\u5f97\u65b0\u7684\u8a9e\u97f3\u7279\u5fb5\u6642\u9593
\u5e8f\u5217\u3002\u82e5\u8abf\u8b8a\u983b\u8b5c\u7684\u5f37\u5ea6\u80fd\u5920\u88ab\u6709\u6548\u7684\u6b63\u898f\u5316\uff0c\u4fbf\u80fd\u5920\u6709\u6548\u89e3\u6c7a\u96dc\u8a0a\u7522\u751f\u7684\u74b0\u5883\u4e0d\u5339\u914d
\u554f\u984c\uff0c\u4f7f\u81ea\u52d5\u8a9e\u97f3\u8fa8\u8b58\u7cfb\u7d71\u5728\u4f7f\u7528\u65b0\u7684\u8a9e\u97f3\u7279\u5fb5\u7684\u60c5\u6cc1\u4e0b\u80fd\u5920\u7372\u5f97\u8f03\u4f73\u7684\u8fa8\u8b58\u7387\u3002\u4ee5\u4e0b \u7b49\u3002 \u672c\u8ad6\u6587\u65e8\u5728\u63a2\u7a76\u4f7f\u7528\u975e\u8ca0\u77e9\u9663\u5206\u89e3(Nonnegative Matrix Factorization, NMF)\u4ee5\u53ca\u4e00\u4e9b \u5c07\u6703\u7c21\u55ae\u56de\u9867\u4e00\u4e9b\u5e38\u898b\u7684\u8abf\u8b8a\u983b\u8b5c\u6b63\u898f\u5316\u6cd5\u3002
\u6539\u9032\u65b9\u6cd5\u4f86\u6b63\u898f\u5316\u8abf\u8b8a\u983b\u8b5c\u5f37\u5ea6\u6210\u5206\uff0c\u4ee5\u7372\u5f97\u8f03\u5177\u5f37\u5065\u6027\u7684\u8a9e\u97f3\u7279\u5fb5\u3002\u9996\u5148\uff0c\u7d50\u5408\u7a00\u758f \u6027\u7684\u6982\u5ff5\uff0c\u671f\u671b\u80fd\u5920\u6c42\u53d6\u5230\u5177\u8abf\u8b8a\u983b\u8b5c\u5c40\u90e8\u6027\u7684\u8cc7\u8a0a\u4ee5\u53ca\u91cd\u758a\u8f03\u5c11\u7684 NMF \u57fa\u5e95\u5411\u91cf\u8868 2.2 \u8abf\u8b8a\u983b\u8b5c\u5e73\u5747\u6b63\u898f\u5316\u6cd5(\u5047\u8a2d\u7576\u5404\u7a2e\u97f3\u7d20\u5728\u4e00\u822c\u74b0\u5883\u4e2d\u5206\u5e03\u7684\u6bd4\u4f8b\u63a5\u8fd1\u4e00\u81f4\u6642\uff0c\u6bcf\u4e00\u7dad\u5ea6\u8a9e\u97f3\u7279\u5fb5\u7684\u8abf\u8b8a\u983b\u8b5c\u4e4b \u793a\u3002\u5176\u6b21\uff0c\u57fa\u65bc\u5c40\u90e8\u4e0d\u8b8a\u6027\u7684\u6982\u5ff5\uff0c\u5e0c\u671b\u767c\u97f3\u5167\u5bb9\u76f8\u4f3c\u7684\u8a9e\u53e5\u4e4b\u8abf\u8b8a\u983b\u8b5c\u5f37\u5ea6\u6210\u5206\uff0c\u5728 NMF \u7a7a\u9593\u6709\u8d8a\u76f8\u8fd1\u7684\u5411\u91cf\u8868\u793a\u4ee5\u7dad\u6301\u8a9e\u53e5\u9593\u7684\u95dc\u806f\u7a0b\u5ea6\u3002\u518d\u8005\uff0c\u5728\u6e2c\u8a66\u968e\u6bb5\u7d93\u7531\u6b63\u898f\u5316 \u5e73\u5747\u503c\u61c9\u8a72\u70ba\u4e00\u500b\u5b9a\u503c(Huang et al., 2009)\uff1a
NMF \u4e4b\u7de8\u78bc\u5411\u91cf\uff0c\u66f4\u9032\u4e00\u6b65\u63d0\u5347\u8a9e\u97f3\u7279\u5fb5\u4e4b\u5f37\u5065\u6027\u3002\u6700\u5f8c\uff0c\u6211\u5011\u4e5f\u7d50\u5408\u4e0a\u8ff0\u4e09\u7a2e NMF | | (2)
\u7684\u6539\u9032\u65b9\u6cd5\u3002\u6b64\u5916\uff0c\u4e5f\u5617\u8a66\u5c07\u6211\u5011\u6240\u63d0\u51fa\u7684\u6539\u9032\u65b9\u6cd5\u8207\u4e00\u4e9b\u73fe\u6709\u7684\u7279\u5fb5\u5f37\u5065\u6280\u8853\u505a\u6bd4\u8f03 \u5728\u5f0f(2)\u4e2d\uff0c| |\u70ba\u539f\u59cb\u7684\u8abf\u8b8a\u983b\u8b5c\u5f37\u5ea6\u6210\u5206\uff0c \u70ba\u55ae\u4e00\u8a9e\u53e5\u7684\u8abf\u8b8a\u983b\u8b5c\u5f37\u5ea6\u6210\u5206\u4e4b\u5e73
\u548c\u7d50\u5408\uff0c\u4ee5\u9a57\u8b49\u9019\u4e9b\u6539\u9032\u65b9\u6cd5\u4e4b\u5be6\u7528\u6027\u3002 \u5747\u503c\uff0c \u70ba\u6240\u6709\u8a13\u7df4\u8a9e\u53e5\u7684\u8abf\u8b8a\u983b\u8b5c\u5f37\u5ea6\u6210\u5206\u4e4b\u5e73\u5747\u503c\uff0c\u800c\u4fbf\u662f\u66f4\u65b0\u904e\u5f8c\u7684\u8abf\u8b8a\u983b
2. \u8abf\u8b8a\u983b\u8b5c\u6b63\u898f\u5316\u6cd5 \u8b5c\u5f37\u5ea6\u6210\u5206\u3002
2.1 \u8abf\u8b8a\u983b\u8b5c\u4e4b\u7c21\u4ecb
\u5c0d\u65bc\u4efb\u4e00\u7279\u5b9a\u7dad\u5ea6\u8a9e\u97f3\u983b\u8b5c\u7279\u5fb5\u6240\u6210\u7684\u6642\u9593\u5e8f\u5217 x[n]\u800c\u8a00\uff0c\u5176\u8abf\u8b8a\u983b\u8b5c\u5b9a\u7fa9\u5982\u4e0b\uff1a
" }, "TABREF31": { "text": ").", "num": null, "type_str": "table", "html": null, "content": "
Keywords: Deep Learning, Stacked Autoencoders, Couple Therapy, Human Behavior Analysis, Emotion Recognition
1. \u7dd2\u8ad6
\u4eba\u8207\u4eba\u4e4b\u9593\u4ea4\u8ac7\u4e92\u52d5\uff0c\u5e38\u900f\u904e\u8a9e\u8a00\u50b3\u9054\u5f7c\u6b64\u7684\u60f3\u6cd5\uff0c\u4e26\u5728\u9019\u4ea4\u8ac7\u904e\u7a0b\u4e2d\u5f97\u77e5\u96d9\u65b9\u7684\u884c\u70ba
\u53cd\u61c9\u3002\u5229\u7528\u4eba\u70ba\u89c0\u5bdf\u4f86\u5206\u6790\u96d9\u65b9\u884c\u70ba\u53cd\u61c9\uff0c\u9019\u90e8\u5206\u6700\u65e9\u5e38\u61c9\u7528\u5728\u5fc3\u7406\u5b78\u548c\u7cbe\u795e\u5b78\u65b9\u9762
" }, "TABREF32": { "text": "\u900f\u904e\u8a9e\u97f3\u7279\u5fb5\u5efa\u69cb\u57fa\u65bc\u5806\u758a\u7a00\u758f\u81ea\u7de8\u78bc\u5668\u6f14\u7b97\u6cd5\u4e4b109\u5a5a\u59fb\u6cbb\u7642\u4e2d\u592b\u59bb\u4e92\u52d5\u884c\u70ba\u91cf\u8868\u81ea\u52d5\u5316\u8a55\u5206\u7cfb\u7d71\u672c\u8ad6\u6587\u5229\u7528 BSP \u7684\u57fa\u672c\u601d\u8def\u61c9\u7528\u5728\u5a5a\u59fb\u6cbb\u7642\u8cc7\u6599\u5eab\u4e0a\u9762(Christensen et al., 2004)\uff0c \u7a2e\u884c\u70ba\uff0c\u56e0\u70ba\u548c\u5176\u4ed6 26 \u7a2e\u884c\u70ba\u8a55\u5206\u6bd4\u8d77\u4f86\uff0c\u9019 6 \u7a2e\u6709\u8f03\u9ad8\u7684\u8a55 \u5206\u8005\u8a8d\u540c\u5ea6(Agreement)\uff0c\u8a8d\u540c\u5ea6\u7684\u8a08\u7b97\u65b9\u5f0f\u70ba\u500b\u5225\u8a55\u5206\u8005\u7684\u5206\u6578\u548c\u5176\u4ed6\u8a55\u5206\u8005\u8a55\u5206\u7684\u5e73 \u5747\u5206\u6578\u53d6\u76f8\u95dc\u4fc2\u6578(correlation)\u3002\u5176\u9918\u884c\u70ba\u7684\u8a8d\u540c\u5ea6\u4ecb\u65bc 0.4 \u548c 0.7 \u4e4b\u9593\uff0c\u7b2c\u4e94\u7ae0\u7bc0\u6703\u6bd4\u8f03 \u9019 6 \u7a2e\u884c\u70ba\u9810\u6e2c\u6e96\u78ba\u7387\u3002 \u8868 2. \u5c0d\u65bc 6 \u7a2e\u884c\u70ba\u6e96\u5247\u7684\u8a8d\u540c\u5ea6(agreement)", "num": null, "type_str": "table", "html": null, "content": "
110 112\u900f\u904e\u8a9e\u97f3\u7279\u5fb5\u5efa\u69cb\u57fa\u65bc\u5806\u758a\u7a00\u758f\u81ea\u7de8\u78bc\u5668\u6f14\u7b97\u6cd5\u4e4b\u9673\u67cf\u8ed2\u8207\u674e\u7948\u5747 111 \u9673\u67cf\u8ed2\u8207\u674e\u7948\u5747
\u5a5a\u59fb\u6cbb\u7642\u4e2d\u592b\u59bb\u4e92\u52d5\u884c\u70ba\u91cf\u8868\u81ea\u52d5\u5316\u8a55\u5206\u7cfb\u7d71
\u5a5a\u59fb\u6cbb\u7642\u8cc7\u6599\u5eab\u6703\u8a73\u7d30\u8aaa\u660e\u5728\u7b2c\u4e8c\u7ae0\u3002\u9019\u500b\u8cc7\u6599\u5eab\u7d00\u9304\u4e86\u592b\u59bb\u5728\u4e00\u6bb5\u5c0d\u8a71\u4e2d\u8ac7\u8ff0\u4e86\u4ed6\u5011 \u6240\u9078\u64c7\u5a5a\u59fb\u4e2d\u7684\u554f\u984c\u3002\u8a55\u5206\u8005\u5728\u6839\u64da\u4ed6\u5011\u4e00\u6bb5\u8a71\u7684\u7a2e\u7a2e\u884c\u70ba\u6839\u64da\u4e0d\u540c\u884c\u70ba\u91cf\u8868\u9032\u884c\u8a55\u5206 (\u5e7d\u9ed8\u884c\u70ba\u3001\u60b2\u50b7\u884c\u70ba\u7b49\u7b49)\u3002 \u5ef6\u7e8c\u4e0a\u7bc7\u8ad6\u6587\u7684\u7814\u7a76\u5167\u5bb9\u4f86\u81ea\u52d5\u5316\u5206\u6790\u592b\u59bb\u4e00\u6bb5\u5c0d\u8a71\u7684\u884c\u70ba\u5206\u6578(Black et al., 2013)\uff0c \u4e00\u6bb5\u8a9e\u97f3\u7d93\u904e\u9810\u8655\u7406\uff0c\u4e4b\u5f8c\u4f5c\u8072\u97f3\u7279\u5fb5\u64f7\u53d6(acoustic feature extraction)\uff0c\u518d\u4f7f\u7528\u6a5f\u5668\u5b78\u7fd2 \u4f86\u4f5c\u5206\u985e\u8fa8\u8b58\uff0c\u5f97\u5230\u6700\u5f8c\u7684\u6e96\u78ba\u7387\u3002\u5176\u4e2d\uff0c\u7279\u5fb5\u64f7\u53d6\u548c\u6a5f\u5668\u5b78\u7fd2\u7684\u7b97\u6cd5\u90fd\u6703\u5f71\u97ff\u6700\u5f8c\u7684 \u6e96\u78ba\u7387\uff0c\u601d\u8003\u5982\u4f55\u6539\u9032\u9019\u4e9b\u5f71\u97ff\u56e0\u7d20\uff0c\u5c0d\u6574\u9ad4\u6e96\u78ba\u7387\u7684\u63d0\u5347\u662f\u4e00\u5927\u91cd\u8981\u7684\u8ab2\u984c\uff0c\u4e5f\u662f\u6211 \u7814\u7a76\u5718\u968a\u3002\u900f\u904e\u9019 10 \u5206\u9418\u7684\u5c0d\u8a71\u8b93\u592b\u59bb\u5f7c\u6b64\u4e86\u89e3\u96d9\u65b9\u4e4b\u9593\u7684\u554f\u984c\u4e26\u4e14\u8a66\u5716\u89e3\u6c7a\u7576\u524d\u554f \u984c\u3002 \u6bcf\u5c0d\u592b\u59bb\u7686\u6703\u9032\u884c\u4e09\u500b\u4e0d\u540c\u968e\u6bb5\u7684\u5c0d\u8a71\uff0c\u6cbb\u7642\u524d\u3001\u6cbb\u7642\u4e2d\u548c\u6cbb\u7642\u5169\u5e74\u5f8c\u3002\u900f\u904e\u9019\u4e09 \u500b\u6642\u9593\u9ede\u5c0d\u8a71\uff0c\u518d\u7d93\u7531\u591a\u4f4d\u6709\u5c08\u696d\u80cc\u666f\u7684\u8a55\u5206\u8005\u7d93\u7531\u5169\u500b\u884c\u70ba\u8a55\u5206\u91cf\u8868\uff0c\u57fa\u65bc\u793e\u4ea4\u4e92\u52d5 \u884c\u70ba\u8a55\u5206\u7cfb\u7d71(Social Support Interaction Rating System, SSIRS) (Jones & Christensen, 1998) \u548c\u57fa\u65bc\u592b\u59bb\u4e92\u52d5\u884c\u70ba\u8a55\u5206\u7cfb\u7d71(Couples Interaction Rating System, CIRS) (Heavey et al., \u5f9e\u5716 1\uff0c\u8f38\u5165\u503c \uff0c 1,2, \u2026 , \uff0c \u2208 \uff0c\u96b1\u85cf\u5c64(hidden layer)\u4e2d\u7684 \uff0c 1,2, \u2026 , \uff0c \u2208 \uff0c\u6b0a\u91cd\u77e9\u9663(weight matrix) \u2208 \uff0c\u504f\u79fb\u5411\u91cf(bias vector) \u2208 \u3002\u7531\u9019\u4e9b\u56e0 \u5b50(factor)\u69cb\u6210\u6fc0\u6d3b\u51fd\u6578(activation function)\uff0c\u5982\u5f0f(1)\u3002 (1) \u5176\u4e2d 1/ 1 \u70ba sigmoid function \u3002\u8f38\u51fa\u503c \uff0c 1,2, \u2026 , , \u2208 \uff0c \u8868 2\u3002\u4e4b\u6240\u4ee5\u6703\u9078\u64c7\u9019 6 Code Agreement \u6b0a\u91cd\u77e9\u9663 \u2208 \uff0c\u504f\u79fb\u5411\u91cf \u2208 \uff0c\u81ea\u7de8\u78bc\u5668\u8f38\u51fa\u70ba\u5f0f(2): 2002)\u9032\u884c\u8a55\u5206\uff0c\u4f9d\u64da\u8a55\u5206\u7d50\u679c\u4f86\u4e86\u89e3\u6cbb\u7642\u7684\u6210\u6548\u3002SSIRS \u4e3b\u8981\u5305\u542b 19 \u7a2e\u884c\u70ba\u6e96\u5247\u5728\u56db \u500b\u793e\u4ea4\u4e92\u52d5\u5206\u985e\u88e1\uff0c\u60c5\u611f(affectivity)\u3001\u5c48\u5f9e\u670d\u5f9e(dominance/submission)\u3001\u4e92\u52d5\u8868\u73fe\u884c\u70ba Acceptance of other (acc) 0.751 (2)
\u5011\u63d0\u51fa\u9019\u7bc7\u8ad6\u6587\u7684\u56e0\u7d20\u4e4b\u4e00\u3002 (feature of interaction)\u548c\u4e3b\u984c\u8a55\u50f9(topic definition)\u4f86\u4f5c\u70ba\u8a55\u5206\u7684\u5167\u5bb9\uff0cCIRS \u4e3b\u8981\u5305\u542b 13 Blame (bla) 0.788 \u70ba \u4e86 \u8981 \u6c42 \u5f97 \u6b0a \u91cd \u77e9 \u9663 \u548c \uff0c \u504f \u79fb \u5411 \u91cf \u548c \uff0c \u5047 \u8a2d \u4e00 \u500b \u6a23 \u672c \u96c6 \u70ba
\u5728\u7279\u5fb5\u64f7\u53d6\u65b9\u9762\uff0c\u6211\u5011\u6cbf\u7528\u4e09\u7a2e\u4f4e\u968e\u8a9e\u97f3\u7279\u5fb5(Low Level Descriptors, LLDs)\uff0c\u8a9e\u97fb \u7a2e\u884c\u70ba\u6e96\u5247\u95dc\u65bc\u592b\u59bb\u4e92\u52d5\u554f\u984c\u89e3\u6c7a\u65b9\u9762\uff0c\u5982\u8868 1\u3002 Global positive affect (pos) 0.740 , , , \u2026 , \uff0c\u6709 m \u7d44\u6a23\u672c\uff0c \u70ba\u6a23\u672c\u8f38\u5165\u7279\u5fb5\u503c\uff0c \u70ba\u5c0d\u61c9\u6a19\u7c64\u503c\uff0c\u5229
(prosodic) LLDs\u3001\u983b\u8b5c(spectrum) LLDs \u548c\u97f3\u8cea(voice quality) LLDs\u3002\u5207\u5272\u4e09\u7a2e\u8aaa\u8a71\u8005\u8aaa \u8868 1. 32 \u7a2e\u4eba\u985e\u884c\u70ba\u6e96\u5247\u5305\u542b\u5728\u5169\u7a2e\u884c\u70ba\u91cf\u8868 SSIRS \u548c CIRS Global negative affect (neg) 0.798 \u7528\u4ee3\u50f9\u51fd\u6578(cost function)\uff0c\u5982\u5f0f(3)\u3002
\u8a71\u5340\u9593(speaker domain)\uff0c\u4e08\u592b\u8aaa\u8a71\u5340\u9593\u3001\u592a\u592a\u8aaa\u8a71\u5340\u9593\u3001\u548c\u4e0d\u5206\u4eba\u8aaa\u8a71\u5340\u9593\u3002\u518d\u4f86\u5c0d\u61c9 \u5404\u5340\u9593\u63d0\u53d6 20%\u8a9e\u53e5\uff0c\u7d93\u904e 7 \u7a2e\u7d71\u8a08\u51fd\u6578(functionals)\uff0c\u7522\u751f 2940 \u7a2e\u7279\u5fb5\u503c\u3002\u6700\u5f8c\u6211\u5011 use of humor\u3001sadness\u3001anger/frustration\u3001 \u5229\u7528\u975e\u76e3\u7763\u6df1\u5ea6\u5b78\u7fd2\u7684\u505a\u6cd5\u4f86\u964d\u7dad\u627e\u51fa\u76f8\u5c0d\u95dc\u9375\u7684\u4e3b\u8981\u7279\u5fb5\u503c\u8868\u73fe\u3002 Manual Codes Global positive affect\u3001global negative affect Sadness (sad) 0.722 Use of humor (hum) 0.755 1 1 , 2 2 ,
\u6df1\u5ea6\u5b78\u7fd2\u5728\u6a5f\u5668\u5b78\u7fd2\u9818\u57df\u88e1\u9762\u662f\u6700\u8fd1\u71b1\u9580\u7684\u8a71\u984c (Hinton, 2006)\u3002\u6df1\u5ea6\u5b78\u7fd2\u53ef\u770b\u6210\u662f belligerence/domineering\u3001contempt/disgust\u3001 3. \u7814\u7a76\u65b9\u6cd5
\u4e00\u7a2e\u8cc7\u8a0a\u7684\u8868\u9054\u65b9\u5f0f\uff0c\u5229\u7528\u591a\u5c64\u795e\u7d93\u7db2\u7d61\uff0c\u7b2c\u4e00\u5c64\u8f38\u5165\u7684\u6578\u64da\u5b78\u7fd2\u4e4b\u5f8c\uff0c\u7522\u751f\u65b0\u7684\u7d44\u5408 \u8f38\u51fa\uff0c\u8f38\u51fa\u503c\u70ba\u7b2c\u4e8c\u5c64\u7684\u8f38\u5165\u503c\uff0c\u518d\u7d93\u7531\u5b78\u7fd2\u7522\u751f\u65b0\u7684\u8f38\u51fa\u503c\uff0c\u4f9d\u6b64\u985e\u63a8\u91cd\u8986\u628a\u6bcf\u5c64\u7684 \u8cc7\u8a0a\u5806\u758a\u4e0b\u53bb\uff0c\u900f\u904e\u9019\u6a23\u591a\u5c64\u5b78\u7fd2\uff0c\u53ef\u4ee5\u5f97\u5230\u5c0d\u4e00\u500b\u76ee\u6a19\u503c\u597d\u7684\u7279\u5fb5\u8868\u793a\uff0c\u76f8\u5c0d\u6e96\u78ba\u7387 SSIRS (Social Support Interaction Rating System) tension/anxiety \u3001 defensiveness \u3001 affection \u3001 \u5728\u672c\u7bc0, \u6211\u5011\u9996\u5148\u7c21\u55ae\u7684\u4ecb\u7d39\u81ea\u7de8\u78bc\u5668(Autoencoder)\u548c\u5806\u758a\u7a00\u758f\u81ea\u7de8\u78bc(Stacked Sparse satisfaction \u3001 solicits partner suggestions \u3001 instrumental support offered \u3001 emotional Autoencoder, SSAE)\u57fa\u672c\u67b6\u69cb\u4ee5\u53ca\u672c\u7bc7\u8ad6\u6587\u7528\u5230\u7684\u6f14\u7b97\u6cd5\u3002
\u5c31\u80fd\u6709\u6240\u63d0\u5347\u3002\u81f3\u4eca\u5b58\u5728\u591a\u7a2e\u6df1\u5ea6\u5b78\u7fd2\u6846\u67b6\u5982\u6df1\u5ea6\u795e\u7d93\u7db2\u8def(DNN)\u3001\u6df1\u5ea6\u4fe1\u5ff5\u7db2\u8def(DBN) \u548c\u5377\u7a4d\u795e\u7d93\u7db2\u8def(CNN)\u5df2\u88ab\u61c9\u7528\u5728\u8a9e\u97f3 (Hinton et al., 2012)\u3001\u5f71\u50cf\u8fa8\u8b58 (Smirnov et al., 2014)\u548c\u624b\u5beb\u8b58\u5225 (Perwej & Chaturvedi, 2011)\u7b49\u7b49\u3002 \u6211\u5011\u5229\u7528\u6df1\u5ea6\u5b78\u7fd2\u4e2d\u7684\u5806\u758a\u7a00\u758f\u81ea\u7de8\u78bc\u5668(stacked sparse autoencoder, SSAE) \uff0c\u964d\u4f4e \u7279\u5fb5\u503c\u7dad\u5ea6\uff0c\u63d0\u5347\u7279\u5fb5\u503c\u6574\u9ad4\u76f8\u95dc\u6027\uff0c\u6700\u5f8c\u5229\u7528\u7c21\u55ae LR \u8fa8\u8b58\u884c\u70ba\u5206\u6578\u9ad8\u4f4e\u3002\u6b64\u521d\u671f\u7814 support offered\u3001submissive or dominant \u3001topic 3.1 \u81ea\u7de8\u78bc\u5668(Autoencoder) a relationship issue\u3001topic a personal issue\u3001 \u6df1\u5ea6\u5b78\u7fd2\u4e2d\u81ea\u7de8\u78bc\u5668\u5229\u7528\u975e\u76e3\u7763\u5b78\u7fd2\u65b9\u5f0f (Rubanov, 2000)\uff0c\u76ee\u6a19\u5f9e\u9ad8\u7dad\u5ea6\u7684\u8f38\u5165\u7279\u5fb5\u503c discussion about husband\u3001discussion about \u5b78\u7fd2\u5230\u66f4\u5177\u4ee3\u8868\u6027\u7684\u7279\u5fb5\u503c\uff0c\u6700\u5f8c\u900f\u904e\u89e3\u78bc\u8b93\u8f38\u51fa\u503c\u7b49\u65bc\u8f38\u5165\u503c\uff0c\u57fa\u672c\u7684\u81ea\u7de8\u78bc\u5668\u67b6\u69cb wife Acceptance of other\u3001blame\u3001responsibility for \u5982\u5716 1\u3002
\u7a76\u7d50\u679c\u986f\u793a\u6574\u9ad4\u884c\u70ba\u5e73\u5747\u6e96\u78ba\u7387 75%\u8f03\u4e4b\u524d\u7814\u7a76\u4f7f\u7528 40479 \u7dad\u7279\u5fb5\u503c\u7d50\u5408\u652f\u6301\u5411\u91cf\u5668 self \u3001solicits partner perspective\u3001states external
(support vector machine) (Black et al., 2013)\u63d0\u5347\u4e86 0.9%\u3002 CIRS origins\u3001discussion\u3001clearly defines problem\u3001
\u4ee5\u4e0b\u7c21\u8ff0\u5404\u7ae0\u7bc0\u7684\u5167\u5bb9\u3002\u7b2c\u4e8c\u7ae0\u4ecb\u7d39\u672c\u7bc7\u8ad6\u6587\u6240\u4f7f\u7528\u7684\u8cc7\u6599\u5eab(database)\uff0c\u7b2c\u4e09\u7ae0\u4ecb (Couples Interaction Rating System) offers solutions \u3001 negotiates \u3001 make
\u7d39\u6211\u5011\u4f7f\u7528\u7684 SSAE \u67b6\u69cb\u548c\u5176\u6f14\u7b97\u6cd5\uff0c\u7b2c\u56db\u7ae0\u4ecb\u7d39\u6211\u5011\u63d0\u51fa\u7684\u7cfb\u7d71\u67b6\u69cb\u548c\u7814\u7a76\u7d50\u679c\uff0c\u7b2c agreements \u3001 pressures for change \u3001
\u4e94\u7ae0\u7bc0\u70ba\u7d50\u8ad6\u3002withdraws \u3001avoidance
2. \u5a5a\u59fb\u6cbb\u7642\u8cc7\u6599\u5eab \u7e3d\u5171 32 \u500b\u884c\u70ba\u6e96\u5247\uff0c\u6bcf\u500b\u884c\u70ba\u8a55\u5206\u5340\u9593\u70ba 1 \u5230 9 \u5206\u3002\u540c\u4e00\u5c0d\u8a71\u4e2d\uff0c\u4e08\u592b\u8207\u59bb\u5b50\u6703\u5404\u5225\u88ab
\u70ba\u4e86\u6e2c\u8a66\u6211\u5011\u63d0\u51fa\u65b9\u6cd5\u7684\u6e96\u78ba\u7387\uff0c\u6211\u5011\u4f7f\u7528\u548c\u4e4b\u524d\u8ad6\u6587\u76f8\u540c\u7684\u5a5a\u59fb\u6cbb\u7642\u8cc7\u6599\u5eab(couple \u8a55\u5206\u30021 \u70ba\u5c0d\u9019\u9805\u884c\u70ba\u6240\u8868\u73fe\u7684\u7a0b\u5ea6\u6700\u4f4e\uff0c9 \u70ba\u5c0d\u9019\u9805\u884c\u70ba\u6240\u8868\u73fe\u7684\u7a0b\u5ea6\u6700\u9ad8\u3002\u8a55\u5206\u8005\u70ba
therapy database)\u3002\u4ee5\u4e0b\u7c21\u55ae\u7684\u4ecb\u7d39\u7684\u8cc7\u6599\u5eab\u76f8\u95dc\u5167\u5bb9\uff1a\u6b64\u8cc7\u6599\u5eab\u7684\u6536\u96c6\u662f\u57fa\u65bc\u7814\u7a76\u7d9c\u5408 3 \u5230 4 \u500b\uff0c\u900f\u904e\u89c0\u5bdf\u592b\u59bb 10 \u5206\u9418\u7684\u5f71\u7247\u4f86\u5404\u5225\u5c0d 32 \u500b\u884c\u70ba\u9032\u884c\u8a55\u5206\u3002\u6700\u5f8c\u7e3d\u5171\u6709 569
\u884c\u70ba\u592b\u5a66\u6cbb\u7642(integrative behavioral couple therapy, IBCT)\u6210\u6548 (Christensen et al., 1995)\u3002 \u500b 10 \u5206\u9418\u7684\u6703\u8a71\uff0c117 \u5c0d\u592b\u59bb\u5728\u9019\u500b\u5a5a\u59fb\u6cbb\u7642\u5eab\u88e1\u3002
\u8cc7\u6599\u5167\u5bb9\u91dd\u5c0d 134 \u5c0d\u592b\u59bb\uff0c\u6bcf\u5c0d\u90fd\u9577\u671f\u60a3\u6709\u5a5a\u59fb\u7684\u554f\u984c\uff0c\u5982\u592b\u59bb\u76f8\u8655\u4e0d\u878d\u6d3d\u6216\u662f\u722d\u57f7\u3002 \u672c \u7bc7 \u8ad6 \u6587 \u5ef6 \u7e8c \u4e0a \u4e00 \u7bc7 \u8ad6 \u6587 \u6240 \u4f7f \u7528 \u7684 6 \u7a2e \u884c \u70ba \u4f86 \u4e0b \u53bb \u4f5c \u5206 \u6790 \uff0c \u5305 \u542b \u8a8d \u540c \u5c0d \u65b9
\u6cbb\u7642\u5167\u5bb9\u70ba\u6bcf\u5c0d\u592b\u59bb\u63a5\u53d7\u70ba\u671f\u4e00\u5e74\u7684\u6cbb\u7642\uff0c\u7814\u7a76\u5718\u968a\u518d\u8b93\u6bcf\u5c0d\u592b\u59bb\u7531\u592a\u592a\u548c\u4e08\u592b\u5404 (Acceptance of other)\u3001\u8cac\u5099\u884c\u70ba(Blame)\u3001\u592b\u59bb\u4e4b\u9593\u6b63\u9762\u7684\u4e92\u52d5(Global positive affect)\u3001\u592b
\u5225\u9078\u64c7\u4e00\u500b\u76ee\u524d\u5b58\u5728\u56b4\u91cd\u5a5a\u59fb\u554f\u984c\u7684\u984c\u76ee\u4f86\u4f5c\u70ba\u4e00\u6bb5 10 \u5206\u9418\u5c0d\u8a71\uff0c\u5c0d\u8a71\u4e2d\u6c92\u6709\u6cbb\u7642\u5e2b\u548c \u59bb\u4e4b\u9593\u8ca0\u9762\u7684\u4e92\u52d5(Global positive affect)\u3001\u60b2\u50b7\u884c\u70ba(sadness)\u3001\u5e7d\u9ed8\u8868\u73fe\u884c\u70ba(humor)\uff0c\u5982 \u5716 1. \u81ea\u7de8\u78bc\u5668
" }, "TABREF33": { "text": "\u4e2d\u7b2c\u4e00\u9805\u70ba\u5747\u65b9\u5dee\u9805( sum-of-squares error term)\uff0c\u7b2c\u4e8c\u9805\u70ba\u898f\u5247\u9805(regularization term)\uff0c\u5176\u4e2d\u03bb\u70ba\u6b0a\u91cd\u8870\u6e1b\u53c3\u6578(weight decay parameter)\uff0cn \u70ba\u81ea\u7de8\u78bc\u5668\u5c64\u6578\uff0c \u70ba\u7b2c l \u5c64 \u7bc0\u9ede\u6578\uff0c\u9019\u9805\u662f\u70ba\u4e86\u907f\u514d\u8a13\u7df4\u904e\u7a0b\u767c\u751f\u904e\u64ec\u5408(overfitting)\uff0c\u4e4b\u5f8c\u6211\u5011\u5229\u7528\u53cd\u5411\u50b3\u5c0e \u5be6\u9a57\u67b6\u69cb \u6211\u5011\u4f7f\u7528 3 \u5c64\u96b1\u85cf\u5c64\u7684 SSAE \u4f5c\u70ba\u975e\u76e3\u7763\u5b78\u7fd2\u7684\u67b6\u69cb\uff0c\u4f86\u5f9e\u4f4e\u5c64\u7d1a\u7279\u5fb5(low level feature) \u8a13\u7df4\u6210\u9ad8\u5c64\u7d1a\u7279\u5fb5(high level feature)\uff0c\u7136\u5f8c\u7528 LR \u4f86\u76e3\u7763\u5b78\u7fd2\u4f5c\u8fa8\u8b58\uff0c\u672c\u5be6\u9a57\u7b2c\u4e00\u5c64\u7a00\u758f \u81ea\u7de8\u78bc\u5668\u67b6\u69cb\u5982\u5716 3\u3002 \u6240\u5217\u7684 7 \u7a2e functionals \u8655\u7406\u904e\u5f8c\uff0c\u7522\u751f\u6700\u5f8c 2940 \u500b\u7279\u5fb5\u503c\u3002\u5728\u8f38\u5165 SSAE \u4ee5\u524d\uff0c\u6211\u5011\u628a\u9019\u4e9b\u7279\u5fb5 \u503c\u6b63\u898f\u5316\u5728 0 \u548c 1 \u7684\u5340\u9593\u3002\u8a73\u7d30\u7684\u7279\u5fb5\u503c\u5167\u5bb9\u53ef\u53c3\u8003 (Black et al., 2013) \u3002 \u8cc7\u6599 \u7531\u539f\u672c\u8cc7\u6599\u5eab 569 \u7b46\u5c0d\u8a71\u3001117 \u5c0d\u592b\u59bb\uff0c\u7d93\u7531\u4e0a\u7bc7\u8ad6\u6587\u9810\u8655\u7406\u904e\u5f8c(Black et al., 2013)\uff0c\u7522 \u751f\u6700\u5f8c\u7684 372 \u7b46\u5c0d\u8a71\u3001104 \u5c0d\u592b\u59bb\u3002\u5728 372 \u7b46\u5c0d\u8a71\u88e1\u9762\u4e08\u592b\u548c\u592a\u592a\u90fd\u6703\u88ab\u8a55\u5206\u5230\uff0c\u5c0d\u61c9 \u5728 6 \u7a2e\u884c\u70ba\u6e96\u5247\uff0c\u6211\u5011\u9078\u64c7\u524d 20%\u7684\u5206\u6578\u548c\u5f8c 20%\u7684\u5206\u6578\u7684\u5c0d\u8a71\u7576\u4f5c\u5be6\u9a57\u7684\u8fa8\u8b58\uff0c\u5171 140 \u7b46\u5c0d\u8a71\uff1a\u5169\u7a2e\u6a19\u7c64\u503c 0 \u548c 1\uff0c1 \u70ba\u5c0d\u61c9\u5230\u9ad8\u5206\uff0c0 \u70ba\u5c0d\u61c9\u5230\u4f4e\u5206\u3002\u800c\u5728\u9019\u4e9b\u53d6\u51fa\u4f86\u88ab\u9810\u6e2c \u7684\u5c0d\u8a71\u88e1\uff0c\u592b\u59bb\u6578\u4ecb\u65bc 68 \u5230 77 \u5c0d\uff0c\u5229\u7528\u9019\u4e9b\u884c\u70ba\u5c0d\u61c9\u5230\u592b\u59bb\u5c0d\u6578\u4f86\u4f5c\u4ea4\u53c9\u9a57\u8b49\uff0c1 \u5c0d \u592b\u59bb\u4f5c\u9a57\u8b49\uff0c\u5176\u9918\u5c0d\u6578\u4f5c\u8a13\u7df4\uff0c\u91cd\u8907\u5faa\u74b0 6 \u7a2e\u884c\u70ba\u5c0d\u61c9\u5230\u7684\u592b\u59bb\u5c0d\u6578\u4f86\u4f5c\u9a57\u8b49\u3002 4.3 \u5be6\u9a57\u8a2d\u5b9a \u5728\u9019\u5be6\u9a57\u88e1\uff0c\u6211\u5011\u7528 SSAE \u4f86\u4f5c\u70ba\u975e\u76e3\u7763\u5b78\u7fd2\uff0cLR \u4f86\u76e3\u7763\u5b78\u7fd2\u9810\u6e2c\uff0c\u7559\u4e00\u5c0d\u592b\u59bb\u6cd5\u5247 (leave-one-couple-out)\u7684\u65b9\u5f0f\u4f86\u4f5c\u4ea4\u53c9\u9a57\u8b49\u3002\u4e00\u958b\u59cb\u5148\u7528\u8caa\u5a6a\u8a13\u7df4\u7b97\u6cd5(greedy layerwise) \u9010\u5c64\u9810\u5b78\u7fd2(pre-training)\uff0c\u8a13\u7df4\u5b8c\u53c3\u6578\u521d\u59cb\u503c\u8f38\u5165\u81f3 SSAE\uff0cSSAE \u6709\u4e94\u500b\u56e0\u5b50\u6703\u5f71\u97ff\u6700 \u5f8c\u7684\u8868\u73fe\uff0c\u5206\u5225\u662f\u96b1\u85cf\u5c64\u7bc0\u9ede(hidden units)\u3001\u8a08\u7b97\u640d\u5931\u51fd\u6578(cost function)\u7684\u758a\u4ee3\u6b21\u6578\u548c\u4e09 
\u500b\u8d85\u53c3\u6578(hyper-parameters)\u70ba\u03bb\u3001\u03c1\u3001\u03b2\uff0c\u03bb\u70ba\u6b0a\u91cd\u8870\u6e1b\u53c3\u6578(weight decay parameter)\uff0c\u03c1\u70ba\u7a00 \u758f\u53c3\u6578(sparsity parameter)\uff0c\u03b2\u70ba\u63a7\u5236\u7a00\u758f\u9805(sparsity term)\u7684\u53c3\u6578\uff0c\u9019\u4e9b\u53c3\u6578\u5728\u7b2c\u4e09\u7ae0\u6709\u4ecb \u7d39\u904e\u3002\u6211\u5011\u5148\u7528 1 \u5c64\u96b1\u85cf\u5c64\u4f86\u6e2c\u8a66\u6e96\u78ba\u7387\uff0c\u5982\u8868 4\u3002\u900f\u904e\u6539\u8b8a\u4e0d\u540c\u7684\u96b1\u85cf\u5c64\u7bc0\u9ede\u6578\uff0c\u6839 \u64da\u6e96\u78ba\u7387\u4f86\u6c7a\u5b9a\u6211\u5011\u4e0b\u4e00\u5c64\u6240\u4f7f\u7528\u7684\u96b1\u85cf\u5c64\u7bc0\u9ede\u6578\u3002 \u5982\u8868 4 \u53ef\u5f97\u77e5\uff0c\u96b1\u85cf\u5c64\u6578\u76ee\u70ba 300 \u7684\u6642\u5019\uff0c\u4e08\u592b\u548c\u592a\u592a\u88ab\u8a55\u5206\u7684 6 \u7a2e\u884c\u70ba\u5e73\u5747\u6e96\u78ba \u7387\u70ba\u6700\u9ad8\uff0c\u4f7f\u7528\u7684\u758a\u4ee3\u6b21\u6578\u70ba 15 \u6b21\uff0c\u03c1 0.1\uff0c\u03bb 0.002\uff0c\u03b2 2\u3002 \u63a5\u4e0b\u4f86\u6e2c\u8a66\u4e8c\u5c64\u96b1\u85cf\u5c64\u7684\u6e96\u78ba\u7387\uff0c\u7b2c\u4e00\u5c64\u96b1\u85cf\u6578\u5df2\u7d93\u6c7a\u5b9a\u597d\u4e86\uff0c\u6211\u5011\u6e2c\u8a66\u7684\u7b2c\u4e8c\u5c64 \u96b1\u85cf\u5c64\u7bc0\u9ede\u6578\uff0c\u5982\u8868 5\u3002\u5f9e\u8868\u4e2d\u5f97\u77e5\uff0c\u7b2c\u4e8c\u5c64\u96b1\u85cf\u5c64\u7684\u7bc0\u9ede\u6578\u70ba 200 \u7684\u6642\u5019\uff0c\u6e96\u78ba\u7387\u70ba \u6700\u9ad8\uff0c\u4f7f\u7528\u7684\u758a\u4ee3\u6b21\u6578\u70ba 15 \u6b21\uff0c\u03c1 0.1\uff0c\u03bb 0.0001\uff0c\u03b2 1\u3002 \u9673\u67cf\u8ed2\u8207\u674e\u7948\u5747 \u8868 4. 1st hidden unit \u5206\u6790\u4e08\u592b\u548c\u592a\u592a\u5c0d\u61c9\u5230 6 \u7a2e\u884c\u70ba\u7684\u6e96\u78ba\u7387\uff0c\u7c97\u9ad4\u5b57\u70ba\u8f03\u9ad8\u7684 \u6e96\u78ba", "num": null, "type_str": "table", "html": null, "content": "
114\u900f\u904e\u8a9e\u97f3\u7279\u5fb5\u5efa\u69cb\u57fa\u65bc\u5806\u758a\u7a00\u758f\u81ea\u7de8\u78bc\u5668\u6f14\u7b97\u6cd5\u4e4b \u900f\u904e\u8a9e\u97f3\u7279\u5fb5\u5efa\u69cb\u57fa\u65bc\u5806\u758a\u7a00\u758f\u81ea\u7de8\u78bc\u5668\u6f14\u7b97\u6cd5\u4e4b113 \u9673\u67cf\u8ed2\u8207\u674e\u7948\u5747 115
\u5a5a\u59fb\u6cbb\u7642\u4e2d\u592b\u59bb\u4e92\u52d5\u884c\u70ba\u91cf\u8868\u81ea\u52d5\u5316\u8a55\u5206\u7cfb\u7d71 \u5a5a\u59fb\u6cbb\u7642\u4e2d\u592b\u59bb\u4e92\u52d5\u884c\u70ba\u91cf\u8868\u81ea\u52d5\u5316\u8a55\u5206\u7cfb\u7d71
\u4e00\u6bb5\u8a9e\u97f3\u7d93\u904e\u9810\u8655\u7406\uff0c\u964d\u4f4e\u96dc\u8a0a\u5f71\u97ff\uff0c\u624d\u4e0d\u6703\u5f71\u97ff\u4e4b\u5f8c\u7684\u7279\u5fb5\u64f7\u53d6\uff0c\u800c\u9019\u90e8\u5206\u9810\u8655\u7406\u5728 \u8868 3. 28 \u7a2e\u7279\u5fb5\u503c\u548c 7 \u7a2e functionals
\u4e0a\u7bc7\u8ad6\u6587\u5df2\u7d93\u88ab\u8655\u7406\u904e\u4e86 (Black et al., 2013)\u3002\u672c\u7bc7\u8ad6\u6587\u6539\u8b8a\u7279\u5fb5\u64f7\u53d6\u65b9\u6cd5\uff0c\u9019\u90e8\u5206\u4e0b\u4e00 LLDs Functionals
(back-propagation)\u6f14\u7b97\u6cd5\u548c L-BFGS \u512a\u5316\u7b97\u6cd5 (Andrew & Gao, 2007)\uff0c\u91cd\u8907\u758a\u4ee3\u6e1b\u5c0f , \u503c\uff0c\u6700\u5f8c\u5f97\u5230 \u548c \u3002 \u800c\u70ba\u4e86\u8b93\u8f38\u5165\u7279\u5fb5\u503c\u66f4\u6709\u6548\u7684\u6b78\u985e\u7fa4\u96c6\u4e26\u4e14\u4e0d\u540c\u7279\u5fb5\u4e4b\u9593\u7684\u5340\u9694\u660e\u986f\uff0c , \u52a0 \u5165\u7a00\u758f\u9805(sparsity term)\u5982\u5f0f(4)\uff0c\u53d6\u540d\u70ba\u7a00\u758f\u7de8\u78bc\u5668(sparse autoencoder) (Obst, 2014)\u3002 , , || (4) \u5176 \u4e2d log 1 log \uff0c \u70ba \u7a00 \u758f \u53c3 \u6578 (sparsity parameter) \uff0c \u2211 \uff0c \u70ba\u63a7\u5236\u7a00\u758f\u9805(sparsity term)\u7684\u53c3\u6578\uff0cq \u70ba\u96b1\u85cf\u5c64\u7684\u7bc0\u9ede\u6578\u3002 3.2 \u5806\u758a\u7a00\u758f\u81ea\u7de8\u78bc\u5668(Stacked Sparse Autoencoder) \u7531\u591a\u500b\u7a00\u758f\u81ea\u7de8\u78bc\u5668\u9010\u5c64\u8a13\u7df4\u5f8c\uff0c\u5806\u758a\u7d44\u6210\u7684\u67b6\u69cb\u70ba\u5806\u758a\u7a00\u758f\u81ea\u7de8\u78bc\u5668(Stacked Sparse Autoencoder)\uff0c\u5982\u5716 2\uff0c\u6bcf\u4e00\u5c64\u7684\u7de8\u78bc\u5f8c\u8f38\u51fa\u70ba\u4e0b\u4e00\u5c64\u7684\u8f38\u5165\u3002\u5f9e\u5716 2 \u53ef\u770b\u51fa\uff0c\u8f38\u5165\u5c64(Input layer)\u7d93\u7531\u7b2c\u4e00\u500b\u7a00\u758f\u81ea\u7de8\u78bc\u5668\u8a13\u7df4\u5b8c\u4e4b\u5f8c\u5f97\u5230\u7b2c\u4e00\u96b1\u85cf\u5c64(Hidden layer1)\u7684 n \u500b\u7bc0\u9ede\uff0c \u7531\u9019 n \u500b\u7bc0\u9ede\u5728\u7d93\u904e\u7b2c\u4e8c\u500b\u7a00\u758f\u81ea\u7de8\u78bc\u5668\u8a13\u7df4\u5f97\u5230\u7b2c\u4e8c\u96b1\u85cf\u5c64(Hidden layer2)\u7684 p \u500b\u7bc0\u9ede\uff0c \u6bcf\u5c64\u7684\u96b1\u85cf\u5c64\u7bc0\u9ede\u53ef\u8996\u70ba\u7531\u4e0a\u4e00\u5c64\u7522\u751f\u65b0\u7684\u4e00\u7d44\u7279\u5fb5\uff0c\u900f\u904e\u9019\u6a23\u9010\u5c64\u8a13\u7df4\u53ef\u4ee5\u8a13\u7df4\u66f4\u591a \u5c64\u3002 \u6211\u5011\u5be6\u9a57\u63a1\u7528\u5806\u758a\u7a00\u758f\u81ea\u7de8\u78bc\u5668(Stacked Sparse Autoencoder, SSAE)\uff0c\u5e0c\u671b\u900f\u904e SSAE \u5f97\u5230\u597d\u7684\u7279\u5fb5\u8868\u793a\u65b9\u5f0f\uff0c\u6700\u5f8c\u7d93\u7531\u5206\u985e\u5668\u7522\u751f\u66f4\u597d\u7684\u6e96\u78ba\u7387\u3002 \u7ae0\u6703\u4ecb\u7d39\u3002\u5982\u5716 3\uff0c\u6b63\u898f\u5316\u5f8c\u7684\u7279\u5fb5\u503c\uff0c\u4e00\u7a2e\u884c\u70ba\u5305\u542b 372 \u7b46 10 \u5206\u9418\u6703\u8a71(session)\uff0c\u5206 \u70ba\u6709\u6a19\u7c64\u6578\u64da(labeled data)\u548c\u6c92\u6709\u6a19\u7c64\u6578\u64da(unlabeled data)\uff0c\u6c92\u6709\u6a19\u7c64\u6578\u64da\u5229\u7528\u7a00\u758f\u81ea\u7de8 \u78bc\u5668\u4f86\u8a13\u7df4\u7db2\u7d61\u53c3\u6578\uff0c\u8a13\u7df4\u597d\u5f8c\u518d\u628a 140 \u7b46\u6709\u6a19\u7c64\u6578\u64da\u5206\u70ba\u8a13\u7df4\u8cc7\u6599\u548c\u6e2c\u8a66\u8cc7\u6599\uff0c\u8f38\u5165 \u81ea\u8a13\u7df4\u597d\u7684\u7db2\u7d61\u53c3\u6578\uff0c\u7522\u751f\u65b0\u7684\u4e00\u7d44\u7279\u5fb5\u3002\u65b0\u7684\u4e00\u7d44\u7279\u5fb5\u70ba\u4e0b\u4e00\u5c64\u8f38\u5165\u503c\uff0c\u91cd\u8907\u5229\u7528\u5716 3 \u67b6\u69cb\u53ef\u4ee5\u7522\u751f\u66f4\u591a\u5c64\u3002\u6211\u5011\u5e0c\u671b\u65b0\u7684\u7279\u5fb5\u503c\u5c0d\u65bc\u884c\u70ba\u5206\u6578\u5c07\u6709\u66f4\u597d\u7684\u8868\u793a\uff0c\u4e0b\u9762\u7ae0\u7bc0 \u6703\u8b49\u660e\u4e4b\u3002 4. 
\u5be6\u9a57\u8a2d\u8a08\u548c\u7d50\u679c 4.1 \u7279\u5fb5\u503c \u5982\u5716 4\uff0c\u5229\u7528\u539f\u672c LLDs\uff0c\u5728\u4e09\u7a2e\u5c0d\u8a71\u5340\u9593\u88e1(speaker domain)\uff0c\u4e08\u592b\u6642\u9593\u5340\u9593(husband\u3001 H)\u3001\u592a\u592a\u6642\u9593\u5340\u9593(wife\u3001W)\u548c\u4e0d\u5206\u4eba\u6642\u9593\u5340\u9593(full\u3001F)\u6240\u8aaa\u7684\u53e5\u5b50\uff0c\u5207\u5272\u6210\u4ee5 20%\u53e5\u5b50 1. MFCC[0-14] 2. MFB[0-7] 3. F0normlog 4. VAD(speech/no speech) 5. Intensity 6. Jitter 7. Jitter of Jitter 8. Shimmer 1. Mean 2. Median 3. Standard deviation 4. Skewness 5. Kurtosis 6. Max position 7. Min position unit Rated Spouse Acc (%) Bla (%) Pos (%) Neg (%) Sad (%) Hum (%) Avg (%) 100 Husband 67.9 76.4 65.7 78.6 52.9 61.4 67.2 Wife 70 73.6 65 74.3 58.6 59.3 66.8 200 Husband 72.9 76.4 71.4 82.1 57.1 67.1 71.2 Wife 71.4 82.9 65.7 77.1 64.9 57.9 70 300 Husband 77.1 77.9 72.1 82.9 58.6 67.1 72.6 Wife 75.7 82.1 71.4 78.6 58.6 63.6 71.7 500 Husband 70 78.6 68.6 82.9 55 62.1 69.5 4.2 1 st hidden Wife 74.3 82.1 69.3 80.7 58.6 62.9 71.3 \u5716 2. \u5806\u758a\u7a00\u758f\u81ea\u7de8\u78bc\u5668 1000 Husband 75 77.9 69.3 84.3 58.6 65.7 71.8 \u70ba\u4e00\u500b\u6642\u9593\u5340\u9593\uff0c\u5207\u5272\u5b8c\u5f8c\u5408\u6210\u4e00\u500b\u884c\u5411\u91cf\uff0c\u884c\u5411\u91cf\u7684\u7279\u5fb5\u503c\uff0c\u518d\u7d93\u7531\u5982\u8868 3 \u5716 4. \u5be6\u9a57\u7279\u5fb5\u63d0\u53d6\u67b6\u69cb Wife 72.1 79.3 69.3 80 53.6 62.9 69.5 Previous method(Black et al., 2013) Husband 78.6 72.9 72.1 84.3 60 71.4 73.2 Wife 77.9 84.3 74.3 80 66.4 67.1 75 \u8868 5. 2nd hidden unit \u5206\u6790\u4e08\u592b\u548c\u592a\u592a\u5c0d\u61c9\u5230 6 \u7a2e code \u7684\u6e96\u78ba\u7387\uff0c\u7c97\u9ad4\u5b57\u70ba\u8f03\u9ad8\u7684 \u6e96\u78ba\u7387 1 st Hidden Layer 2 nd Hidden Layer Rated Spouse Acc (%) Bla (%) Pos (%) Neg (%) Sad (%) Hum (%) Avg (%) 300 100 Husband 75 78.6 68.6 83.6 57.9 67.9 71.9 Wife 71.4 80.7 72.9 77.1 58.6 62.9 70.6 200 Husband 77.1 77.1 71.4 83.6 57.9 69.3 72.7 Wife 72.1 82.1 72 77.1 62.1 65.7 71.9 300 Husband 73.6 76.4 72.1 84.3 58.6 67.1 72 Wife 72.9 80.7 71.4 76.4 55 70 71.3 Previous method Husband 78.6 72.9 72.1 84.3 60 71.4 73.2 3.3 \u5716 3. \u5be6\u9a57\u67b6\u69cb (Black et al., 2013) Wife 77.9 84.3 74.3 80 66.4 67.1
" }, "TABREF34": { "text": "Money Order or Check payable to \"The Association for Computation Linguistics and Chinese Language Processing \" or \"\u4e2d\u83ef\u6c11\u570b\u8a08\u7b97\u8a9e\u8a00\u5b78\u5b78\u6703\" \u2027 E-mail\uff1aaclclp@hp.iis.sinica.edu.tw", "num": null, "type_str": "table", "html": null, "content": "
Publications of the Association for \u4e2d\u83ef\u6c11\u570b\u8a08\u7b97\u8a9e\u8a00\u5b78\u5b78\u6703 Computational Linguistics and Chinese Language Processing \u76f8\u95dc\u51fa\u7248\u54c1\u50f9\u683c\u8868\u53ca\u8a02\u8cfc\u55ae
\u7de8\u865f\u66f8\u76ee\u6703 \u54e1\u975e\u6703\u54e1\u518a\u6578\u91d1\u984d
1.no.92-01, no. 92-04 (\u5408\u8a02\u672c) ICG \u4e2d\u7684\u8ad6\u65e8\u89d2\u8272 \u8207 A conceptual Structure for Parsing Mandarin--itsAIRAIR
Frame and General Applications--Surface NT$ 80(US&EURP) NT$(ASIA) _____VOLUME _____AMOUNT
1. 2.2. no.92-02, no. 92-03 (\u5408\u8a02\u672c) no.92-01, no. 92-04(\u5408\u8a02\u672c) ICG \u4e2d\u7684\u8ad6\u65e8\u89d2\u8272\u8207 A Conceptual V-N \u8907\u5408\u540d\u8a5e\u8a0e\u8ad6\u7bc7 \u8207V-R \u8907\u5408\u52d5\u8a5e\u8a0e\u8ad6\u7bc7 Structure for Parsing Mandarin --Its Frame and General Applications--3. no.93-01 \u65b0\u805e\u8a9e\u6599\u5eab\u5b57\u983b\u7d71\u8a08\u8868 no.92-02 V-N \u8907\u5408\u540d\u8a5e\u8a0e\u8ad6\u7bc7 & 92-03 V-R \u8907\u5408\u52d5\u8a5e\u8a0e\u8ad6\u7bc7 4. no.93-02 \u65b0\u805e\u8a9e\u6599\u5eab\u8a5e\u983b\u7d71\u8a08\u8868US$ 9 12120 120 360US$ 19 21_____ US$15 _____ 17 __________ _____ _____ _____ __________ _____
3. no.93-01 \u65b0\u805e\u8a9e\u6599\u5eab\u5b57\u983b\u7d71\u8a08\u8868 5. no.93-03 \u65b0\u805e\u5e38\u7528\u52d5\u8a5e\u8a5e\u983b\u8207\u5206\u985e 4. no.93-02 \u65b0\u805e\u8a9e\u6599\u5eab\u8a5e\u983b\u7d71\u8a08\u8868 6. no.93-05 \u4e2d\u6587\u8a5e\u985e\u5206\u67908 18180 18513 3011 _____ 24 __________ _____ _____ __________ _____
5. no.93-03 \u65b0\u805e\u5e38\u7528\u52d5\u8a5e\u8a5e\u983b\u8207\u5206\u985e 7. no.93-06 \u73fe\u4ee3\u6f22\u8a9e\u4e2d\u7684\u6cd5\u76f8\u8a5e10401513 __________ __________
6. no.93-05 \u4e2d\u6587\u8a5e\u985e\u5206\u6790 8. no.94-01 \u4e2d\u6587\u66f8\u9762\u8a9e\u983b\u7387\u8a5e\u5178(\u65b0\u805e\u8a9e\u6599\u8a5e\u983b\u7d71\u8a08)103801513 __________ __________
7. no.93-06 \u73fe\u4ee3\u6f22\u8a9e\u4e2d\u7684\u6cd5\u76f8\u8a5e 8. no.94-01 \u4e2d\u6587\u66f8\u9762\u8a9e\u983b\u7387\u8a5e\u5178(\u65b0\u805e\u8a9e\u6599\u8a5e\u983b\u7d71\u8a08) 9. no.94-02 \u53e4\u6f22\u8a9e\u5b57\u983b\u8868 9. no.94-02 \u53e4\u6f22\u8a9e\u5b57\u983b\u8868 10. no.95-01 \u6ce8\u97f3\u6aa2\u7d22\u73fe\u4ee3\u6f22\u8a9e\u5b57\u983b\u88685 18 11180 7510 30 168 24 _____ 14 __________ _____ _____ _____ __________ _____ _____
10. no.95-01 \u6ce8\u97f3\u6aa2\u7d22\u73fe\u4ee3\u6f22\u8a9e\u5b57\u983b\u8868 11. no.95-02/98-04 \u4e2d\u592e\u7814\u7a76\u9662\u5e73\u8861\u8a9e\u6599\u5eab\u7684\u5167\u5bb9\u8207\u8aaa\u660e8751310 __________ __________
11. no.95-02/98-04 \u4e2d\u592e\u7814\u7a76\u9662\u5e73\u8861\u8a9e\u6599\u5eab\u7684\u5167\u5bb9\u8207\u8aaa\u660e 12. no.95-03 \u8a0a\u606f\u70ba\u672c\u7684\u683c\u4f4d\u8a9e\u6cd5\u8207\u5176\u5256\u6790\u65b9\u6cd5 12. no.95-03 \u8a0a\u606f\u70ba\u672c\u7684\u683c\u4f4d\u8a9e\u6cd5\u8207\u5176\u5256\u6790\u65b9\u6cd5 13. no.96-01 \u300c\u641c\u300d\u6587\u89e3\u5b57-\u4e2d\u6587\u8a5e\u754c\u7814\u7a76\u8207\u8cc7\u8a0a\u7528\u5206\u8a5e\u6a19\u6e963 375 1108 86 _____ 6 __________ _____ _____ __________ _____
13. no.96-01 \u300c\u641c\u300d\u6587\u89e3\u5b57-\u4e2d\u6587\u8a5e\u754c\u7814\u7a76\u8207\u8cc7\u8a0a\u7528\u5206\u8a5e\u6a19\u6e96 14. no.97-01 \u53e4\u6f22\u8a9e\u8a5e\u983b\u8868 (\u7532)84001311 __________ __________
2015 Index
Chinese Written Corpus
Automatically Detecting Syntactic Errors in Sentences Written by Learners of Chinese as a Foreign Language; Chang, T.-H., 20(1): 49-64
Classifier
Automatic Classification of the "De" Word Usage for Chinese as a Foreign Language; Yeh, J.-F., 20(1): 65-78
Concept Information
Investigating Modulation Spectrum Factorization Techniques for Robust Speech Recognition; Chang, T.-H., 20(2): 87-106
Confusion Set Expansion
A Study on Chinese Spelling Check Using Confusion Sets and N-gram Statistics; Lin, C.-J., 20(1): 23-48
Couple Therapy
Automating Behavior Coding for Distressed Couples Interactions Based on Stacked Sparse Autoencoder Framework using Speech-acoustic Features; Chen, P.-H., 20(2): 107-120
Decision-making
HANSpeller: A Unified Framework for Chinese Spelling Correction; Xiong, J., 20(1): 1-22
" } } } }