{ "paper_id": "O15-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:10:16.316206Z" }, "title": "", "authors": [], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "O15-1007", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "Machine Reading (MR) aims to make the knowledge contained in the text available in forms that machines can use them for automated processing. That is, machines will learn to read from a few examples and they will read to learn what they need in order to answer questions or perform some reasoning task [1] . Since a domain-independent MR system is difficult to build, the Math Word Problem (MWP) [2] is frequently chosen as the first test case to study MR. The main reason for that is that MWP not only has less complicated syntax but also requires less amount of domain knowledge.", "cite_spans": [ { "start": 302, "end": 305, "text": "[1]", "ref_id": null }, { "start": 396, "end": 399, "text": "[2]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The architecture of our proposed approach [3] is shown in Figure 1 . First, every sentence in the MWP, including both body text and the question text, is analyzed by the Language Analysis module, which transforms each sentence into its corresponding semantic representation tree. The sequence of semantic representation trees is then sent to the Problem Resolution module, which adopts logic inference approach, to obtain the answer of each question in the MWP. Finally, the Explanation Generation (EG) module will explain how the answer is found (in natural language text) according to the given reasoning chain [4] (which includes all related logic statements and inference steps to reach the answer). and is a task of Natural Language Generation (NLG).", "cite_spans": [ { "start": 42, "end": 45, "text": "[3]", "ref_id": "BIBREF3" }, { "start": 613, "end": 616, "text": "[4]", "ref_id": null } ], "ref_spans": [ { "start": 58, "end": 66, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Various applications of NLG (such as weather report) have been proposed before [5] [6] [7] [8] [9] [10] [11] . However, to the best of our knowledge, none of them discusses how to generate the explanation for WMP, which possesses some special characteristics (e.g., math operation 2 oriented description) that are not shared with other tasks. This paper therefore proposes a math operation oriented approach to explain how the answer is obtained in solving math word problems.", "cite_spans": [ { "start": 79, "end": 82, "text": "[5]", "ref_id": "BIBREF5" }, { "start": 83, "end": 86, "text": "[6]", "ref_id": "BIBREF6" }, { "start": 87, "end": 90, "text": "[7]", "ref_id": "BIBREF7" }, { "start": 91, "end": 94, "text": "[8]", "ref_id": "BIBREF8" }, { "start": 95, "end": 98, "text": "[9]", "ref_id": "BIBREF9" }, { "start": 99, "end": 103, "text": "[10]", "ref_id": "BIBREF10" }, { "start": 104, "end": 108, "text": "[11]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Based on the reasoning chain given by the IE [3] , we first search each math operator involved.", "cite_spans": [ { "start": 45, "end": 48, "text": "[3]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Methods", "sec_num": null }, { "text": "For each math operator, we generate one sentence. 
{ "text": "Various applications of NLG (such as weather reporting) have been proposed before [5] [6] [7] [8] [9] [10] [11]. However, to the best of our knowledge, none of them discusses how to generate the explanation for the MWP, which possesses some special characteristics (e.g., its math-operation-oriented description) that are not shared with other tasks. This paper therefore proposes a math-operation-oriented approach to explain how the answer is obtained in solving math word problems.", "cite_spans": [ { "start": 79, "end": 82, "text": "[5]", "ref_id": "BIBREF5" }, { "start": 83, "end": 86, "text": "[6]", "ref_id": "BIBREF6" }, { "start": 87, "end": 90, "text": "[7]", "ref_id": "BIBREF7" }, { "start": 91, "end": 94, "text": "[8]", "ref_id": "BIBREF8" }, { "start": 95, "end": 98, "text": "[9]", "ref_id": "BIBREF9" }, { "start": 99, "end": 103, "text": "[10]", "ref_id": "BIBREF10" }, { "start": 104, "end": 108, "text": "[11]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Based on the reasoning chain given by the IE [3], we first locate each math operator involved.", "cite_spans": [ { "start": 45, "end": 48, "text": "[3]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Methods", "sec_num": null }, { "text": "For each math operator, we generate one sentence. Since explaining a math operation does not require complicated syntax, we adopt a specific template to generate the text for each kind of math operator. To the best of our knowledge, this is the first explanation generation approach specifically tailored to the math word problem. Figure 3(c) shows how surface realization is done with non-slot fillers (circled by ellipses) and slot-fillers (the diamond shape is for operators, and the rectangle one is for quantities). Also, as shown in Figure 3(b), the (#a, #b) pair denotes facts derived from the body sentences. The OP denotes the operator used to deduce implicit facts and is represented as a non-leaf circle node. Each \"G?\" marks a sentence to be generated. Given the reasoning chain, the first step is to decide how many sentences will be generated, which corresponds to the Discourse Planning phase [12] of the traditional NLG task. Currently, we generate one sentence for each operator shown in the reasoning chain. For the above example, since there are four operators (three IE-Multiplication and one LFC-Sum in Figure 4), we will have four corresponding sentences; the associated nodes (i.e., their content) are circled by \"G?\" for each sentence in the figure.", "cite_spans": [ { "start": 892, "end": 896, "text": "[12]", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 524, "end": 535, "text": "Figure 3(b)", "ref_id": "FIGREF4" }, { "start": 1115, "end": 1123, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Proposed Methods", "sec_num": null }, { "text": "Furthermore, Figure 4 shows that three sets of facts originate from the 2nd body sentence (indicated by the three S2 nodes). Each set contains a corresponding quantity-fact (e.g., q1(\u758a), q2(\u5143), and q3(\u5f35)) and its associated object (e.g., n1, n2, and n3). For example, the first set (the leftmost one) contains q1(\u758a) (for \"2 \u758a\") and n1 (for \"\u4e00\u842c\u5143\u9214\u7968\"). This figure also shows that the outputs of the three IE-Multiplication operators (i.e., \"20,000 \u5143\", \"6,000 \u5143\", and \"1,300 \u5143\") will be fed into the last LFC-Sum to get the final desired result, \"27,300 \u5143\" (denoted by the \"Ans(SUM)\" node in the figure). The EG module of our MWP solver is thus able to explain how the answer is derived in a human-comprehensible way, where the related reasoning steps are systematically reconstructed from the given reasoning chain according to the specified templates.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 21, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Proposed Methods", "sec_num": null }, { "text": "The main contributions of this paper are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Methods", "sec_num": null }, { "text": "1. The Explanation Tree is introduced to facilitate discourse planning for the MWP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Methods", "sec_num": null }, { "text": "2. An operator-oriented algorithm is proposed to segment the Explanation Tree into sentences, which makes our Discourse Planner applicable to math word problems regardless of the language adopted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Methods", "sec_num": null }, { "text": "3. We propose using operator-based templates to generate the natural language text that explains the associated math operation (a minimal sketch is given below). Admittedly, using multiple templates per operator can be further explored after examining more cases; in that case, a statistical model would be required to select the most appropriate template for each given operation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Methods", "sec_num": null },
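As a concrete illustration of contribution 3, the sketch below fills one hypothetical template per operator with slot fillers taken from the Explanation Tree. The template wording and slot names are our own assumptions for illustration; the paper's actual templates generate Chinese explanation text for the running example (Sample-1 in the figure captions below).

```python
# One-template-per-operator surface realization (illustrative assumption).
TEMPLATES = {
    "IE-Multiplication": "{count} {unit} of {obj} are worth {count} x {value} = {result} dollars.",
    "LFC-Sum": "Adding {parts} together, the total is {result} dollars.",
}

def realize(operator: str, slots: dict) -> str:
    """Fill the template associated with the given operator with its slot fillers."""
    return TEMPLATES[operator].format(**slots)

# One sentence per operator, following Sample-1 (see Figure 3):
print(realize("IE-Multiplication",
              {"count": 2, "unit": "piles", "obj": "ten-thousand-dollar bills",
               "value": 10000, "result": 20000}))
print(realize("LFC-Sum", {"parts": "20,000, 6,000 and 1,300", "result": "27,300"}))
```

Because the templates are keyed by operator rather than by surface wording, swapping in templates for another language leaves the rest of the generator unchanged.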
{ "text": "Prefixes \"IE-\" and \"LFC-\" denote operators issued by the IE and the LFC, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "The DARPA Machine Reading Program -Encouraging Linguistic and Reasoning Research with a Series of Reading Tasks", "authors": [ { "first": "J", "middle": [], "last": "Schrag", "suffix": "" }, { "first": "", "middle": [], "last": "Wright", "suffix": "" } ], "year": 2010, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schrag, J. Wright, The DARPA Machine Reading Program -Encouraging Linguistic and Reasoning Research with a Series of Reading Tasks, LREC, (2010).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A review of methods for automatic understanding of natural language mathematical problems", "authors": [ { "first": "A", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "U", "middle": [], "last": "Garain", "suffix": "" } ], "year": 2008, "venue": "Artif Intell Rev", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Mukherjee, U. Garain, A review of methods for automatic understanding of natural language mathematical problems, Artif Intell Rev, (2008).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Designing a Tag-Based Statistical Math Word Problem Solver with Reasoning and Explanation", "authors": [ { "first": "Y", "middle": [ "C" ], "last": "Lin", "suffix": "" }, { "first": "C", "middle": [ "C" ], "last": "Liang", "suffix": "" }, { "first": "K", "middle": [ "Y" ], "last": "Hsu", "suffix": "" }, { "first": "C", "middle": [ "T" ], "last": "Huang", "suffix": "" }, { "first": "S", "middle": [ "Y" ], "last": "Miao", "suffix": "" }, { "first": "W", "middle": [ "Y" ], "last": "Ma", "suffix": "" }, { "first": "L", "middle": [ "W" ], "last": "Ku", "suffix": "" }, { "first": "C", "middle": [ "J" ], "last": "Liau", "suffix": "" }, { "first": "K", "middle": [ "Y" ], "last": "Su", "suffix": "" } ], "year": 2016, "venue": "International Journal of Computational Linguistics and Chinese Language Processing (IJCLCLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y.C. Lin, C.C. Liang, K.Y. Hsu, C.T. Huang, S.Y. Miao, W.Y. Ma, L.W. Ku, C.J. Liau, K.Y. Su, Designing a Tag-Based Statistical Math Word Problem Solver with Reasoning and Explanation, to be published in International Journal of Computational Linguistics and Chinese Language Processing (IJCLCLP), (2016).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An Introduction to Functional Grammar", "authors": [ { "first": "M", "middle": [ "A K" ], "last": "Halliday", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M.A.K. 
Halliday, An Introduction to Functional Grammar, Edward Arnold, London, (1985).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Using natural-language processing to produce weather forecasts", "authors": [ { "first": "E", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "N", "middle": [], "last": "Driedger", "suffix": "" }, { "first": "R", "middle": [], "last": "Kittredge", "suffix": "" } ], "year": 1994, "venue": "IEEE Expert", "volume": "9", "issue": "", "pages": "45--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Goldberg, N. Driedger, R. Kittredge, Using natural-language processing to produce weather forecasts, IEEE Expert, 9 (1994) 45-53.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Drafter: An interactive support tool for writing multilingual instructions", "authors": [ { "first": "C", "middle": [], "last": "Paris", "suffix": "" }, { "first": "K", "middle": [], "last": "Vander Linden", "suffix": "" } ], "year": 1996, "venue": "IEEE Computer", "volume": "29", "issue": "", "pages": "49--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Paris, K. Vander Linden, Drafter: An interactive support tool for writing multilingual instructions, IEEE Computer, 29 (1996) 49-56.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Content selection in comparison generation", "authors": [ { "first": "M", "middle": [], "last": "Milosavljevic", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 6th European Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "72--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Milosavljevic, Content selection in comparison generation, Proceedings of the 6th European Workshop on Natural Language Generation, Duisburg, Germany, (1997 March) 72-81.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automatic document creation from software specifications", "authors": [ { "first": "C", "middle": [], "last": "Paris", "suffix": "" }, { "first": "K", "middle": [], "last": "Vander Linden", "suffix": "" }, { "first": "S", "middle": [], "last": "Lu", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 3rd Australian Document Computing Symposium (ADCS-98)", "volume": "", "issue": "", "pages": "26--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Paris, K. Vander Linden, S. Lu, Automatic document creation from software specifications, Proceedings of the 3rd Australian Document Computing Symposium (ADCS-98), (1998) 26-31.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Interactive generation and knowledge administration in MultiM\u00e9t\u00e9o", "authors": [ { "first": "J", "middle": [], "last": "Coch", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Ninth International Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. 
Coch, Interactive generation and knowledge administration in MultiM\u00e9t\u00e9o, Proceedings of the Ninth International Workshop on Natural Language Generation, (1998 Aug).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Types of knowledge required to personalise smoking cessation letters", "authors": [ { "first": "E", "middle": [], "last": "Reiter", "suffix": "" }, { "first": "R", "middle": [], "last": "Robertson", "suffix": "" }, { "first": "L", "middle": [], "last": "Osman", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Joint European Conference on Artificial Intelligence in Medicine and Medical Decision Making", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Reiter, R. Robertson, L. Osman, Types of knowledge required to personalise smoking cessation letters, Proceedings of the Joint European Conference on Artificial Intelligence in Medicine and Medical Decision Making, Springer-Verlag, (1999).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Speech and Language Processing", "authors": [ { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Martin", "suffix": "" } ], "year": 2000, "venue": "", "volume": "20", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Jurafsky, J.H. Martin, Speech and Language Processing, Chapter 20, Prentice Hall, Englewood Cliffs, New Jersey (2000).", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The block diagram of the proposed Math Word Problem Solver.", "num": null, "type_str": "figure", "uris": null }, "FIGREF1": { "text": "Figure 2 shows the block diagram of our proposed EG. First, the IE generates the answer and its associated reasoning chain for the given math problem. To ease the operation of the EG, we first convert the given reasoning chain into its corresponding Explanation Tree (shown in Figure 4) so that it centers around each operator used in solving the MWP (which makes the later sentence segmentation convenient). Afterwards, the Explanation Tree is fed into the Discourse Planner. The last stage is the Function Word Insertion & Ordering Module, which inserts the necessary function words into the segmented sentences (produced by the Discourse Planner) and generates the explanation texts according to the selected template (based on the operator encountered).", "num": null, "type_str": "figure", "uris": null }, "FIGREF2": { "text": "Block Diagram of the proposed MWP Explanation Generator. The following example demonstrates how the framework works, and Figure 3(a) reveals more details for each part illustrated in Figure 2. [Sample-1] \u963f\u5fd7\u8cb7\u4e00\u81fa\u51b0\u7bb1\u548c\u4e00\u81fa\u96fb\u8996\u6a5f\uff0c\u4ed8 2 \u758a\u4e00\u842c\u5143\u9214\u7968\u30016 \u5f35\u5343\u5143\u9214\u7968\u548c 13 \u5f35\u767e\u5143\u9214\u7968\uff0c\u963f\u5fd7\u5171\u4ed8\u4e86\u5e7e\u5143\uff1f (A-Zhi bought a refrigerator and a TV, paying 2 piles of ten-thousand-dollar bills, 6 thousand-dollar bills, and 13 hundred-dollar bills. How many dollars did A-Zhi pay in total?) Facts Generation in Figure 3(a) shows how the body text is transformed into meaningful logic facts to perform inference. In math problems, the facts are mostly related to quantities. The generated facts are either the quantities explicitly appearing in the sentence text or the implicit quantities deduced by the IE. Those generated facts are linked together within the reasoning chain constructed by the IE, as shown in Figure 3(b). Within this framework, the discourse planner is responsible for selecting the associated content for each sentence to be generated. Figure 3(c) shows how the contents in the Explanation Tree are used as fillers to fill the template slots for generating the explanation sentences. A typical reasoning chain, represented with an Explanation Tree structure, is shown in Figure 4.", "num": null, "type_str": "figure", "uris": null },
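To make the Explanation Tree and the operator-oriented segmentation concrete, here is a minimal sketch under assumed node types (the paper's actual data structures are not published); it reproduces the four sentence units of Sample-1, three IE-Multiplication and one LFC-Sum.

```python
# Minimal sketch of the Explanation Tree (Figure 4) and the operator-oriented
# segmentation; node types and field names are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Quan:
    """Quantity node (leaf), e.g. q1 = 2 (piles)."""
    value: float
    unit: str

@dataclass
class Op:
    """Operator node (e.g. IE-Multiplication) with its deduced result."""
    name: str
    args: List[Union["Op", Quan]]
    result: Quan

def plan(node: Op, out: List[Op] = None) -> List[Op]:
    """Post-order walk: emit one sentence unit (a "G?") per operator node."""
    if out is None:
        out = []
    for arg in node.args:
        if isinstance(arg, Op):
            plan(arg, out)
    out.append(node)
    return out

# Sample-1: 2 x 10000 + 6 x 1000 + 13 x 100 = 27300 (dollars)
mults = [Op("IE-Multiplication", [Quan(n, u), Quan(v, "元")], Quan(n * v, "元"))
         for n, u, v in [(2, "疊", 10000), (6, "張", 1000), (13, "張", 100)]]
root = Op("LFC-Sum", mults, Quan(27300, "元"))
print([op.name for op in plan(root)])
# -> three 'IE-Multiplication' units followed by one 'LFC-Sum': four sentences
```

Segmenting at operator nodes rather than at surface tokens is what keeps the planner language-independent, as claimed in contribution 2.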
"FIGREF3": { "text": "The operator-node (OP_node) layers and quantity-node (Quan_node) layers are interleaved within the Explanation Tree, and serve as the input to the OP Oriented Algorithm in the Discourse Planner. (a) Facts Generation (b) Reasoning Chain (c) Function Word Insertion & Ordering Module, serving as the Surface Realizer.", "num": null, "type_str": "figure", "uris": null }, "FIGREF4": { "text": "(a) Facts Generated from the Body Text. (b) The associated Reasoning Chain, where \"G#\" shows the facts grouped within the same sentence. (c) Explanation texts generated by the Function Word Insertion & Ordering Module for this example (labeled as G1~G4). Except for the ellipses, which symbolize non-slot fillers, all other shapes denote slot-fillers. Furthermore, the diamond symbolizes an OP_node while the rectangle symbolizes a Quan_node.", "num": null, "type_str": "figure", "uris": null }, "FIGREF5": { "text": "Explanation Tree for Discourse Planning, where S2 denotes the facts from the 2nd body sentence.", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "text": "As depicted in Figure 1(b), the Problem Resolution module in the proposed system consists of three components: the Solution Type Classifier (TC), the Logic Form Converter (LFC), and the Inference Engine (IE). The TC is responsible for assigning a math operation type to every question of the MWP. In order to perform logic inference, the LFC first extracts the related facts from the given semantic representation tree and then represents them in First-Order Logic (FOL) predicate/function form [4]. In addition, it is also responsible for transforming every question into an FOL-like utility function according to the assigned solution type. Finally, according to inference rules, the IE derives new facts from the old ones provided by the LFC. Additionally, it is also responsible for providing utilities to perform math operations on related facts.", "html": null, "type_str": "table", "content": "
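The TC/LFC/IE description above is concrete enough to sketch roughly how LFC-extracted facts and an IE math utility might be represented for Sample-1. The predicate names, fact layout, and utility functions below are assumptions for illustration, not the authors' actual FOL notation.

```python
# Rough sketch of LFC-style facts and IE math utilities for Sample-1.
# Predicate names and the Fact layout are illustrative assumptions.
from typing import List, NamedTuple

class Fact(NamedTuple):
    pred: str      # predicate name, e.g. "quan" for an explicit quantity fact
    args: tuple    # predicate arguments

# Explicit facts the LFC would extract from the 2nd body sentence of Sample-1
facts: List[Fact] = [
    Fact("quan", ("q1", 2, "疊", 10000)),   # 2 piles of ten-thousand-dollar bills
    Fact("quan", ("q2", 6, "張", 1000)),    # 6 thousand-dollar bills
    Fact("quan", ("q3", 13, "張", 100)),    # 13 hundred-dollar bills
]

def ie_multiply(f: Fact) -> Fact:
    """IE utility: deduce the implicit amount (count * denomination)."""
    qid, count, _unit, denom = f.args
    return Fact("amount", (qid, count * denom))

derived = [ie_multiply(f) for f in facts]   # amounts 20000, 6000, 1300
answer = sum(f.args[1] for f in derived)    # IE utility answering the Sum question
print(answer)                               # 27300 (dollars)
```

Each derived fact, together with the operator that produced it, is exactly the material the reasoning chain hands to the EG module for explanation.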