{ "paper_id": "O15-1006", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:10:02.345766Z" }, "title": "Designing a Tag-Based Statistical Math Word Problem Solver with Reasoning and Explanation", "authors": [ { "first": "Yi-Chung", "middle": [], "last": "Lin", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Chao-Chun", "middle": [], "last": "Liang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Kuang-Yi", "middle": [], "last": "Hsu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Chien-Tsung", "middle": [], "last": "Huang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Shen-Yun", "middle": [], "last": "Miao", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Wei-Yun", "middle": [], "last": "Ma", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Lun-Wei", "middle": [], "last": "Ku", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jung", "middle": [], "last": "Liau", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "", "affiliation": {}, "email": "kysu@iis.sinica.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "O15-1006", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "Extended Abstract:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Institute of Information Science 1 , Academia Sinica", "sec_num": null }, { "text": "Since Big Data mainly aims to explore the correlation between surface features but not their underlying causality relationship, the Big Mechanism 2 program has been proposed by DARPA to find out \"why\" behind the \"Big Data\". However, the pre-requisite for it is that the machine can read each document and learn its associated knowledge, which is the task of Machine Reading (MR). Since a domain-independent MR system is complicated and difficult to build, the math word problem (MWP) [1] is frequently chosen as the first test case to study MR (as it usually uses less complicated syntax and requires less amount of domain knowledge).", "cite_spans": [ { "start": 484, "end": 487, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": null }, { "text": "According to the framework for making the decision while there are several candidates, previous MWP algebra solvers can be classified into: (1) Rule-based approaches with logic inference [2] [3] [4] [5] [6] [7] , which apply rules to get the answer (via identifying entities, quantities, operations, etc.) with a logic inference engine. (2) Rule-based approaches without logic inference [8] [9] [10] [11] [12] [13] , which apply rules to get the answer without a logic inference engine. 3Statistics-based approaches [14, 15] , which use statistical models to identify entities, quantities, operations, and get the answer. 
To our knowledge, all the statistics-based approaches do not adopt logic inference.", "cite_spans": [ { "start": 187, "end": 190, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 191, "end": 194, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 195, "end": 198, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 199, "end": 202, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 203, "end": 206, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 207, "end": 210, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 387, "end": 390, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 391, "end": 394, "text": "[9]", "ref_id": null }, { "start": 395, "end": 399, "text": "[10]", "ref_id": null }, { "start": 400, "end": 404, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 405, "end": 409, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 410, "end": 414, "text": "[13]", "ref_id": "BIBREF12" }, { "start": 516, "end": 520, "text": "[14,", "ref_id": "BIBREF13" }, { "start": 521, "end": 524, "text": "15]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": null }, { "text": "The main problem of the rule-based approaches mentioned above is that the coverage rate problem is serious, as rules with wide coverage are difficult and expensive to construct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": null }, { "text": "Also, since they adopt Go/No-Go approach (unlike statistical approaches which can adopt a large Top-N to have high including rates), the error accumulation problem would be severe.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": null }, { "text": "On the other hand, the main problem of those approaches without adopting logic inference is that they usually need to implement a new handling procedure for each new type of problems (as the general logic inference mechanism is not adopted). Also, as there is no inference engine to generate the reasoning chain [16] , additional effort would be required for 1 To avoid the problems mentioned above, a tag-based statistical framework which is able to perform understanding and reasoning with logic inference is proposed in this paper. It analyzes the body and question texts into their associated tag-based 3 logic forms, and then performs inference on them. Comparing to those rule-based approaches, the proposed statistical approach alleviates the ambiguity resolution problem, and the tag-based approach also provides the flexibility of handling various kinds of possible questions with the same body logic form. On the other hand, comparing to those approaches not adopting logic inference, the proposed approach is more robust to the irrelevant information and could more accurately provide the answer. Furthermore, with the given reasoning chain, the explanation could be more easily generated.", "cite_spans": [ { "start": 312, "end": 316, "text": "[16]", "ref_id": null }, { "start": 359, "end": 360, "text": "1", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": null }, { "text": "The main contributions of our work are: (1) proposing a tag-based logic representation such that the system is more robust to the irrelevant information and could provide the answer more precisely; (2) proposing a unified statistical framework for performing reasoning from the given text. Based on the semantic representation given above, the TC will assign the operation type \"Sum\" to it. 
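To make the tag-based matching concrete, the following is a minimal Python sketch of how an inference engine could select quantity-facts by their tags and apply the "Sum" utility. The dictionary-based fact representation and the helper names (matches, sum_utility) are hypothetical illustrations, not the system's actual FOL-based implementation; the facts and utility call that the LFC actually extracts for this example are given right after the sketch.

```python
# Hypothetical sketch of tag-based quantity-facts and a "Sum" utility that
# selects the facts whose tags satisfy the question's conditions.
# (Illustration only; the real system uses First Order Logic predicates.)

def matches(fact, conditions):
    """A fact satisfies the query if every queried tag is present and equal."""
    return all(fact.get(tag) == value for tag, value in conditions.items())

def sum_utility(facts, conditions):
    """'Sum' operation: add up the values of all matching quantity-facts."""
    return sum(f["value"] for f in facts if matches(f, conditions))

# Facts corresponding to "A stationer bought 2361 red pens and 1587 blue pens"
facts = [
    {"quan": "q1", "unit": "枝", "value": 2361,
     "verb": "進貨", "agent": "文具店", "head": "筆", "color": "紅"},
    {"quan": "q2", "unit": "枝", "value": 1587,
     "verb": "進貨", "agent": "文具店", "head": "筆", "color": "藍"},
]

# "How many pens did the stationer buy?"  -> 2361 + 1587 = 3948
print(sum_utility(facts, {"unit": "枝", "head": "筆",
                          "verb": "進貨", "agent": "文具店"}))

# "How many red pens did the stationer buy?" -> only q1 matches -> 2361
print(sum_utility(facts, {"unit": "枝", "head": "筆", "color": "紅",
                          "verb": "進貨", "agent": "文具店"}))
```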
The LFC will then extract the following two facts from the first sentence: quan(q1,\u679d,n1p)=2361&verb(q1,\u9032\u8ca8)&agent(q1,\u6587\u5177\u5e97)&head(n1p,\u7b46)&color(n1p,\u7d05) quan(q2,\u679d,n2p)=1587&verb(q2,\u9032\u8ca8)&agent(q2,\u6587\u5177\u5e97)&head(n2p,\u7b46)&color(n2p,\u85cd)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Framework", "sec_num": null }, { "text": "The quantity-fact \"2361 \u679d\u7d05\u7b46 (2361 red pens)\" is represented by \"quan(q1,\u679d,n1p)=2361\", where the argument \"n1p\" 4 denotes \"\u7d05\u7b46 (red pens)\" due to the facts \"head(n1p,\u7b46)\" and \"color(n1p,\u7d05)\". Likewise, the quantity-fact \"1587 \u679d\u85cd\u7b46 (1587 blue pens)\" is represented by \"quan(q2,\u679d,n2p)=1587\". The LFC also issues the utility call \"ASK Sum(quan(?q,\u679d, \u7b46),verb(?q,\u9032\u8ca8)&agent(?q,\u6587\u5177\u5e97))\" (based on the assigned solution type) for the question. Finally, the IE will select out two quantity-facts \"quan(q1,\u679d,n1p)=2361\" and \"quan(q2, \u679d,n2p)=1587\", and then perform \"Sum\" operation on them to obtain \"3948\". If the question in the above example is \"\u6587\u5177\u5e97\u5171\u9032\u8ca8\u5e7e\u679d\u7d05\u7b46 (How many red pens did the stationer buy)?\", the LFC will generate the following facts and utility call for this new question: head(n3p,\u7b46)&color(n3p,\u7d05) ASK Sum(quan(?q,\u679d,n3p),verb(?q,\u9032\u8ca8)&agent(?q,\u6587\u5177\u5e97))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Framework", "sec_num": null }, { "text": "As the result, the IE will only select the quantity-fact \"quan(q1,\u679d,n1p)=2361\", because the modifier in QLF (i.e., \"color(n3p,\u7d05)\") cannot match the associated modifier \"\u85cd (blue)\" (i.e., \"color(n2p,\u85cd)\") of \"quan(q2,\u679d,n2p)=1587\". After performing \"Sum\" operation on it, we thus obtain the answer \"2361\". (We will skip EG due to space limitation. Please refer to [17] for the details).", "cite_spans": [ { "start": 360, "end": 364, "text": "[17]", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Framework", "sec_num": null }, { "text": "Currently Table 3 shows the statistics of the converted corpus.", "cite_spans": [], "ref_spans": [ { "start": 10, "end": 17, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Preliminary Results", "sec_num": null }, { "text": "We have completed a prototype system and have tested it on the seed corpus. The success of our pilot run has demonstrated the feasibility of the proposed approach. We plan to use the next few months to perform weakly supervised learning [18] and fine tune the system. ", "cite_spans": [ { "start": 237, "end": 241, "text": "[18]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminary Results", "sec_num": null }, { "text": "The associated modifiers in the logic form (such as verb(q1,\u9032\u8ca8), agent(q1,\u6587\u5177\u5e97), head(n1p,\u7b46), color(n1p, \u7d05), color(n2p,\u85cd) in the example of the next page) are regarded as various tags (or conditions) for selecting the appropriate information related to the question specified later.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The subscript \"p\" in \"n1p\" indicates that \"n1p\" is a pseudo nonterminal derived from the nonterminal \"n1\", which has four terminals \"2361\", \"\u679d\", \"\u7d05\" and \"\u7b46\". 
More details about pseudo nonterminal will be given at Section 2.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A review of methods for automatic understanding of natural language mathematical problems", "authors": [ { "first": "A", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "U", "middle": [], "last": "Garain", "suffix": "" } ], "year": 2008, "venue": "Artif Intell Rev", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Mukherjee, U. Garain, A review of methods for automatic understanding of natural language mathematical problems, Artif Intell Rev, (2008).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Natural language input for a computer problem solving system", "authors": [ { "first": "D", "middle": [ "G" ], "last": "Bobrow", "suffix": "" } ], "year": 1964, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D.G. Bobrow, Natural language input for a computer problem solving system, Ph.D. Dissertation, Massachusetts Institute of Technology, (1964).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Experiments with a deductive question-answering program", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Slagle", "suffix": "" } ], "year": 1965, "venue": "J-CACM", "volume": "8", "issue": "", "pages": "792--798", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.R. Slagle, Experiments with a deductive question-answering program, J-CACM 8(1965) 792-798.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "CARPS, a program which solves calculus word problems", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak, CARPS, a program which solves calculus word problems, Report MAC-TR-51, Project MAC, MIT, (1968).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Computer solution of calculus word problems", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1969, "venue": "Proc. of International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak, Computer solution of calculus word problems, In Proc. of International Joint Conference on Artificial Intelligence, (1969).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A computer simulation of children's arithmetic word-problem solving", "authors": [ { "first": "D", "middle": [], "last": "Dellarosa", "suffix": "" } ], "year": 1986, "venue": "Behavior Research Methods, Instraments, & Computers", "volume": "18", "issue": "", "pages": "147--154", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Dellarosa, A computer simulation of children's arithmetic word-problem solving, Behavior Research Methods, Instraments, & Computers, 18 (1986) 147-154.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Robust Understanding of Word Problems With Extraneous Information", "authors": [ { "first": "Y", "middle": [], "last": "Bakman", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. 
Bakman, Robust Understanding of Word Problems With Extraneous Information, (2007 Jan).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Experiments with a natural language problem solving system", "authors": [ { "first": "J", "middle": [ "P" ], "last": "Gelb", "suffix": "" } ], "year": 1971, "venue": "Pros. of IJCAI-71", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.P. Gelb, Experiments with a natural language problem solving system, In Pros. of IJCAI-71, (1971).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "INTERACTIVE NATURAL LANGUAGE PROBLEM SOLVING:A PRAGMATIC APPROACH In Pros. of the first conference on applied natural language processing", "authors": [ { "first": "A", "middle": [], "last": "Biermann", "suffix": "" }, { "first": "R", "middle": [], "last": "Rodman", "suffix": "" }, { "first": "B", "middle": [], "last": "Ballard", "suffix": "" }, { "first": "T", "middle": [], "last": "Betancourt", "suffix": "" }, { "first": "G", "middle": [], "last": "Bilbro", "suffix": "" }, { "first": "H", "middle": [], "last": "Deas", "suffix": "" }, { "first": "L", "middle": [], "last": "Fineman", "suffix": "" }, { "first": "P", "middle": [], "last": "Fink", "suffix": "" }, { "first": "K", "middle": [], "last": "Gilbert", "suffix": "" }, { "first": "D", "middle": [], "last": "Gregory", "suffix": "" }, { "first": "F", "middle": [], "last": "Heidlage", "suffix": "" } ], "year": 1982, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Biermann, R. Rodman, B. Ballard, T. Betancourt, G. Bilbro, H. Deas, L. Fineman, P. Fink, K. Gilbert, D. Gregory, F. Heidlage, INTERACTIVE NATURAL LANGUAGE PROBLEM SOLVING:A PRAGMATIC APPROACH In Pros. of the first conference on applied natural language processing, (1982).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "COMPUTER SIMULATION -Understanding and solving arithmetic word problems: A computer simulation", "authors": [ { "first": "C", "middle": [ "R" ], "last": "Fletcher", "suffix": "" } ], "year": 1985, "venue": "Behavior Research Methods, Instruments, & Computers", "volume": "17", "issue": "", "pages": "565--571", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.R. Fletcher, COMPUTER SIMULATION -Understanding and solving arithmetic word problems: A computer simulation, Behavior Research Methods, Instruments, & Computers, 17 (1985) 565-571.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning to Solve Arithmetic Word Problems with Verb Categorization", "authors": [ { "first": "M", "middle": [ "J" ], "last": "Hosseini", "suffix": "" }, { "first": "H", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "O", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "N", "middle": [], "last": "Kushman", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M.J. Hosseini, H. Hajishirzi, O. Etzioni, N. 
Kushman, Learning to Solve Arithmetic Word Problems with Verb Categorization, EMNLP, (2014).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning to Automatically Solve Algebra Word Problems", "authors": [ { "first": "N", "middle": [], "last": "Kushman", "suffix": "" }, { "first": "Y", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "L", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "R", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2014, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Kushman, Y. Artzi, L. Zettlemoyer, R. Barzilay, Learning to Automatically Solve Algebra Word Problems, ACL, (2014).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Reasoning about Quantities in Natural Language", "authors": [ { "first": "S", "middle": [ "I" ], "last": "Roy", "suffix": "" }, { "first": "T", "middle": [ "J H" ], "last": "Vieira", "suffix": "" }, { "first": "D", "middle": [ "I" ], "last": "Roth", "suffix": "" } ], "year": 2015, "venue": "TACL", "volume": "3", "issue": "", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "S.I. Roy, T.J.H. Vieira, D.I. Roth, Reasoning about Quantities in Natural Language, TACL, 3 (2015) 1-13.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Explanation Generation for a Math Word Problem Solver", "authors": [ { "first": "C", "middle": [ "T" ], "last": "Huang", "suffix": "" }, { "first": "Y", "middle": [ "C" ], "last": "Lin", "suffix": "" }, { "first": "K", "middle": [ "Y" ], "last": "Su", "suffix": "" } ], "year": 2016, "venue": "International Journal of Computational Linguistics and Chinese Language Processing (IJCLCLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.T. Huang, Y.C. Lin, K.Y. Su, Explanation Generation for a Math Word Problem Solver, to be published at International Journal of Computational Linguistics and Chinese Language Processing (IJCLCLP), (2016).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Weakly supervised learning of semantic parsers for mapping instructions to actions", "authors": [ { "first": "Y", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "L", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2013, "venue": "Transactions of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "49--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Artzi, L. Zettlemoyer, Weakly supervised learning of semantic parsers for mapping instructions to actions, Transactions of the Association for Computational Linguistics, 1 (2013) 49-62.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "(a) Math Word Problem Solver Diagram (b) Problem Resolution Diagram The block diagram of the proposed Math Word Problem Solver. The block diagram of the proposed MWP solver is shown in Figure 1. First, every sentence in the MWP, including both body text and the question text, is analyzed by the Language Analysis module, which transforms each sentence into its corresponding semantic representation tree. The sequence of semantic representation trees is then sent to the Problem Resolution module, which adopts the logic inference approach to obtain the answer for each question. Finally, the Explanation Generation (EG) module will explain how the answer is obtained (in natural language text) according to the given reasoning chain. 
As the figure depicted, the Problem Resolution module in our system consists of three components: Solution Type Classifier (TC), Logic Form Converter (LFC) and Inference Engine (IE). TC suggests a way to solve the problem for every question in an MWP. In order to perform logic inference, the LFC first extracts the related facts from the given semantic representation tree and then represents them as First Order Logic (FOL) predicates/functions [16]. It also transforms each question into an FOL-like utility function according to the assigned solution type. Finally, according to inference rules, the IE derives new facts from the old ones provided by the LFC. Besides, it is also responsible for providing utilities to perform math operations on related facts. Take the MWP \"\u6587\u5177\u5e97\u9032\u8ca8 2361 \u679d\u7d05\u7b46\u548c 1587 \u679d\u85cd\u7b46 (A stationer bought 2361 red pens and 1587 blue pens), \u6587\u5177\u5e97\u5171\u9032\u8ca8\u5e7e\u679d\u7b46 (How many pens did the stationer buy)?\" as an example. Figure 2 shows the Semantic Representation of this example. Semantic Representation of (a)\"\u6587\u5177\u5e97\u9032\u8ca8 2361 \u679d\u7d05\u7b46\u548c 1587 \u679d\u85cd\u7b46 (A stationer bought 2361 red pens and 1587 blue pens), (b)\u6587\u5177\u5e97\u5171\u9032\u8ca8\u5e7e\u679d\u7b46(How many pens did the stationer buy)?\"", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": ", we have completed all the associated modules (including Word Segmenter, Syntactic Parser, Semantic Composer, TC, LFC, IE, and EG), and have manually annotated 75 samples (in our elementary school math corpus) as the seed corpus (with syntactic tree, semantic tree, logic form, and reasoning chain annotated). Besides, we have cleaned the original elementary school math corpus and encoded it into the appropriate XML format. There are total 23,493 problems divided into six grades; and the average number of words of the body text is 18.2 per problem.", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "content": "", "num": null, "type_str": "table", "html": null, "text": "The 2015 Conference on Computational Linguistics and Speech Processing ROCLING 2015, pp. 58-63 \uf0d3 The Association for Computational Linguistics and Chinese Language Processing generating the explanation." }, "TABREF1": { "content": "
MWP corpus statistics:
  Corpus          Num. of problems
  Training Set    20,093
  Develop Set      1,700
  Test Set         1,700
  Total           23,493

Average length per problem:
  Corpus      Avg. Chinese Chars.   Avg. Chinese Words
  Body        27                    18.2
  Question     9.4                   6.8
", "num": null, "type_str": "table", "html": null, "text": "MWP corpus statistics and Average length per problem" } } } }