Explainable Multi-Step Reasoning Over Natural Language
dc.contributor.advisor | Surdeanu, Mihai | |
dc.contributor.author | Liang, Zhengzhong | |
dc.creator | Liang, Zhengzhong | |
dc.date.accessioned | 2023-12-16T00:07:15Z | |
dc.date.available | 2023-12-16T00:07:15Z | |
dc.date.issued | 2023 | |
dc.identifier.citation | Liang, Zhengzhong. (2023). Explainable Multi-Step Reasoning Over Natural Language (Doctoral dissertation, University of Arizona, Tucson, USA). | |
dc.identifier.uri | http://hdl.handle.net/10150/670236 | |
dc.description.abstract | Despite significant progress in reasoning over natural language in recent years, multi-step natural language reasoning remains a great challenge for current algorithms. The challenges come from three aspects. First, some multi-step reasoning problems require retrieving evidence from large knowledge bases, and neither traditional information retrieval (IR) methods nor neural IR methods yield satisfactory results. Second, neural language models are largely black boxes: how they solve multi-step reasoning problems is unclear, so they lack interpretability. Third, neural language models suffer from compositional generalization issues when solving multi-step reasoning problems: when trained on simple tasks but tested on harder tasks that require more reasoning steps than seen in training, they tend to fail. In this dissertation we seek to mitigate these challenges. First, we propose a simple yet effective algorithm that combines traditional word-overlap-based IR methods with neural IR methods. As a first step, we design linear probe tasks to formally show that traditional IR and neural IR are good at handling different queries. Following this observation, we propose a routing algorithm that selects whether to use traditional IR or neural IR for each query. Empirical evaluation on three datasets shows that the proposed routing algorithm yields equally good or better performance than either individual IR method. Second, we propose the Explainable Verbal Reasoner (EVR) to increase the interpretability and reduce the compositional generalization issues of language models. EVR increases the interpretability of language models in two ways: (1) it decomposes a complex multi-step reasoning problem into a few simple problems, each handled by a dedicated module; and (2) for a complex multi-step reasoning problem, it generates the intermediate reasoning steps in text form, including symbolic operators and natural language explanations. Empirical evaluation on a synthetic multi-step reasoning dataset shows that EVR largely reduces the compositional generalization issues of language models compared with several strong baselines. Finally, we propose Explainable Verbal Reasoner Plus (EVR+), a reasoning framework that can handle diverse types of reasoning. Like EVR, EVR+ is proposed to increase the interpretability and reduce the compositional generalization issues of language models on multi-step reasoning problems, and it shares the same key features as EVR. However, EVR+ can handle much more diverse types of reasoning problems than EVR. We achieve this by allowing EVR+ to generate very diverse types of symbolic operators and by devising an interpreter that can parse and execute such operators. To evaluate EVR+, we propose SynthCompR, a synthetic dataset containing five tasks that require compositional reasoning over natural language. Results show that EVR+ achieves much better compositional generalization performance than a strong neural baseline. | |
dc.language.iso | en | |
dc.publisher | The University of Arizona. | |
dc.rights | Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author. | |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | |
dc.subject | generative language model | |
dc.subject | machine reasoning | |
dc.title | Explainable Multi-Step Reasoning Over Natural Language | |
dc.type | Electronic Dissertation | |
dc.type | text | |
thesis.degree.grantor | University of Arizona | |
thesis.degree.level | doctoral | |
dc.contributor.committeemember | Bethard, Steven | |
dc.contributor.committeemember | Levine, Joshua | |
dc.contributor.committeemember | Tandon, Ravi | |
dc.description.release | Release after 09/27/2024 | |
thesis.degree.discipline | Graduate College | |
thesis.degree.discipline | Computer Science | |
thesis.degree.name | Ph.D. |