Computational Natural Language Inference: Robust and Interpretable Question Answering
Author: Sharp, Rebecca
Issue Date: 2017
Advisors: Hammond, Michael; Surdeanu, Mihai
Publisher: The University of Arizona
Rights: Copyright © is held by the author, with a Creative Commons Attribution-No Derivative Works 3.0 License. Digital access to this material is made possible by the University Libraries, University of Arizona.
Abstract:
We address the challenging task of computational natural language inference, by which we mean bridging two or more natural language texts while also providing an explanation of how they are connected. In the context of question answering (i.e., finding short answers to natural language questions), this inference connects the question with its answer, and we learn to approximate this inference with machine learning. In particular, we present four approaches to question answering, each of which shows a significant improvement in performance over baseline methods. In our first approach, we make use of the discourse structure inherent in free text (i.e., whether the text contains an explanation, elaboration, contrast, etc.) to increase the amount of training data for, and thereby the performance of, a monolingual alignment model. In our second approach, we propose a framework for training customized lexical semantics models such that each one represents a single semantic relation. Using causality as a use case, we demonstrate that our customized model both identifies causal relations and significantly improves our ability to answer causal questions. We then propose two approaches that answer questions by learning to rank human-readable justifications for candidate answers, such that the model selects the answer with the best justification. The first uses a graph-structured representation of the background knowledge and performs information aggregation to construct multi-sentence justifications. The second reduces pre-processing costs by limiting itself to a single sentence and using a neural network to learn a latent representation of the background knowledge. For each of these, we show that in addition to significantly improving question answering accuracy, we outperform a strong baseline in the quality of the answer justifications produced.
Type: text; Electronic Dissertation
Degree Name: Ph.D.
Degree Level: doctoral
Degree Program: Graduate College; Linguistics
Degree Grantor: University of Arizona