Framing QA as Building and Ranking Intersentence Answer Justifications
Affiliation
Univ Arizona, Sch Informat
Univ Arizona, Dept Linguist
Univ Arizona, Dept Comp Sci
Issue Date
2017-06
Publisher
MIT Press
Citation
Framing QA as Building and Ranking Intersentence Answer Justifications. Computational Linguistics, 2017, 43 (2): 407
Journal
Computational Linguistics
Rights
© 2017 Association for Computational Linguistics. Published under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
We propose a question answering (QA) approach for standardized science exams that both identifies correct answers and produces compelling human-readable justifications for why those answers are correct. Our method first identifies the actual information needed in a question using psycholinguistic concreteness norms, then uses this information need to construct answer justifications by aggregating multiple sentences from different knowledge bases using syntactic and lexical information. We then jointly rank answers and their justifications using a reranking perceptron that treats justification quality as a latent variable. We evaluate our method on 1,000 multiple-choice questions from elementary school science exams, and empirically demonstrate that it performs better than several strong baselines, including neural network approaches. Our best configuration answers 44% of the questions correctly, where the top justifications for 57% of these correct answers contain a compelling human-readable justification that explains the inference required to arrive at the correct answer. We include a detailed characterization of the justification quality for both our method and a strong baseline, and show that information aggregation is key to addressing the information need in complex questions.
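To illustrate the joint ranking step described in the abstract, the following is a minimal Python sketch (not the authors' released code) of a latent-variable reranking perceptron: each candidate answer carries several aggregated multi-sentence justifications, the best-scoring justification is treated as a latent choice, and the weights are updated on mistakes toward the gold answer's best justification. The feature function phi, the data format, and all names are hypothetical assumptions made for illustration.

# Illustrative sketch of a latent-variable reranking perceptron for joint
# answer/justification ranking. phi maps a justification to a NumPy feature
# vector; the training-data layout is an assumption, not the authors' format.
import numpy as np

def best_justification(w, justifications, phi):
    """Return (score, features) of the highest-scoring justification under w."""
    feats = [phi(j) for j in justifications]
    scores = [w @ f for f in feats]
    k = int(np.argmax(scores))
    return scores[k], feats[k]

def latent_reranking_perceptron(questions, phi, dim, epochs=10, lr=1.0):
    """questions: list of dicts with 'candidates' mapping each answer choice to
    a list of candidate justifications, and 'gold' naming the correct choice."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for q in questions:
            # Score every candidate answer by its best (latent) justification.
            scored = {a: best_justification(w, js, phi)
                      for a, js in q["candidates"].items()}
            pred = max(scored, key=lambda a: scored[a][0])
            if pred != q["gold"]:
                # Update toward the gold answer's best justification and away
                # from the top-ranked wrong answer's best justification.
                w += lr * (scored[q["gold"]][1] - scored[pred][1])
    return w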
Note
Open Access Journal.
ISSN
0891-2017 (print); 1530-9312 (online)
Version
Final published version
Sponsors
Allen Institute for Artificial Intelligence
Additional Links
http://www.mitpressjournals.org/doi/abs/10.1162/COLI_a_00287
DOI
10.1162/COLI_a_00287