Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering
Publisher
ASSOC COMPUTATIONAL LINGUISTICS-ACL
Citation
Yadav, V., Bethard, S., & Surdeanu, M. (2020). Unsupervised alignment-based iterative evidence retrieval for multi-hop question answering. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020).
Rights
Copyright © 2020 Association for Computational Linguistics. Materials are licensed on a Creative Commons Attribution 4.0 International License.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
Evidence retrieval is a critical stage of question answering (QA), necessary not only to improve performance, but also to explain the decisions of the corresponding QA method. We introduce a simple, fast, and unsupervised iterative evidence retrieval method, which relies on three ideas: (a) an unsupervised alignment approach to soft-align questions and answers with justification sentences using only GloVe embeddings, (b) an iterative process that reformulates queries focusing on terms that are not covered by existing justifications, and (c) a stopping criterion that terminates retrieval when the terms in the given question and candidate answers are covered by the retrieved justifications. Despite its simplicity, our approach outperforms all the previous methods (including supervised methods) on the evidence selection task on two datasets: MultiRC and QASC. When these evidence sentences are fed into a RoBERTa answer classification component, we achieve state-of-the-art QA performance on these two datasets.
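To make the three ideas in the abstract concrete, the sketch below gives one possible reading of the retrieval loop. It is a minimal illustration, not the paper's implementation: it assumes GloVe vectors are available as a plain `embeddings` dict (token to vector), scores each candidate sentence by averaging, over the still-uncovered query terms, the maximum cosine similarity to any sentence term, and uses exact token overlap for the coverage-based stopping criterion. The helper names (`align_score`, `iterative_retrieval`) and the `max_hops` cap are illustrative assumptions.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def align_score(query_terms, sentence_terms, embeddings):
    # Soft-align each query term to its best-matching sentence term
    # (max cosine similarity) and average over the query terms.
    per_term = []
    for q in query_terms:
        if q not in embeddings:
            continue
        best = max((cosine(embeddings[q], embeddings[s])
                    for s in sentence_terms if s in embeddings), default=0.0)
        per_term.append(best)
    return sum(per_term) / len(per_term) if per_term else 0.0

def iterative_retrieval(question, candidate_answer, sentences, embeddings, max_hops=4):
    # The query starts as the union of question and candidate-answer terms.
    remaining = set(question.lower().split()) | set(candidate_answer.lower().split())
    selected_idx, justifications = set(), []
    for _ in range(max_hops):
        if not remaining:  # stopping criterion: every query term is covered
            break
        candidates = [(i, s) for i, s in enumerate(sentences) if i not in selected_idx]
        if not candidates:
            break
        # Rank unselected sentences by alignment with the *uncovered* query terms only
        # (this is the query-reformulation idea: covered terms no longer drive retrieval).
        best_i, best_s = max(
            candidates,
            key=lambda c: align_score(remaining, c[1].lower().split(), embeddings))
        selected_idx.add(best_i)
        justifications.append(best_s)
        # Drop terms now covered by the newly retrieved justification.
        remaining -= set(best_s.lower().split())
    return justifications
```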
Note
Open access journal
Version
Final published version
Collections
UA Faculty Publications