On the Challenges of Evaluating Compositional Explanations in Multi-Hop Inference: Relevance, Completeness, and Expert Ratings
| DC Field | Value |
| --- | --- |
| dc.contributor.author | Jansen, P.A. |
| dc.contributor.author | Smith, K. |
| dc.contributor.author | Moreno, D. |
| dc.contributor.author | Ortiz, H. |
| dc.date.accessioned | 2022-05-19T23:19:49Z |
| dc.date.available | 2022-05-19T23:19:49Z |
| dc.date.issued | 2021 |
| dc.identifier.citation | Jansen, P., Smith, K., Moreno, D., & Ortiz, H. (2021). On the Challenges of Evaluating Compositional Explanations in Multi-Hop Inference: Relevance, Completeness, and Expert Ratings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7529–7542, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. |
| dc.identifier.isbn | 9781955917094 |
| dc.identifier.doi | 10.18653/v1/2021.emnlp-main.596 |
| dc.identifier.uri | http://hdl.handle.net/10150/664431 |
| dc.description.abstract | Building compositional explanations requires models to combine two or more facts that, together, describe why the answer to a question is correct. Typically, these “multi-hop” explanations are evaluated relative to one (or a small number of) gold explanations. In this work, we show these evaluations substantially underestimate model performance, both in terms of the relevance of included facts, as well as the completeness of model-generated explanations, because models regularly discover and produce valid explanations that are different than gold explanations. To address this, we construct a large corpus of 126k domain-expert (science teacher) relevance ratings that augment a corpus of explanations to standardized science exam questions, discovering 80k additional relevant facts not rated as gold. We build three strong models based on different methodologies (generation, ranking, and schemas), and empirically show that while expert-augmented ratings provide better estimates of explanation quality, both original (gold) and expert-augmented automatic evaluations still substantially underestimate performance by up to 36% when compared with full manual expert judgements, with different models being disproportionately affected. This poses a significant methodological challenge to accurately evaluating explanations produced by compositional reasoning models. © 2021 Association for Computational Linguistics |
| dc.language.iso | en |
| dc.publisher | Association for Computational Linguistics (ACL) |
| dc.rights | Copyright © 2021 Association for Computational Linguistics, licensed on a Creative Commons Attribution 4.0 International License. |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ |
| dc.title | On the Challenges of Evaluating Compositional Explanations in Multi-Hop Inference: Relevance, Completeness, and Expert Ratings |
| dc.type | Proceedings |
| dc.type | text |
| dc.contributor.department | University of Arizona |
| dc.identifier.journal | EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings |
| dc.description.note | Open access journal |
| dc.description.collectioninformation | This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu. |
| dc.eprint.version | Final published version |
| dc.source.journaltitle | EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings |
| refterms.dateFOA | 2022-05-19T23:19:49Z |
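
The abstract describes scoring model-generated explanations against a single gold explanation versus an expert-augmented set of relevant facts. The sketch below is not code from the paper; it is a minimal, hypothetical illustration of fact-level precision/recall/F1 against each reference set, showing how a gold-only reference can underestimate quality when a model assembles a valid alternative explanation. All fact identifiers and example values are invented for illustration.

```python
# Hypothetical sketch: score an explanation's facts against (a) a single gold
# explanation and (b) an expert-augmented set of relevant facts.

def fact_f1(predicted, reference):
    """Precision, recall, and F1 of predicted fact IDs against a reference set."""
    predicted, reference = set(predicted), set(reference)
    if not predicted or not reference:
        return 0.0, 0.0, 0.0
    overlap = predicted & reference
    precision = len(overlap) / len(predicted)
    recall = len(overlap) / len(reference)
    f1 = 0.0 if not overlap else 2 * precision * recall / (precision + recall)
    return precision, recall, f1


# Invented example: the model combines facts that are valid (per expert ratings)
# but only partially overlap with the single annotated gold explanation.
gold_facts = {"f1", "f2", "f3"}               # annotated gold explanation
expert_relevant = gold_facts | {"f7", "f9"}   # gold plus expert-rated relevant facts
model_facts = {"f1", "f7", "f9"}              # model-generated explanation

print("vs. gold:             P=%.2f R=%.2f F1=%.2f" % fact_f1(model_facts, gold_facts))
print("vs. expert-augmented: P=%.2f R=%.2f F1=%.2f" % fact_f1(model_facts, expert_relevant))
```

Under these invented numbers, the same explanation scores F1 = 0.33 against the gold set but 0.75 against the expert-augmented set, which is the kind of gap between automatic and expert-informed evaluation that the abstract reports.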

