Students Who Study Together Learn Better: On the Importance of Collective Knowledge Distillation for Domain Transfer in Fact Verification
Name: 2021.emnlp-main.558.pdf
Size: 198.5 KB
Format: PDF
Description: Final Published Version
Citation
Mithun, M. P., Suntwal, S., & Surdeanu, M. (2021, November). Students Who Study Together Learn Better: On the Importance of Collective Knowledge Distillation for Domain Transfer in Fact Verification. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 6968-6973).
Journal
EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings
Rights
Copyright © 2021 Association for Computational Linguistics, licensed under a Creative Commons Attribution 4.0 International License.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
While neural networks produce state-of-the-art performance in several NLP tasks, they depend heavily on lexicalized information, which transfers poorly between domains. Previous work (Suntwal et al., 2019) proposed delexicalization as a form of knowledge distillation to reduce dependency on such lexical artifacts. However, a critical unsolved issue remains: how much delexicalization should be applied? A little helps reduce overfitting, but too much discards useful information. We propose Group Learning (GL), a knowledge and model distillation approach for fact verification. In our method, while multiple student models have access to different delexicalized data views, they are encouraged to independently learn from each other through pair-wise consistency losses. In several cross-domain experiments between the FEVER and FNC fact verification datasets, we show that our approach learns the best delexicalization strategy for the given training dataset and outperforms state-of-the-art classifiers that rely on the original data. © 2021 Association for Computational Linguistics
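The pair-wise consistency idea described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' released implementation: the `students` and `views` arguments, the symmetric-KL consistency term, and the `consistency_weight` hyperparameter are assumptions about one plausible instantiation of coupling students that see differently delexicalized inputs.

```python
import torch
import torch.nn.functional as F

def group_learning_step(students, views, labels, consistency_weight=1.0):
    """One hypothetical Group Learning step: each student classifies its own
    delexicalized view of the same batch, and pairwise consistency terms
    encourage the students' predictions to agree."""
    # Forward pass: one set of logits per student, each on its own data view.
    logits = [model(view) for model, view in zip(students, views)]

    # Supervised cross-entropy loss for every student on the shared gold labels.
    supervised = sum(F.cross_entropy(l, labels) for l in logits)

    # Pairwise consistency: symmetric KL divergence between every pair of
    # students, pushing them toward agreement despite different input views.
    consistency = 0.0
    for i in range(len(logits)):
        for j in range(i + 1, len(logits)):
            log_p_i = F.log_softmax(logits[i], dim=-1)
            log_p_j = F.log_softmax(logits[j], dim=-1)
            consistency = consistency + 0.5 * (
                F.kl_div(log_p_i, log_p_j.exp(), reduction="batchmean")
                + F.kl_div(log_p_j, log_p_i.exp(), reduction="batchmean")
            )

    return supervised + consistency_weight * consistency
```

In this sketch the supervised and consistency terms are simply summed; how the trade-off between them is weighted or scheduled during training is left as a hyperparameter choice.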
Note
Open access journal
ISBN
9781955917094
Version
Final published version
DOI
10.18653/v1/2021.emnlp-main.558