Combining Extraction and Generation for Constructing Belief-Consequence Causal Links
Affiliation: Department of Linguistics, University of Arizona; Computer Science Department, University of Arizona
Citation: Maria Alexeeva, Allegra A. Beal Cohen, and Mihai Surdeanu. 2022. Combining Extraction and Generation for Constructing Belief-Consequence Causal Links. In Proceedings of the Third Workshop on Insights from Negative Results in NLP, pages 159–164, Dublin, Ireland. Association for Computational Linguistics.
Journal: Insights 2022 - 3rd Workshop on Insights from Negative Results in NLP, Proceedings of the Workshop
Rights: Copyright © 2022 Association for Computational Linguistics. This is an open access article licensed under a Creative Commons Attribution 4.0 International License.
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at firstname.lastname@example.org.
Abstract: In this paper, we introduce and justify a new task—causal link extraction based on beliefs—and conduct a qualitative analysis of the ability of a large language model—InstructGPT-3—to generate implicit consequences of beliefs. Because the language model-generated consequences are promising but not consistent, we propose directions for future work, including data collection, explicit consequence extraction using rule-based and language modeling-based approaches, and using explicitly stated consequences of beliefs to fine-tune or prompt the language model to produce outputs suitable for the task.
Note: Open access journal
Version: Final published version