Exploring Text Representations for Generative Temporal Relation Extraction
Name: 2022.clinicalnlp-1.12.pdf
Size: 113.3 KB
Format: PDF
Description: Final Published Version
Citation
Dmitriy Dligach, Steven Bethard, Timothy Miller, and Guergana Savova. 2022. Exploring Text Representations for Generative Temporal Relation Extraction. In Proceedings of the 4th Clinical Natural Language Processing Workshop, pages 109–113, Seattle, WA. Association for Computational Linguistics.
Rights
Copyright © 2022 Association for Computational Linguistics. Licensed under a Creative Commons Attribution 4.0 International License.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
Sequence-to-sequence models are appealing because they allow both encoder and decoder to be shared across many tasks by formulating those tasks as text-to-text problems. Despite recently reported successes of such models, we find that engineering input/output representations for such text-to-text models is challenging. On the Clinical TempEval 2016 relation extraction task, the most natural choice of output representations, where relations are spelled out in simple predicate logic statements, did not lead to good performance. We explore a variety of input/output representations, with the most successful one prompting for one event at a time and achieving results competitive with standard pairwise temporal relation extraction systems.
Note
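To make the abstract's text-to-text formulation concrete, the sketch below builds a hypothetical training pair for the "one event at a time" style it describes: the input marks a single focus event, and the output spells out that event's temporal relations as simple predicate statements. The marker tokens, relation labels, and helper name are illustrative assumptions, not the exact representations studied in the paper.

```python
# Hypothetical sketch of a text-to-text formulation for temporal relation
# extraction. Marker tokens (<e> ... </e>) and the predicate-style output
# format are assumptions for illustration only.

def build_example(text, event, relations):
    """Mark one focus event in the source text and emit its temporal
    relations as predicate-logic-style statements in the target text."""
    source = text.replace(event, f"<e> {event} </e>", 1)
    target = "; ".join(f"{rel}({event}, {other})" for rel, other in relations)
    return source, target

src, tgt = build_example(
    "The patient was admitted and underwent surgery the next day.",
    "admitted",
    [("BEFORE", "surgery")],
)
print(src)  # the focus event is wrapped in <e> ... </e> markers
print(tgt)  # BEFORE(admitted, surgery)
```

A pairwise system would instead classify every (event, event) pair separately; prompting per event lets one decoder pass generate all relations anchored at that event.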
Open access journal
ISBN
9781955917773
Version
Final published version
DOI
10.18653/v1/2022.clinicalnlp-1.12