A BERT-based One-Pass Multi-Task Model for Clinical Temporal Relation Extraction
Publisher
Association for Computational Linguistics (ACL)

Citation
Lin, C., Miller, T., Dligach, D., Sadeque, F., Bethard, S., & Savova, G. (2020, July). A BERT-based One-Pass Multi-Task Model for Clinical Temporal Relation Extraction. In Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing (pp. 70-75).

Rights
© 2020 The Association for Computational Linguistics. Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License.

Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.

Abstract
Recently, BERT has achieved state-of-the-art performance in temporal relation extraction from clinical Electronic Medical Records text. However, the current approach is inefficient, as it requires multiple passes through each input sequence. We extend a recently proposed one-pass model for relation classification to a one-pass model for relation extraction. We augment this framework by introducing global embeddings to help with long-distance relation inference, and by multi-task learning to increase model performance and generalizability. Our proposed model produces results on par with the state-of-the-art in temporal relation extraction on the THYME corpus and is much "greener" in computational cost.

Note
Open access journal

Version
Final published version
DOI
10.18653/v1/2020.bionlp-1.7

