A BERT-based One-Pass Multi-Task Model for Clinical Temporal Relation Extraction
Metadata
Publisher: Association for Computational Linguistics (ACL)
Citation: Lin, C., Miller, T., Dligach, D., Sadeque, F., Bethard, S., & Savova, G. (2020, July). A BERT-based One-Pass Multi-Task Model for Clinical Temporal Relation Extraction. In Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing (pp. 70-75).
Rights: © 2020 The Association for Computational Linguistics. Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License.
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at email@example.com.
Abstract: Recently, BERT has achieved state-of-the-art performance in temporal relation extraction from clinical electronic medical record text. However, the current approach is inefficient, as it requires multiple passes through each input sequence. We extend a recently proposed one-pass model for relation classification to a one-pass model for relation extraction. We augment this framework by introducing global embeddings to help with long-distance relation inference, and by multi-task learning to increase model performance and generalizability. Our proposed model produces results on par with the state of the art in temporal relation extraction on the THYME corpus and is much "greener" in computational cost.
Note: Open access journal
Version: Final published version