Ensemble-based Fine-Tuning Strategy for Temporal Relation Extraction from the Clinical Narrative
Name: 2022.clinicalnlp-1.11.pdf
Size: 169.8 KB
Format: PDF
Description: Final Published Version
Citation
Lijing Wang, Timothy Miller, Steven Bethard, and Guergana Savova. 2022. Ensemble-based Fine-Tuning Strategy for Temporal Relation Extraction from the Clinical Narrative. In Proceedings of the 4th Clinical Natural Language Processing Workshop, pages 103–108, Seattle, WA. Association for Computational Linguistics.

Rights

Copyright © 2022 Association for Computational Linguistics. Licensed under a Creative Commons Attribution 4.0 International License.

Collection Information

This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.

Abstract
In this paper, we investigate ensemble methods for fine-tuning transformer-based pretrained models for clinical natural language processing tasks, specifically temporal relation extraction from the clinical narrative. Our experimental results on the THYME data show that ensembling as a fine-tuning strategy can further boost model performance over single learners optimized for hyperparameters. Dynamic snapshot ensembling is particularly beneficial as it fine-tunes a wide array of parameters and results in a 2.8% absolute improvement in F1 over the base single learner.
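As a rough illustration of snapshot ensembling as a fine-tuning strategy, the sketch below saves model weights at intervals during fine-tuning and soft-votes (averages softmax probabilities) over the saved snapshots at prediction time. This is a minimal generic PyTorch sketch, not the paper's dynamic snapshot ensembling procedure: the RelationClassifier toy model, the snapshot_every schedule, the soft-voting combination rule, and all hyperparameters are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical tiny relation classifier standing in for a transformer encoder.
class RelationClassifier(nn.Module):
    def __init__(self, dim=32, num_labels=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, num_labels))

    def forward(self, x):
        return self.net(x)

def fine_tune_with_snapshots(model, data, labels, epochs=6,
                             snapshot_every=2, lr=1e-3):
    """Fine-tune and keep a copy of the weights every `snapshot_every` epochs."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    snapshots = []
    for epoch in range(1, epochs + 1):
        opt.zero_grad()
        loss = F.cross_entropy(model(data), labels)
        loss.backward()
        opt.step()
        if epoch % snapshot_every == 0:
            # Snapshot the full parameter state; each snapshot is one ensemble member.
            snapshots.append(copy.deepcopy(model.state_dict()))
    return snapshots

def ensemble_predict(model, snapshots, x):
    """Soft-vote: average softmax probabilities across all saved snapshots."""
    probs = torch.zeros(x.size(0), model.net[-1].out_features)
    with torch.no_grad():
        for state in snapshots:
            model.load_state_dict(state)
            model.eval()
            probs += F.softmax(model(x), dim=-1)
    return (probs / len(snapshots)).argmax(dim=-1)

if __name__ == "__main__":
    torch.manual_seed(0)
    X, y = torch.randn(100, 32), torch.randint(0, 3, (100,))  # toy data
    clf = RelationClassifier()
    snaps = fine_tune_with_snapshots(clf, X, y)
    preds = ensemble_predict(clf, snaps, X)
    print("ensembled predictions:", preds[:10].tolist())
```

The appeal of this style of ensembling is that it yields multiple ensemble members from a single fine-tuning run, at roughly the cost of training one model; the paper's dynamic variant should be consulted for how snapshots are actually selected and combined.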
Note

Open access journal

ISBN
9781955917773

Version
Final published version

DOI

10.18653/v1/2022.clinicalnlp-1.11