How May I Help You? Using Neural Text Simplification to Improve Downstream NLP Tasks
Name:
2021.findings-emnlp.343.pdf
Size:
199.9Kb
Format:
PDF
Description:
Final Published Version
Affiliation
Department of Computer Science, University of Arizona
Issue Date
2021
Citation
Van, H., Tang, Z., & Surdeanu, M. (2021, November). How May I Help You? Using Neural Text Simplification to Improve Downstream NLP Tasks. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 4074-4080).
Rights
Copyright © 2021 Association for Computational Linguistics. Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
The general goal of text simplification (TS) is to reduce text complexity for human consumption. In this paper, we investigate another potential use of neural TS: assisting machines performing natural language processing (NLP) tasks. We evaluate the use of neural TS in two ways: simplifying input texts at prediction time and augmenting data to provide machines with additional information during training. We demonstrate that the latter scenario provides positive effects on machine performance on two separate datasets. In particular, the latter use of TS significantly improves the performance of LSTM (1.82-1.98%) and SpanBERT (0.7-1.3%) extractors on TACRED, a complex, large-scale, real-world relation extraction task. Further, the same setting yields significant improvements of up to 0.65% matched and 0.62% mismatched accuracy for a BERT text classifier on MNLI, a practical natural language inference dataset. © 2021 Association for Computational Linguistics.
Note
Open access journal
ISBN
9781955917100
Version
Final published version