Show simple item record

dc.contributor.author: Zhao, Y.
dc.contributor.author: Ngui, J.G.
dc.contributor.author: Hartley, L.H.
dc.contributor.author: Bethard, S.
dc.date.accessioned: 2022-05-20T01:36:58Z
dc.date.available: 2022-05-20T01:36:58Z
dc.date.issued: 2021
dc.identifier.citation: Zhao, Y., Ngui, J. G., Hartley, L. H., & Bethard, S. (2021, November). Do pretrained transformers infer telicity like humans? In Proceedings of the 25th Conference on Computational Natural Language Learning (pp. 72-81).
dc.identifier.isbn: 9781955917056
dc.identifier.doi: 10.18653/v1/2021.conll-1.6
dc.identifier.uri: http://hdl.handle.net/10150/664507
dc.description.abstract: Pretrained transformer-based language models achieve state-of-the-art performance in many NLP tasks, but it is an open question whether the knowledge acquired by the models during pretraining resembles the linguistic knowledge of humans. We present both humans and pretrained transformers with descriptions of events, and measure their preference for telic interpretations (the event has a natural endpoint) or atelic interpretations (the event does not have a natural endpoint). To measure these preferences and determine what factors influence them, we design an English test and a novel-word test that include a variety of linguistic cues (noun phrase quantity, resultative structure, contextual information, temporal units) that bias toward certain interpretations. We find that humans’ choice of telicity interpretation is reliably influenced by theoretically-motivated cues, transformer models (BERT and RoBERTa) are influenced by some (though not all) of the cues, and transformer models often rely more heavily on temporal units than humans do. © 2021 Association for Computational Linguistics.
dc.language.iso: en
dc.publisher: Association for Computational Linguistics (ACL)
dc.rights: © 2021 Association for Computational Linguistics, licensed under a Creative Commons Attribution 4.0 International License.
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Do pretrained transformers infer telicity like humans?
dc.type: Proceedings
dc.type: text
dc.contributor.department: University of Arizona, Department of Linguistics
dc.contributor.department: University of Arizona, School of Information
dc.identifier.journal: CoNLL 2021 - 25th Conference on Computational Natural Language Learning, Proceedings
dc.description.note: Open access journal
dc.description.collectioninformation: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
dc.eprint.version: Final published version
dc.source.journaltitle: CoNLL 2021 - 25th Conference on Computational Natural Language Learning, Proceedings
refterms.dateFOA: 2022-05-20T01:36:58Z
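
The abstract describes eliciting telicity preferences from pretrained transformers by presenting them with event descriptions carrying telic- or atelic-biasing cues. As a rough illustration only (the record does not specify the paper's actual scoring protocol; the model name, example sentences, and pseudo-log-likelihood scoring below are all assumptions), one could compare how a masked language model such as BERT scores a telic-biased temporal modifier ("in an hour") against an atelic-biased one ("for an hour"):

    # Hypothetical sketch, not the paper's protocol: score two temporal-unit
    # frames with BERT and compare. Requires: pip install torch transformers
    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
    model.eval()

    def pseudo_log_likelihood(sentence: str) -> float:
        """Mask each token in turn and sum the log-probability BERT
        assigns to the true token at the masked position."""
        ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
        total = 0.0
        for i in range(1, ids.size(0) - 1):  # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id
            with torch.no_grad():
                logits = model(input_ids=masked.unsqueeze(0)).logits[0, i]
            total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
        return total

    # Classic telicity diagnostic: telic predicates prefer "in X time",
    # atelic predicates prefer "for X time".
    telic = pseudo_log_likelihood("She drank a glass of wine in an hour.")
    atelic = pseudo_log_likelihood("She drank a glass of wine for an hour.")
    print("telic preferred" if telic > atelic else "atelic preferred")

The novel-word test described in the abstract could be approximated the same way by substituting a nonce verb into both frames; whichever frame the model scores higher indicates its preferred interpretation.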


Files in this item

Name: 2021.conll-1.6v2.pdf
Size: 435.0 KB
Format: PDF
Description: Final Published Version

