Native Listeners’ Use of Information in Parsing Ambiguous Casual Speech
Name:
brainsci-12-00930-v3.pdf
Size:
1.379 MB
Format:
PDF
Description:
Final Published Version
Affiliation
Department of Linguistics, University of Arizona
Issue Date
2022
Publisher
MDPI
Citation
Warner, N., Brenner, D., Tucker, B. V., & Ernestus, M. (2022). Native Listeners’ Use of Information in Parsing Ambiguous Casual Speech. Brain Sciences, 12(7).
Journal
Brain Sciences
Rights
Copyright © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
In conversational speech, phones and entire syllables are often missing. This can make “he’s” and “he was” homophonous, realized for example as [ɨz]. Similarly, “you’re” and “you were” can both be realized as [jɚ], [ɨ], etc. We investigated what types of information native listeners use to perceive such verb tenses. Possible types included acoustic cues in the phrase (e.g., in “he was”), the rate of the surrounding speech, and syntactic and semantic information in the utterance, such as the presence of time adverbs such as “yesterday” or other tensed verbs. We extracted utterances such as “So they’re gonna have like a random roommate” and “And he was like, ‘What’s wrong?!’” from recordings of spontaneous conversations. We presented parts of these utterances to listeners, in either a written or auditory modality, to determine which types of information facilitated listeners’ comprehension. Listeners rely primarily on acoustic cues in or near the target words rather than meaning and syntactic information in the context. While that information also improves comprehension in some conditions, the acoustic cues in the target itself are strong enough to reverse the percept that listeners gain from all other information together. Acoustic cues override other information in comprehending reduced productions in conversational speech. © 2022 by the authors.
Note
Open access journal
ISSN
2076-3425
Version
Final published version
DOI
10.3390/brainsci12070930