A model of speech production based on the acoustic relativity of the vocal tract
Name:
JASA-EL-SB2019_REV2.pdf
Size:
9.846 MB
Format:
PDF
Description:
Final Accepted Manuscript
Affiliation
Univ Arizona, Speech Language & Hearing Sci
Issue Date
2019-10-17
Publisher
ACOUSTICAL SOC AMER AMER INST PHYSICS
Citation
The Journal of the Acoustical Society of America 146, 2522 (2019); doi: 10.1121/1.5127756
Rights
Copyright © 2019 Acoustical Society of America.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
A model is described in which the effects of articulatory movements to produce speech are generated by specifying relative acoustic events along a time axis. These events consist of directional changes of the vocal tract resonance frequencies that, when associated with a temporal event function, are transformed, via acoustic sensitivity functions, into time-varying modulations of the vocal tract shape. Because the time courses of the events may overlap considerably, coarticulatory effects are generated automatically. Production of sentence-level speech with the model is demonstrated with audio samples and vocal tract animations. (C) 2019 Acoustical Society of America.
Note
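The core mapping the abstract describes — turning desired resonance-frequency changes into vocal tract shape changes via acoustic sensitivity functions — can be sketched with first-order perturbation theory, where the fractional frequency shift of resonance n is the sum over tube sections of S_n(x) times the fractional area change. The sketch below is an illustration only: the cosine "sensitivity functions", the 44-section tract, and the helper `area_perturbation` are made-up stand-ins, not the paper's actual functions or parameters.

```python
import numpy as np

def area_perturbation(sens, df_over_f, gain=1.0):
    """Map desired fractional resonance-frequency changes to a fractional
    area-function perturbation by superposing sensitivity functions.

    sens      : (n_res, n_sections) sensitivity function per resonance
    df_over_f : (n_res,) desired fractional frequency change per resonance
    Returns a (n_sections,) fractional area change, one value per tube section.
    """
    # Each resonance contributes a perturbation shaped like its own
    # sensitivity function, scaled by the requested frequency shift.
    return gain * sens.T @ df_over_f

# Toy example: two stand-in sensitivity functions on a 44-section tract.
n_sec = 44
x = np.linspace(0.0, 1.0, n_sec)
sens = np.vstack([np.cos(np.pi * x),        # stand-in for S1(x)
                  np.cos(2.0 * np.pi * x)]) # stand-in for S2(x)

# Raise resonance 1 by 5% and lower resonance 2 by 3%.
dA_over_A = area_perturbation(sens, np.array([0.05, -0.03]))
print(dA_over_A.shape)  # (44,)
```

Applying such a perturbation over time, weighted by a temporal event function, would produce the overlapping, coarticulated tract modulations the abstract refers to; the real model's sensitivity functions are computed from the tract's acoustic field, not assumed shapes.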
6-month embargo; published online: 17 October 2019
ISSN
0001-4966
PubMed ID
31671993
Version
Final accepted manuscript
DOI
10.1121/1.5127756
