Time-varying quasi-closed-phase analysis for accurate formant tracking in speech signals
Name: Time_varying_quasi_closed_phas ...
Size: 1.867 MB
Format: PDF
Description: Final Accepted Manuscript
Affiliation: Univ Arizona
Issue Date: 2020
Keywords: time-varying linear prediction; weighted linear prediction; quasi-closed-phase analysis; formant tracking
Citation: D. Gowda, S. R. Kadiri, B. Story and P. Alku, "Time-Varying Quasi-Closed-Phase Analysis for Accurate Formant Tracking in Speech Signals," in IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 1901-1914, 2020, doi: 10.1109/TASLP.2020.3000037.
Rights: © 2020 IEEE.
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract:
In this paper, we propose a new method for the accurate estimation and tracking of formants in speech signals using time-varying quasi-closed-phase (TVQCP) analysis. Conventional formant tracking methods typically adopt a two-stage estimate-and-track strategy wherein an initial set of formant candidates is estimated using short-time analysis (e.g., 10-50 ms), followed by a tracking stage based on dynamic programming or a linear state-space model. One of the main disadvantages of these approaches is that the tracking stage, however good it may be, cannot improve upon the formant estimation accuracy of the first stage. The proposed TVQCP method provides single-stage formant tracking that combines the estimation and tracking stages into one. TVQCP analysis combines three approaches to improve formant estimation and tracking: (1) it uses temporally weighted quasi-closed-phase analysis to derive closed-phase estimates of the vocal tract with reduced interference from the excitation source, (2) it increases the residual sparsity by using L1 optimization, and (3) it uses time-varying linear prediction analysis over long time windows (e.g., 100-200 ms) to impose a continuity constraint on the vocal tract model and hence on the formant trajectories. Formant tracking experiments with a wide variety of synthetic and natural speech signals show that the proposed TVQCP method performs better than conventional and popular formant tracking tools, such as Wavesurfer and Praat (based on dynamic programming), the KARMA algorithm (based on Kalman filtering), and DeepFormants (based on deep neural networks trained in a supervised manner). Matlab scripts for the proposed method can be found at: https://github.com/njaygowda/ftrack
ISSN: 2329-9290
EISSN: 2329-9304
Version: Final accepted manuscript
DOI: 10.1109/taslp.2020.3000037
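The abstract's first ingredient, temporally weighted linear prediction, can be illustrated with a minimal sketch: linear prediction whose squared-error criterion is weighted per sample, so that (in the actual method) samples in the quasi-closed phase of the glottal cycle dominate the fit. This is not the authors' code; it is a plain weighted covariance-LP solver with hypothetical function names, and it omits the paper's quasi-closed-phase weighting function, L1 optimization, and time-varying coefficients. With uniform weights it reduces to ordinary covariance linear prediction, and formants are read off as angles of the LP polynomial roots.

```python
import numpy as np

def weighted_lp(signal, order, weights):
    """Weighted linear prediction (covariance-style).

    Minimizes sum_n w[n] * e[n]^2 with the prediction error
    e[n] = s[n] + sum_{k=1..p} a[k] * s[n-k].
    Returns the LP coefficient vector [1, a1, ..., ap].
    """
    s = np.asarray(signal, dtype=float)
    w = np.asarray(weights, dtype=float)
    p = order
    n = np.arange(p, len(s))                       # predictable sample indices
    # Delayed-signal matrix: column k-1 holds s[n-k]
    X = np.column_stack([s[n - k] for k in range(1, p + 1)])
    W = w[n]
    # Weighted normal equations: (X^T W X) a = -X^T W s
    R = X.T @ (X * W[:, None])
    r = X.T @ (W * s[n])
    a = np.linalg.solve(R, -r)
    return np.concatenate(([1.0], a))

def formants_from_lp(a, fs):
    """Formant frequency estimates (Hz) from the LP polynomial roots."""
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]              # one root per conjugate pair
    freqs = np.angle(roots) * fs / (2.0 * np.pi)   # pole angle -> frequency
    return np.sort(freqs)
```

On a synthetic all-pole signal with resonances at, say, 500 Hz and 1500 Hz, `weighted_lp` with uniform weights recovers the resonance frequencies closely; the paper's contribution is in choosing non-uniform, glottal-cycle-aware weights and letting the coefficients vary smoothly over 100-200 ms windows.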