New Dimension Reduction Methods for Quadratic Discriminant Analysis
Publisher
The University of Arizona.
Rights
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract
Discriminant analysis (DA), including linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA), is a classical and popular approach to classification problems. It is well known that LDA is suboptimal for analyzing heteroscedastic data, for which QDA would be an ideal tool. However, QDA fails when the dimension of the data is moderate or high. In this dissertation, we focus on heteroscedastic data and propose two new prediction-oriented dimension reduction methods for QDA. The first method aims to find the optimal one-dimensional subspace for projection. It can handle data heteroscedasticity with the same number of parameters as LDA, leading to robust classification results for data sets of moderate dimension. We establish an estimation consistency property of the method. The second method aims to find the optimal subspace for projection without information loss. We propose a scalable algorithm to approximate this subspace and the associated projection via supervised principal component analysis (PCA). It has time complexity linear in the dimension when the sample size is bounded and is therefore suitable for classifying high-dimensional data. Finally, we compare our methods with LDA, QDA, regularized discriminant analysis (RDA), several modern sparse DA methods, and a two-step unsupervised PCA-based QDA method on simulated and real data examples.
Type
text
Electronic Dissertation
Degree Name
Ph.D.
Degree Level
doctoral
Degree Program
Graduate College
Mathematics
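
As context for the abstract above: QDA fits a separate mean vector and covariance matrix for each class, so in p dimensions each class requires on the order of p(p+1)/2 covariance parameters, which is why plain QDA breaks down when p is moderate or large and why reducing the dimension before applying QDA helps. The sketch below is a minimal illustration of the generic two-step baseline named in the abstract (unsupervised PCA followed by QDA), not the dissertation's proposed methods; it uses scikit-learn, and the synthetic data, number of components, and all other parameter choices are assumptions made for the example.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Illustration only: a two-step unsupervised PCA + QDA baseline, not the
# dissertation's new methods. All sizes below are arbitrary choices.
rng = np.random.default_rng(0)

# Synthetic heteroscedastic data in p = 200 dimensions: the two classes
# differ in both mean and covariance scale, so QDA (rather than LDA) is
# the natural model after dimension reduction.
p, n = 200, 300
X0 = rng.normal(loc=0.0, scale=1.0, size=(n, p))
X1 = rng.normal(loc=0.5, scale=2.0, size=(n, p))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Project onto a few principal components, then fit QDA on the scores.
clf = make_pipeline(PCA(n_components=5), QuadraticDiscriminantAnalysis())
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

Because PCA in this baseline ignores the class labels, it may discard directions that separate the classes; the abstract's second method instead obtains the projection via a supervised PCA.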