
dc.contributor.advisor: Marcellin, Michael W.
dc.contributor.author: Panchapakesan, Kannan
dc.creator: Panchapakesan, Kannan
dc.date.accessioned: 2013-05-09T09:33:30Z
dc.date.available: 2013-05-09T09:33:30Z
dc.date.issued: 2000
dc.identifier.uri: http://hdl.handle.net/10150/289118
dc.description.abstract: Vector quantization (VQ) is an established data compression technique. It has been successfully used to compress signals such as speech, imagery, and video. In recent years, it has been employed to perform various image processing tasks such as edge detection, classification, and volume rendering. The advantage of using VQ depends on the specific task but usually includes memory gain, computational gain, or the inherent compression it offers. Nonlinear interpolative vector quantization (NLIVQ) was introduced as an approach to overcome the curse of dimensionality incurred by an unconstrained, exhaustive-search VQ, especially at high rates. In this dissertation, NLIVQ is modified to accomplish specific image processing tasks. VQ-based techniques are introduced for the following tasks.

(1) Blur identification: VQ encoder distortion is used to identify image blur. The blur is estimated by choosing from a finite set of candidate blur functions. A VQ codebook is trained on images corresponding to each candidate blur, and the blur in an image is then identified as the candidate whose codebook yields the lowest encoder distortion.

(2) Superresolution: Images obtained through a diffraction-limited optical system contain no information beyond a certain cut-off frequency and are therefore limited in their resolution. Superresolution refers to the endeavor of improving the resolution of such images; here it is achieved with an NLIVQ trained on pairs of original and blurred images.

(3) Joint compression and restoration: Combining compression and restoration in one step is useful from the standpoint of memory and computational requirements. An NLIVQ is proposed that performs the restoration entirely in the wavelet transform domain; the training set for the VQ design consists of pairs of original and blurred images.

(4) Combined compression and denoising: Compression of a noisy source is a classic problem that involves the combined tasks of compression and denoising (estimation). A robust NLIVQ technique is presented that first identifies the variance of the noise in an image and then performs simultaneous compression and denoising.

(An illustrative code sketch of the blur-identification and NLIVQ ideas follows this record.)
dc.language.iso: en_US
dc.publisher: The University of Arizona.
dc.rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
dc.subject: Engineering, Electronics and Electrical.
dc.title: Image processing through vector quantization
dc.type: text
dc.type: Dissertation-Reproduction (electronic)
thesis.degree.grantor: University of Arizona
thesis.degree.level: doctoral
dc.identifier.proquest: 9965917
thesis.degree.discipline: Graduate College
thesis.degree.discipline: Electrical and Computer Engineering
thesis.degree.name: Ph.D.
dc.identifier.bibrecord: .b40482431
refterms.dateFOA: 2018-06-23T16:56:49Z
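The abstract above describes two mechanisms concretely enough to sketch: blur identification by comparing VQ encoder distortions across per-blur codebooks (task 1), and NLIVQ-style restoration, in which a degraded block is encoded with one codebook and decoded with a paired codebook built from the corresponding original blocks (tasks 2 and 3). The Python/NumPy sketch below is illustrative only: the block size, codebook shapes, and function names (extract_blocks, identify_blur, nlivq_restore) are assumptions rather than the dissertation's implementation, the codebooks are assumed to have been trained separately (e.g., with the generalized Lloyd algorithm), and the restoration step is shown in the pixel domain even though the dissertation's joint compression and restoration operates in the wavelet transform domain.

import numpy as np

def extract_blocks(image, block=4):
    """Split a 2-D grayscale image into non-overlapping (block x block) vectors."""
    h, w = image.shape
    h, w = h - h % block, w - w % block
    return (image[:h, :w]
            .reshape(h // block, block, w // block, block)
            .swapaxes(1, 2)
            .reshape(-1, block * block)
            .astype(np.float64))

def encoder_distortion(image, codebook, block=4):
    """Mean squared distortion when the image's blocks are encoded with `codebook`
    (an array of shape [codebook_size, block * block])."""
    vectors = extract_blocks(image, block)
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

def identify_blur(image, codebooks):
    """Task 1: pick the candidate blur whose codebook gives the lowest encoder
    distortion. `codebooks` maps a blur label to a codebook trained on images
    degraded by that blur."""
    return min(codebooks, key=lambda blur: encoder_distortion(image, codebooks[blur]))

def nlivq_restore(degraded, enc_codebook, dec_codebook, block=4):
    """Tasks 2-3 (simplified): encode each degraded block with `enc_codebook`,
    then output the paired codeword from `dec_codebook`, whose entries were
    built from the corresponding original (undegraded) blocks."""
    h, w = degraded.shape
    h, w = h - h % block, w - w % block
    vectors = extract_blocks(degraded, block)
    idx = ((vectors[:, None, :] - enc_codebook[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    return (dec_codebook[idx]
            .reshape(h // block, w // block, block, block)
            .swapaxes(1, 2)
            .reshape(h, w))

For example, identify_blur(image, {'blur_a': codebook_a, 'blur_b': codebook_b}) returns the label of the best-matching candidate, and nlivq_restore can then be applied with the encoder/decoder codebook pair associated with that label.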


Files in this item

Name: azu_td_9965917_sip1_m.pdf
Size: 1.948 MB
Format: PDF
