dc.contributor.advisor: Hunt, Bobby R.
dc.contributor.author: Sheppard, David Glen, 1962-
dc.creator: Sheppard, David Glen, 1962-
dc.date.accessioned: 2013-04-18T09:40:54Z
dc.date.available: 2013-04-18T09:40:54Z
dc.date.issued: 1997
dc.identifier.uri: http://hdl.handle.net/10150/282324
dc.description.abstract: Images acquired by ground-based telescopes are severely degraded by atmospheric turbulence effects. New algorithms are presented for restoration with super-resolution of satellite object images from sequences of turbulence-degraded observations. Super-resolution refers to recovery of Fourier spectral components outside the optical system passband. Modern wave front sensors (WFS) can measure the optical distortions caused by the atmosphere. Such measurements can be used (1) to control an adaptive optics (AO) system, (2) for post-processing of the uncompensated image, and (3) in a hybrid approach involving partially compensated images. This study focuses on the second of these approaches. Quantitative simulations of imaging through turbulence and of the WFS are used to demonstrate the performance of new super-resolving multiframe algorithms based on a Bayes maximum a posteriori (MAP) criterion. The original and object images are assumed to have Poisson statistics. The resulting Poisson MAP algorithms extend the single-frame version to the multiframe case. Super-resolution is demonstrated under realistic conditions. In the blind deconvolution problem, both the original image and the degradations must be derived simultaneously from the recorded images, without the aid of a WFS. We investigate this problem and propose a new multiframe algorithm based on Bayes maximum likelihood. Strict constraints such as positivity and finite bandwidth are incorporated using nonlinear reparameterizations. Nonlinear conjugate gradient techniques are employed, along with an implementation on the massively parallel IBM SP2, to meet the computational demands of these algorithms. Super-resolution is demonstrated for realistic circumstances. On a related subject, nonlinear interpolative vector quantization (NLIVQ) is presented as a tool for the novel application of vector quantization (VQ) to super-resolution of diffraction-limited images. The algorithm is trained on a large set of image pairs, each consisting of an original image and its diffraction-limited counterpart, and exploits the statistical dependence between blocks of pixels in the two images. The discrete cosine transform (DCT) is used to manage codebook complexity and simplify training. Simulation results are presented that demonstrate improvements in visual quality and peak signal-to-noise ratio. A study of the restored image spectra reveals modest super-resolution. The prospects for this technique are promising. (Illustrative code sketches of two of these ideas follow the metadata record below.)
dc.language.iso: en_US
dc.publisher: The University of Arizona.
dc.rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
dc.subject: Engineering, Electronics and Electrical.
dc.subject: Remote Sensing.
dc.title: Image super-resolution: Iterative multiframe algorithms and training of a nonlinear vector quantizer
dc.type: text
dc.type: Dissertation-Reproduction (electronic)
thesis.degree.grantor: University of Arizona
thesis.degree.level: doctoral
dc.identifier.proquest: 9729489
thesis.degree.discipline: Graduate College
thesis.degree.discipline: Electrical and Computer Engineering
thesis.degree.name: Ph.D.
dc.description.note: This item was digitized from a paper original and/or a microfilm copy. If you need higher-resolution images for any content in this item, please contact us at repository@u.library.arizona.edu.
dc.identifier.bibrecord: .b34812295
dc.description.admin-note: Original file replaced with corrected file October 2023.
refterms.dateFOA: 2018-06-27T19:11:28Z
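
For illustration only, the following is a minimal Python sketch of a multiframe multiplicative (Richardson-Lucy-type) restoration update of the general kind described in the abstract. It is not the dissertation's Poisson MAP algorithm: the prior term, the WFS-derived point-spread functions, and any super-resolution constraints are omitted, and the function name, inputs, and flat starting estimate are assumptions made for this sketch.

```python
import numpy as np
from numpy.fft import fft2, ifft2, ifftshift

def multiframe_multiplicative_restore(frames, psfs, n_iter=50, eps=1e-12):
    """Estimate one object from several degraded frames g_k with PSFs h_k.

    frames : list of 2-D observed images (Poisson-noise data).
    psfs   : list of 2-D point-spread functions, one per frame,
             centered in the array and normalized to unit sum.
    The multiplicative update keeps the estimate non-negative.
    """
    f = np.full(frames[0].shape, float(np.mean(frames[0])))         # flat, positive start
    otfs = [fft2(ifftshift(h)) for h in psfs]                        # precompute OTFs
    for _ in range(n_iter):
        correction = np.zeros_like(f)
        for g, H in zip(frames, otfs):
            model = np.real(ifft2(fft2(f) * H))                      # blurred estimate h_k * f
            ratio = g / np.maximum(model, eps)                       # data-to-model ratio
            correction += np.real(ifft2(fft2(ratio) * np.conj(H)))   # correlate ratio with h_k
        f *= correction / len(frames)                                # average correction over frames
    return f
```

In the dissertation's setting, each point-spread function would be derived from wave front sensor measurements and the Poisson MAP prior would modify the update; those elements are deliberately left out of this sketch.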
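
The abstract also mentions enforcing strict positivity through a nonlinear reparameterization and solving with nonlinear conjugate gradients. A minimal single-frame, known-PSF sketch of that idea follows; the f = z**2 substitution, the Poisson negative log-likelihood, and the use of SciPy's conjugate-gradient minimizer are illustrative assumptions, not the dissertation's blind, multiframe implementation.

```python
import numpy as np
from numpy.fft import fft2, ifft2, ifftshift
from scipy.optimize import minimize

def poisson_nll(z_flat, g, otf, shape, eps=1e-12):
    """Poisson negative log-likelihood of the object f = z**2 (positive by construction)."""
    f = z_flat.reshape(shape) ** 2
    model = np.maximum(np.real(ifft2(fft2(f) * otf)), eps)           # blurred estimate h * f
    return float(np.sum(model - g * np.log(model)))

def poisson_nll_grad(z_flat, g, otf, shape, eps=1e-12):
    """Gradient of the same objective with respect to z (chain rule through f = z**2)."""
    z = z_flat.reshape(shape)
    model = np.maximum(np.real(ifft2(fft2(z ** 2) * otf)), eps)
    resid = 1.0 - g / model                                          # d(NLL)/d(model)
    dL_df = np.real(ifft2(fft2(resid) * np.conj(otf)))               # back through the convolution
    return (2.0 * z * dL_df).ravel()

def restore_positive(g, psf, n_iter=200):
    """Nonlinear-CG restoration of one frame g with a known, centered, unit-sum PSF."""
    otf = fft2(ifftshift(psf))
    z0 = np.sqrt(np.full(g.shape, float(np.mean(g)))).ravel()        # flat positive start
    res = minimize(poisson_nll, z0, args=(g, otf, g.shape),
                   jac=poisson_nll_grad, method="CG",
                   options={"maxiter": n_iter})
    return res.x.reshape(g.shape) ** 2                               # map back to the image domain
```

A finite-bandwidth constraint, the other reparameterization the abstract alludes to, could be imposed in the same spirit by optimizing over a band-limited set of coefficients rather than over z directly.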


Files in this item

Name: azu_td_9729489_sip1_c.pdf
Size: 17.78 MB
Format: PDF
