

dc.contributor.advisor: Hunt, Bobby R.
dc.contributor.author: Davila, Carlos Antonio
dc.creator: Davila, Carlos Antonio
dc.date.accessioned: 2013-04-25T10:15:33Z
dc.date.available: 2013-04-25T10:15:33Z
dc.date.issued: 1999
dc.identifier.uri: http://hdl.handle.net/10150/284549
dc.description.abstract: Super-resolution is the process by which the bandwidth of a diffraction-limited spectrum is extended beyond the optical passband. Many algorithms exist which are capable of super-resolution; however, most are iterative methods, which are ill-suited for real-time operation. One approach that has been virtually ignored in super-resolution research is the neural network approach. The Hopfield network has been a popular choice in image restoration applications; however, it is also an iterative approach. We consider the feedforward architecture known as a Multilayer Perceptron (MLP), and present results on simulated binary and greyscale images blurred by a diffraction-limited OTF and sampled at the Nyquist rate. To avoid aliasing, the network performs as a nonlinear spatial interpolator while simultaneously extrapolating in the frequency domain. Additionally, a novel use of vector quantization for the generation of training data sets is presented. This is accomplished by training a nonlinear vector quantizer (NLIVQ), whose codebooks are subsequently used in the supervised training of the MLP network using Back-Propagation. The network shows good regularization in the presence of noise.
dc.language.iso: en_US
dc.publisher: The University of Arizona.
dc.rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
dc.subject: Engineering, Electronics and Electrical.
dc.subject: Physics, Optics.
dc.title: Image super-resolution performance of multilayer feedforward neural networks
dc.type: text
dc.type: Dissertation-Reproduction (electronic)
thesis.degree.grantor: University of Arizona
thesis.degree.level: doctoral
dc.identifier.proquest: 9934855
thesis.degree.discipline: Graduate College
thesis.degree.discipline: Electrical and Computer Engineering
thesis.degree.name: Ph.D.
dc.description.note: This item was digitized from a paper original and/or a microfilm copy. If you need higher-resolution images for any content in this item, please contact us at repository@u.library.arizona.edu.
dc.identifier.bibrecord: .b39652245
dc.description.admin-note: Original file replaced with corrected file September 2023.
refterms.dateFOA: 2018-07-03T00:26:00Z
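As a rough illustration of the pipeline the abstract describes, and not the dissertation's actual network, data, or training configuration, the sketch below trains a one-hidden-layer MLP with plain back-propagation to map blurred, subsampled 1-D patches onto a 2x finer grid, so that spatial interpolation implies frequency-domain extrapolation. The moving-average blur standing in for the diffraction-limited OTF, the synthetic signals, and all layer sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 8, 16, 16   # blurred coarse patch -> 2x finer patch
LR, EPOCHS = 0.2, 1500

def blur(x):
    """Crude low-pass stand-in for a diffraction-limited OTF (assumption)."""
    k = np.array([0.25, 0.5, 0.25])
    return np.convolve(x, k, mode="same")

def make_batch(n):
    """Synthetic pairs: smooth fine-grid targets and their blurred,
    2x-subsampled observations (illustrative training data, not the
    dissertation's NLIVQ codebook construction)."""
    fine = np.apply_along_axis(blur, 1, rng.standard_normal((n, N_OUT)))
    coarse = np.apply_along_axis(blur, 1, fine)[:, ::2]
    return coarse, fine

# Small random weight initialization for a tanh hidden layer + linear output.
W1 = rng.standard_normal((N_IN, N_HID)) * 0.3
b1 = np.zeros(N_HID)
W2 = rng.standard_normal((N_HID, N_OUT)) * 0.3
b2 = np.zeros(N_OUT)

X, Y = make_batch(256)
for _ in range(EPOCHS):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    P = H @ W2 + b2                   # linear output layer
    err = P - Y                       # residual on the fine grid
    # Back-propagation of the mean-squared-error gradient
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1.0 - H**2)  # tanh derivative
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= LR * gW2; b2 -= LR * gb2
    W1 -= LR * gW1; b1 -= LR * gb1

# Training MSE; should end up well below the trivial zero-predictor's error.
mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
```

The network learns a nonlinear interpolator from coarse observations to the finer grid; swapping the synthetic batch generator for codebook-derived training pairs would move the sketch closer to the vector-quantization scheme the abstract outlines.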


Files in this item

Name: azu_td_9934855_sip1_c.pdf
Size: 47.31 MB
Format: PDF

