Show simple item record

dc.contributor.advisor: Sundareshan, Malur K. (en_US)
dc.contributor.author: Bhalala, Smita Ashesh, 1966-
dc.creator: Bhalala, Smita Ashesh, 1966- (en_US)
dc.date.accessioned: 2013-04-03T13:08:06Z
dc.date.available: 2013-04-03T13:08:06Z
dc.date.issued: 1991 (en_US)
dc.identifier.uri: http://hdl.handle.net/10150/277969
dc.description.abstract: There have been several innovative approaches towards realizing an intelligent architecture that utilizes artificial neural networks for applications in information processing. The development of supervised training rules for updating the adjustable parameters of neural networks has received extensive attention in the recent past. In this study, specific learning algorithms utilizing a modified Newton's method for the optimization of the adjustable parameters of a dynamical neural network are developed. Computer simulation results show that the convergence performance of the proposed learning schemes matches very closely that of the LMS learning algorithm for applications in the design of associative memories and nonlinear mapping problems. However, the implementation of the modified Newton's method is more complex because it requires computing the slope of the nonlinear sigmoidal function, whereas the LMS algorithm approximates that slope as zero. (en_US)
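The abstract contrasts an update rule that computes the slope of the sigmoidal nonlinearity with an LMS-style rule that approximates it away. The sketch below is purely illustrative and is not the thesis's algorithm: the single-layer network, the synthetic data, the learning rate, and both update rules are assumptions chosen only to show the cost/accuracy trade-off the abstract describes (per-sample slope evaluation versus an error-only step).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_slope(x):
    # Derivative of the sigmoid: s(x) * (1 - s(x)).
    s = sigmoid(x)
    return s * (1.0 - s)

# Hypothetical toy problem: fit weights w so that sigmoid(X @ w)
# matches targets generated from a known weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
Y = sigmoid(X @ w_true)

def train(X, Y, use_slope, lr=0.5, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        z = X @ w
        err = sigmoid(z) - Y
        if use_slope:
            # Slope-aware update: weight each sample's error by the
            # sigmoid derivative at its operating point (the extra
            # per-sample computation the abstract attributes to the
            # modified Newton's method).
            grad = X.T @ (err * sigmoid_slope(z)) / len(Y)
        else:
            # LMS-style update: drop the nonlinearity's slope, so only
            # the raw output error drives the step.
            grad = X.T @ err / len(Y)
        w -= lr * grad
    return w

w_lms = train(X, Y, use_slope=False)
w_slope = train(X, Y, use_slope=True)
```

Both variants reduce the output error on this toy problem; the slope-aware rule does strictly more arithmetic per sample, which is the implementation-complexity trade-off the abstract notes.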
dc.language.iso: en_US (en_US)
dc.publisher: The University of Arizona. (en_US)
dc.rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author. (en_US)
dc.subject: Engineering, Electronics and Electrical. (en_US)
dc.subject: Artificial Intelligence. (en_US)
dc.subject: Computer Science. (en_US)
dc.title: Modified Newton's method for supervised training of dynamical neural networks for applications in associative memory and nonlinear identification problems (en_US)
dc.type: text (en_US)
dc.type: Thesis-Reproduction (electronic) (en_US)
thesis.degree.grantor: University of Arizona (en_US)
thesis.degree.level: masters (en_US)
dc.identifier.proquest: 1345608 (en_US)
thesis.degree.discipline: Graduate College (en_US)
thesis.degree.name: M.S. (en_US)
dc.identifier.bibrecord: .b27056028 (en_US)
refterms.dateFOA: 2018-06-17T23:52:36Z


Files in this item

Name: azu_td_1345608_sip1_m.pdf
Size: 2.535 MB
Format: PDF

This item appears in the following Collection(s)
