Author: Galbraith, John M.
Advisor: Ziolkowski, Richard W.
Publisher: The University of Arizona.
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract: This dissertation combines recent theoretical models from the neuroscience community with recent advances in parallel computing to implement a large simulation that emulates the motion pathway of a mammalian visual system. The simulation ran in real time and was used to perform real-world obstacle detection and avoidance with an autonomous mobile robot. Data are shown from experimental trials in which the robot navigated an obstacle course, successfully integrating strategic waypoint-finding goals and tactical obstacle-avoidance goals into a single navigational behavior. The simulator is distinguished from many previous robotics efforts by its size and its faithfulness to neuroscience. It employs population-coded representations of motion energy (similar to brain area V1) and velocity (similar to brain area MT). In addition, it implements new features engineered to close the control loop between a mammal's early vision system and a mobile robot. Several problems routinely encountered in robotics experiments are discussed. Novel solutions to these problems, which take specific advantage of the population-coded representations of visual features, are proposed and implemented in the simulator. These results are applicable to future engineering efforts to build special-purpose circuits, analog or digital, for emulating biological information-processing algorithms in robotics and image-understanding tasks.
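The population-coded velocity representation mentioned in the abstract (modeled on brain area MT) is commonly read out with a population vector: each neuron votes for its preferred velocity, weighted by its firing rate. The sketch below illustrates that standard decoding scheme only; it is not the dissertation's implementation, and all names and values are illustrative.

```python
import numpy as np

def population_vector_decode(rates, preferred_velocities):
    """Decode a 2D velocity from a rate-coded neural population.

    rates: firing rates of N neurons, shape (N,)
    preferred_velocities: each neuron's preferred (vx, vy), shape (N, 2)
    Returns the rate-weighted average of the preferred velocities.
    """
    rates = np.asarray(rates, dtype=float)
    prefs = np.asarray(preferred_velocities, dtype=float)
    total = rates.sum()
    if total == 0.0:
        # No activity: no motion signal to decode.
        return np.zeros(2)
    return (rates[:, None] * prefs).sum(axis=0) / total

# Example: four neurons tuned to the cardinal directions.
prefs = [(1, 0), (0, 1), (-1, 0), (0, -1)]
vel = population_vector_decode([3.0, 1.0, 1.0, 1.0], prefs)
# The rightward-tuned neuron fires most, so the decoded velocity points right.
```

A decoder of this general form is one way a robot controller can turn a distributed MT-like activity pattern into a single steering-relevant velocity estimate.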
Degree Program: Graduate College; Electrical and Computer Engineering