Reinforcement Learning Methods for Autonomous Spacecraft Guidance, Navigation, and Control
| dc.contributor.advisor | Furfaro, Roberto | |
| dc.contributor.author | Scorsoglio, Andrea | |
| dc.creator | Scorsoglio, Andrea | |
| dc.date.accessioned | 2024-06-06T00:03:23Z | |
| dc.date.available | 2024-06-06T00:03:23Z | |
| dc.date.issued | 2024 | |
| dc.identifier.citation | Scorsoglio, Andrea. (2024). Reinforcement Learning Methods for Autonomous Spacecraft Guidance, Navigation, and Control (Doctoral dissertation, University of Arizona, Tucson, USA). | |
| dc.identifier.uri | http://hdl.handle.net/10150/672497 | |
| dc.description.abstract | Machine learning is a rapidly growing field that has the potential to revolutionize many sectors of the economy and research. In this context, the increased interest in machine learning for space applications is driven by its potential to enable greater autonomy and flexibility. Indeed, machine learning can be used to develop onboard systems that autonomously perform guidance, navigation, and control tasks, reducing the need for human involvement. This can lead to increased reliability and reduced costs in ground operations. Machine learning algorithms can process large amounts of multimodal data from various sensors, such as cameras, inertial measurement units, and attitude sensors. This is particularly important for future missions that require precise landing to access valuable resources or to establish a human presence on planetary bodies such as the Moon, Mars, asteroids, and comets. Machine learning can also be used to develop onboard systems that adapt to uncertain and complex scenarios, such as unexpected obstacles or system failures, leading to increased robustness and reliability. This dissertation aims to demonstrate the viability of autonomous real-time guidance and control based on neural networks in complex, constrained, and uncertain environments with multimodal inputs, spanning applications from relative motion to planetary landing, and to study their robustness and performance. Reinforcement learning and meta-reinforcement learning are used, together with a new method for autonomous hazard avoidance and landing site selection based on convolutional neural networks, to increase autonomy and robustness by embedding navigation, guidance, and control in a single self-contained system, exploiting the outstanding mapping capabilities of neural networks and advanced training algorithms. These applications are also enabled by a new tool, named VisualEnv, created specifically as part of this dissertation, which is capable of generating 3D space environments with accurate rendering and integration with reinforcement learning algorithms. | |
| dc.language.iso | en | |
| dc.publisher | The University of Arizona. | |
| dc.rights | Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author. | |
| dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | |
| dc.subject | Artificial Intelligence | |
| dc.subject | Deep Learning | |
| dc.subject | Neural Networks | |
| dc.subject | Reinforcement Learning | |
| dc.subject | Spacecraft Guidance Navigation and Control | |
| dc.title | Reinforcement Learning Methods for Autonomous Spacecraft Guidance, Navigation, and Control | |
| dc.type | Electronic Dissertation | |
| dc.type | text | |
| thesis.degree.grantor | University of Arizona | |
| thesis.degree.level | doctoral | |
| dc.contributor.committeemember | Head, Larry | |
| dc.contributor.committeemember | Curti, Fabio | |
| dc.contributor.committeemember | Butcher, Eric | |
| dc.description.release | Release after 05/10/2025 | |
| thesis.degree.discipline | Graduate College | |
| thesis.degree.discipline | Systems & Industrial Engineering | |
| thesis.degree.name | Ph.D. |
