Reinforcement Metalearning for Interception of Maneuvering Exoatmospheric Targets with Parasitic Attitude Loop
Affiliation: University of Arizona, Department of Systems and Industrial Engineering
University of Arizona, Department of Aerospace and Mechanical Engineering
Citation: Gaudet, B., Furfaro, R., Linares, R., & Scorsoglio, A. (2021). Reinforcement Metalearning for Interception of Maneuvering Exoatmospheric Targets with Parasitic Attitude Loop. Journal of Spacecraft and Rockets, 58(2), 386-399.
Rights: Copyright © 2020 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
Abstract: This paper uses reinforcement meta-learning to optimize an adaptive integrated guidance, navigation, and control system suitable for exoatmospheric interception of a maneuvering target. The system maps observations consisting of strapdown seeker angles and rate-gyroscope measurements directly to thruster on/off commands. Using a high-fidelity six-degree-of-freedom simulator, the paper demonstrates that the optimized policy can adapt to parasitic effects including seeker angle measurement lag, thruster control lag, the parasitic attitude loop resulting from scale-factor errors and Gaussian noise on angle and rotational velocity measurements, and a time-varying center of mass caused by fuel consumption and slosh. Importantly, the optimized policy gives good performance over a wide range of challenging target maneuvers. Unlike previous work that enhances range observability by inducing line-of-sight oscillations, the system presented here is optimized to use only measurements available from the seeker and rate gyros. Through extensive Monte Carlo simulation of randomized exoatmospheric interception scenarios, the paper demonstrates that the optimized policy gives performance close to that of augmented proportional navigation with perfect knowledge of the full engagement state. The optimized system is computationally efficient, requires minimal memory, and should be compatible with today's flight processors.
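The abstract compares the learned policy against augmented proportional navigation (APN) with perfect state knowledge. As background, the classical planar APN command can be sketched as below; this is a minimal illustration of the standard guidance law, not the paper's implementation, and the function name, planar formulation, and navigation gain N = 4 are assumptions for illustration only.

```python
import numpy as np

def apn_accel_command(r_rel, v_rel, a_target_perp=0.0, N=4.0):
    """Planar augmented proportional navigation (APN) command.

    Illustrative sketch only. Inputs:
      r_rel         -- 2-D target position relative to interceptor [m]
      v_rel         -- 2-D target velocity relative to interceptor [m/s]
      a_target_perp -- target acceleration perpendicular to the LOS [m/s^2]
      N             -- navigation gain (dimensionless)
    Returns the commanded acceleration perpendicular to the line of sight:
      a_cmd = N * Vc * lambda_dot + (N / 2) * a_target_perp
    """
    r_rel = np.asarray(r_rel, dtype=float)
    v_rel = np.asarray(v_rel, dtype=float)
    r = np.linalg.norm(r_rel)
    # Line-of-sight rotation rate: z-component of (r x v) / |r|^2
    lambda_dot = (r_rel[0] * v_rel[1] - r_rel[1] * v_rel[0]) / r**2
    # Closing velocity (positive when the range is decreasing)
    v_c = -np.dot(r_rel, v_rel) / r
    # Classical PN term plus the target-acceleration augmentation
    return N * v_c * lambda_dot + 0.5 * N * a_target_perp
```

For a near-head-on geometry, e.g. `apn_accel_command([1000.0, 0.0], [-100.0, 10.0])`, the LOS rate is 0.01 rad/s and the closing velocity 100 m/s, so the command is 4 m/s^2; note the learned policy in the paper issues discrete thruster on/off commands rather than a continuous acceleration, so APN serves only as a performance benchmark.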
Version: Final accepted manuscript