Enhancing mmWave Radar Capabilities using Sensor-Fusion and Machine Learning
Publisher: The University of Arizona.
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract: Millimeter-wave (mmWave) radars are now ubiquitous in multi-sensor applications due to their compact size, precision, penetrability, and privacy compliance, unlike their optical counterparts. However, the superior resolution of optical sensors and the wide availability of image datasets have driven rapid development of machine learning solutions built on them, relegating mmWave radars to the role of a secondary sensor. This dissertation presents a compilation of novel methods that enhance mmWave radar capabilities using sensor fusion and machine learning, targeting the healthcare, military, and autonomous-perception domains. First, skeletal pose-estimation techniques are presented that detect 15-25 keypoints with a 3-D localization error of <3 cm; potential applications include patient/elderly monitoring, gait analysis and recognition, and pedestrian monitoring. Second, an automatic radar-labeling scheme is presented to encourage rapid development of radar-image datasets for autonomous perception. This study also employed a sensor-fusion feature vector and a 12-dimensional radar feature vector for object classification, achieving accuracies of 98% and 92%, respectively, in a vehicle-vs-pedestrian detection study. Finally, a DNN-LSTM-based and a tri-Kalman-filter-based target-tracking approach were explored using radar-camera sensor fusion; the resulting system not only improved localization accuracy but was also robust to single-sensor failures. The advantage of the DNN-LSTM-based tracker is that it required no prior calibration between radar and camera, and it was crucial for determining the localization variance offered by each individual sensor. The tri-Kalman-filter approach used those findings for multi-object tracking, offering 26 cm precision, comparable with the state of the art, with <4% missed detections - a significant improvement over the >16% false-negative rate (FNR) reported in the literature.
The methods presented in this study significantly aid perception and make autonomous systems safer.
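To make the radar-camera fusion idea behind the tracking work concrete: the dissertation's actual tri-Kalman-filter architecture is not detailed in this abstract, but the core mechanism it builds on is a Kalman filter that sequentially fuses position measurements from sensors with different noise variances. The sketch below is a minimal, illustrative single-filter example with assumed constant-velocity dynamics and assumed noise values; it is not the author's implementation.

```python
import numpy as np

# Illustrative sketch (not the dissertation's implementation): one
# constant-velocity Kalman filter fusing 1-D position measurements from
# two sensors ("radar" and "camera") with different assumed variances.

dt = 0.1                                   # time step [s] (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for [pos, vel]
H = np.array([[1.0, 0.0]])                 # both sensors observe position only
Q = np.diag([1e-4, 1e-3])                  # process-noise covariance (assumed)
R_radar, R_cam = 0.05**2, 0.20**2          # sensor variances [m^2] (assumed)

x = np.array([0.0, 1.0])                   # initial state: 0 m, 1 m/s
P = np.eye(2)                              # initial state covariance

def predict(x, P):
    """Propagate state and covariance one step forward."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, r):
    """Incorporate one position measurement z with variance r."""
    S = H @ P @ H.T + r                    # innovation covariance (1x1)
    K = P @ H.T / S                        # Kalman gain (2x1)
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
for k in range(50):
    x, P = predict(x, P)
    truth = 1.0 * (k + 1) * dt             # simulated true target position
    # Sequential fusion: apply each sensor's measurement in turn.
    x, P = update(x, P, truth + rng.normal(0, 0.05), R_radar)
    x, P = update(x, P, truth + rng.normal(0, 0.20), R_cam)

print(f"estimated position: {x[0]:.2f} m (truth: {truth:.2f} m)")
```

The key property the fusion exploits is that each update weights a sensor by its (inverse) variance, so the fused position variance ends up below that of the best single sensor, and the filter degrades gracefully rather than failing if one sensor's measurements are withheld.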
Degree Program: Graduate College
Electrical & Computer Engineering