
dc.contributor.advisor: Cao, Siyang
dc.contributor.author: Sengupta, Arindam
dc.creator: Sengupta, Arindam
dc.date.accessioned: 2022-05-19T18:59:44Z
dc.date.available: 2022-05-19T18:59:44Z
dc.date.issued: 2022
dc.identifier.citation: Sengupta, Arindam. (2022). Enhancing mmWave Radar Capabilities using Sensor-Fusion and Machine Learning (Doctoral dissertation, University of Arizona, Tucson, USA).
dc.identifier.uri: http://hdl.handle.net/10150/664286
dc.description.abstract: Millimeter-wave (mmWave) radars are now ubiquitous in multi-sensor applications due to their compact size, precision, penetrability, and privacy compliance, unlike their optical counterparts. However, optical sensors' superior resolution and the wide availability of image datasets have led to rapid development of machine learning solutions built on them, relegating mmWave radars to the role of a secondary sensor. This dissertation presents a compilation of novel methods that aim to enhance mmWave radar capabilities using sensor-fusion and machine learning approaches, targeting the healthcare, military, and autonomous-perception domains. First, skeletal pose estimation techniques are presented that detect 15-25 key-points with a 3-D localization error of under 3 cm; potential applications include patient/elderly monitoring, gait analysis and recognition, and pedestrian monitoring. Second, an automatic radar labeling scheme is presented to encourage rapid development of radar-image datasets for autonomous perception. This study also employed a sensor-fusion feature vector and a 12-dimensional radar feature vector for object classification, offering accuracies of 98% and 92%, respectively, in a vehicle-vs-pedestrian detection study. Finally, two target-tracking approaches, one DNN-LSTM based and one tri-Kalman-filter based, were explored using radar-camera sensor fusion; the resulting system not only improved localization accuracy but was also robust to single-sensor failures. The DNN-LSTM based tracker required no prior calibration between radar and camera, and was crucial in determining the localization variance of the individual sensors. The tri-Kalman-filter based approach used those findings for multi-object tracking, achieving 26 cm precision, comparable with the state of the art, with under 4% missed detections, a significant improvement over the greater-than-16% false-negative rate reported in the literature. The methods presented in this study significantly aid perception and make autonomous systems safer.
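
To illustrate the fusion idea the abstract describes, the sketch below shows how a single constant-velocity Kalman filter can fuse radar and camera position measurements as sequential updates and keep tracking when one sensor drops out. This is a minimal illustration of the general technique, not the dissertation's tri-Kalman-filter design; the motion model, measurement model, and all noise values are assumed placeholders.

    # Minimal sketch (not the dissertation's implementation) of Kalman-filter
    # based radar-camera track fusion: a constant-velocity filter that applies
    # each available sensor's (x, y) measurement as a sequential update, so a
    # single-sensor failure degrades rather than breaks the track.
    import numpy as np

    class FusionKF:
        def __init__(self, dt=0.1):
            # State: [x, y, vx, vy]; constant-velocity motion model.
            self.x = np.zeros(4)
            self.P = np.eye(4)
            self.F = np.eye(4)
            self.F[0, 2] = self.F[1, 3] = dt
            self.Q = 0.01 * np.eye(4)                 # process noise (assumed)
            self.H = np.array([[1., 0., 0., 0.],
                               [0., 1., 0., 0.]])     # both sensors measure (x, y)
            self.R_radar = np.diag([0.05, 0.05])      # radar noise (assumed)
            self.R_cam = np.diag([0.10, 0.10])        # camera noise (assumed)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q

        def _update(self, z, R):
            y = z - self.H @ self.x                   # innovation
            S = self.H @ self.P @ self.H.T + R
            K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

        def step(self, z_radar=None, z_camera=None):
            # Sequential fusion: each reporting sensor refines the estimate;
            # a None measurement (sensor dropout) is simply skipped.
            self.predict()
            if z_radar is not None:
                self._update(np.asarray(z_radar), self.R_radar)
            if z_camera is not None:
                self._update(np.asarray(z_camera), self.R_cam)
            return self.x[:2]

    kf = FusionKF()
    print(kf.step(z_radar=[1.0, 2.0], z_camera=[1.1, 1.9]))  # both sensors report
    print(kf.step(z_radar=None, z_camera=[1.2, 2.1]))        # radar dropout

Applying the two updates sequentially is equivalent to a joint update with a stacked measurement vector when the sensor noises are independent, which is why the track survives either sensor failing on a given step.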
dc.language.iso: en
dc.publisher: The University of Arizona.
dc.rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: mmWave Radar
dc.subject: Neural Networks
dc.subject: Perception
dc.subject: Sensor Fusion
dc.subject: Skeletal Pose
dc.subject: Tracking
dc.title: Enhancing mmWave Radar Capabilities using Sensor-Fusion and Machine Learning
dc.type: text
dc.type: Electronic Dissertation
thesis.degree.grantor: University of Arizona
thesis.degree.level: doctoral
dc.contributor.committeemember: Ditzler, Gregory
dc.contributor.committeemember: Rozenblit, Jerzy
dc.contributor.committeemember: Bilgin, Ali
thesis.degree.discipline: Graduate College
thesis.degree.discipline: Electrical & Computer Engineering
thesis.degree.name: Ph.D.
refterms.dateFOA: 2022-05-19T18:59:44Z


Files in this item

Name: azu_etd_19486_sip1_m.pdf
Size: 20.70 MB
Format: PDF
