Robust Multiobject Tracking Using Mmwave Radar-Camera Sensor Fusion
Affiliation
Department of Electrical and Computer Engineering, University of Arizona
Issue Date
2022-10
Keywords
Cameras
Kalman filters
MmWave radar
perception
Radar
Radar detection
Radar imaging
Radar tracking
sensor-fusion
Sensors
tracking
Citation
Sengupta, A., Cheng, L., & Cao, S. (2022). Robust Multi-Object Tracking Using Mmwave Radar-Camera Sensor Fusion. IEEE Sensors Letters, 1–4.
Journal
IEEE Sensors Letters
Rights
Copyright © IEEE Sensors Letters.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
With the recent surge in the autonomous driving and automotive industries, sensor-fusion-based perception has garnered significant attention for multi-object classification and tracking applications. Furthering our previous work on sensor-fusion-based multi-object classification, this letter presents a robust tracking framework using high-level fusion of a monocular camera and a millimeter-wave (mmWave) radar. The proposed method improves localization accuracy by combining the radar's depth resolution with the camera's cross-range resolution through decision-level sensor fusion, and keeps the system robust by continuously tracking objects despite single-sensor failures using a tri-Kalman-filter setup. The camera's intrinsic calibration parameters and the height of the sensor placement are used to estimate a bird's-eye view of the scene, which in turn aids in estimating the 2-D positions of the targets from the camera. The radar and camera measurements in a given frame are associated using the Hungarian algorithm. Finally, a tri-Kalman-filter-based framework is used as the tracking approach. The proposed approach offers promising MOTA and MOTP metrics, including significantly low missed-detection rates, which could aid large-scale and small-scale autonomous or robotics applications requiring safe perception.
Note
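The abstract states that radar and camera measurements in a given frame are associated using the Hungarian algorithm. A minimal sketch of that association step, assuming a Euclidean-distance cost between 2-D position measurements and an illustrative gating threshold (the `gate` parameter and the use of SciPy's `linear_sum_assignment` are assumptions for illustration, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(radar_xy, camera_xy, gate=2.0):
    """Match radar and camera 2-D position measurements for one frame.

    Builds a Euclidean-distance cost matrix and solves the optimal
    assignment with the Hungarian algorithm; matched pairs farther
    apart than `gate` (metres, illustrative) are discarded.
    Returns a list of (radar_index, camera_index) pairs.
    """
    # cost[i, j] = distance between radar detection i and camera detection j
    cost = np.linalg.norm(radar_xy[:, None, :] - camera_xy[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]

# Example: two radar and two camera detections in (x, y) metres.
radar = np.array([[1.0, 5.0], [4.0, 9.0]])
camera = np.array([[3.9, 9.2], [1.1, 5.1]])
print(associate(radar, camera))  # → [(0, 1), (1, 0)]
```

Unmatched detections (indices absent from the returned pairs) would then be handled by the single-sensor branches of the tri-Kalman-filter setup described in the abstract.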
Immediate access
EISSN
2475-1472
Version
Final accepted manuscript
DOI
10.1109/lsens.2022.3213529