Automatic Radar-Camera Dataset Generation for Sensor-Fusion Applications
Affiliation
Electrical and Computer Engineering, University of Arizona
Issue Date
2022-04
Keywords
Calibration and Identification
Cameras
Data Sets for Robotic Vision
Laser radar
Neural and Fuzzy Control
Object Detection
Optical sensors
Radar
Radar detection
Radar imaging
Segmentation and Categorization
Sensor Fusion
Sensors
Citation
Sengupta, A., Yoshizawa, A., & Cao, S. (2022). Automatic Radar-Camera Dataset Generation for Sensor-Fusion Applications. IEEE Robotics and Automation Letters.
Rights
© 2022 IEEE.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
With heterogeneous sensors offering complementary advantages in perception, sensor-fusion research and development for object perception and tracking, using both classical and deep-neural-network-based approaches, has grown significantly. However, supervised learning requires massive labeled datasets, which are expensive to generate through manual labor. This paper presents a novel approach that leverages highly accurate YOLOv3-based object detection from a camera to automatically label point-cloud data obtained from a co-calibrated radar sensor, generating labeled radar-image and radar-only datasets to aid learning algorithms for different applications. To achieve this, we first co-calibrate the vision and radar sensors and obtain a radar-to-camera transformation matrix. The collected radar returns are segregated by target using a density-based clustering scheme, and the cluster centroids are projected onto the camera image using the transformation matrix. The Hungarian algorithm then associates the radar cluster centroids with the YOLOv3-generated bounding-box centroids, and each cluster is labeled with the predicted class. The proposed approach is efficient and easy to implement, and aims to encourage rapid development of multi-sensor datasets, which are currently extremely limited compared to their optical counterparts. The calibration process, software pipeline, and dataset generation are described in detail. Furthermore, preliminary results from two sample applications for object detection using the datasets are also presented.
Note
Immediate access
EISSN
2377-3766, 2377-3774
Version
Final accepted manuscript
Sponsors
Sony Research Award Program
DOI
10.1109/lra.2022.3144524
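The labeling pipeline described in the abstract (density-based clustering of radar returns, projection of cluster centroids into the image plane, and Hungarian-algorithm association with detector bounding boxes) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, clustering parameters (`eps`, `min_samples`), the 3x4 projection matrix `P`, the gating threshold `max_px_dist`, and the detection format are all illustrative assumptions.

```python
# Hedged sketch of the auto-labeling pipeline: cluster radar points,
# project centroids with a radar-to-camera matrix, then match them to
# camera detections via the Hungarian algorithm. All parameter values
# below are illustrative assumptions, not the paper's actual settings.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.optimize import linear_sum_assignment

def label_radar_clusters(radar_xyz, P, detections,
                         eps=1.0, min_samples=3, max_px_dist=80.0):
    """radar_xyz: (N, 3) radar points; P: (3, 4) radar-to-image projection;
    detections: list of (cx, cy, class_name) bounding-box centers."""
    # 1. Segregate radar returns by target with density-based clustering
    #    (noise points get label -1 and are dropped).
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(radar_xyz)
    centroids = np.array([radar_xyz[labels == k].mean(axis=0)
                          for k in sorted(set(labels)) if k != -1])
    if centroids.size == 0 or not detections:
        return []
    # 2. Project cluster centroids into the image plane (homogeneous coords).
    homog = np.hstack([centroids, np.ones((len(centroids), 1))])
    uvw = homog @ P.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    # 3. Build a pixel-distance cost matrix and solve the assignment
    #    problem (Hungarian algorithm) against detection centers.
    box_centers = np.array([[cx, cy] for cx, cy, _ in detections])
    cost = np.linalg.norm(uv[:, None, :] - box_centers[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    # 4. Label each matched cluster with the detector's predicted class,
    #    gating out matches that are too far apart in pixel space.
    return [(r, detections[c][2]) for r, c in zip(rows, cols)
            if cost[r, c] < max_px_dist]
```

In practice the projection matrix would come from the co-calibration step, and the detections from YOLOv3; here both are supplied directly to keep the sketch self-contained.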
