Monocular Depth Estimation using Synthetic Data for an Augmented Reality Training System in Laparoscopic Surgery
Name:
SMC21_0521_Final.pdf
Size:
319.1 KB
Format:
PDF
Description:
Final Accepted Manuscript
Affiliation
University of Arizona, Department of Electrical and Computer Engineering
Issue Date
2021-10-17
Publisher
IEEE
Citation
Schreiber, A. M., Hong, M., & Rozenblit, J. W. (2021). Monocular Depth Estimation using Synthetic Data for an Augmented Reality Training System in Laparoscopic Surgery. Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics.
Rights
© 2021 IEEE.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
Depth estimation is an important challenge in the field of augmented reality. Supervised deep learning methods for depth estimation can be difficult to apply in novel settings due to the need for labeled training data. The work presented in this paper overcomes this challenge in a laparoscopic surgical simulation environment by generating synthetic RGB-D training data. We also provide a neural network architecture that produces real-time 448×448 depth map outputs suitable for use in AR applications. Our approach shows satisfactory performance when tested on a non-synthetic test dataset, with an RMSE of 2.50 cm, an MAE of 1.04 cm, and a δ < 1.25 accuracy of 0.987.
Note
Immediate access
ISSN
1062-922X
Version
Final accepted manuscript
Sponsors
National Science Foundation
DOI
10.1109/smc52423.2021.9658708
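The metrics reported in the abstract (RMSE, MAE, and the δ < 1.25 threshold accuracy) are standard measures for monocular depth estimation. A minimal sketch of how they might be computed with NumPy is shown below; the function name, array shapes, and sample values are illustrative, not taken from the paper.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Compute standard monocular depth metrics over valid pixels:
    RMSE, MAE, and delta < 1.25 (the fraction of pixels whose depth
    ratio max(pred/gt, gt/pred) falls below 1.25)."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    mask = gt > 0                      # ignore invalid (zero-depth) pixels
    pred, gt = pred[mask], gt[mask]
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    mae = np.mean(np.abs(pred - gt))
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)
    return rmse, mae, delta1

# Illustrative use on a synthetic 448x448 depth-map pair (values in cm)
gt = np.full((448, 448), 10.0)
pred = gt + np.random.default_rng(0).normal(0.0, 1.0, gt.shape)
rmse, mae, delta1 = depth_metrics(pred, gt)
```

Under this convention, lower RMSE and MAE indicate smaller depth error in the units of the depth maps (centimeters in the paper), while δ < 1.25 is an accuracy in [0, 1], so values near 1 (such as the reported 0.987) indicate that almost all pixels are within 25% of the ground-truth depth.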