Monocular Depth Estimation using Synthetic Data for an Augmented Reality Training System in Laparoscopic Surgery
Affiliation: University of Arizona, Department of Electrical and Computer Engineering
Citation: Schreiber, A. M., Hong, M., & Rozenblit, J. W. (2021). Monocular Depth Estimation using Synthetic Data for an Augmented Reality Training System in Laparoscopic Surgery. Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics.
Rights: © 2021 IEEE.
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at firstname.lastname@example.org.
Abstract: Depth estimation is an important challenge in the field of augmented reality. Supervised deep learning methods for depth estimation can be difficult to apply in novel settings because they require labeled training data. The work presented in this paper overcomes this challenge in a laparoscopic surgical simulation environment by generating synthetic RGB-D training data. We also present a neural network architecture that produces 448×448 depth maps in real time, suitable for use in AR applications. Our approach shows satisfactory performance on a non-synthetic test dataset, with an RMSE of 2.50 cm, an MAE of 1.04 cm, and a δ < 1.25 accuracy of 0.987.
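The metrics reported in the abstract are the standard monocular depth-estimation measures: root-mean-square error, mean absolute error, and the δ < 1.25 threshold accuracy (the fraction of pixels whose predicted-to-ground-truth depth ratio, taken in whichever direction exceeds 1, is below 1.25). A minimal sketch of how such metrics are typically computed is shown below; the function name and the masking of invalid (zero-depth) pixels are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Illustrative depth-evaluation metrics (not the paper's code).

    pred, gt: depth maps of equal shape, in the same units (e.g. cm);
    pixels with gt == 0 are treated as invalid and excluded.
    """
    valid = gt > 0                                # mask out missing ground truth
    p, g = pred[valid], gt[valid]
    rmse = np.sqrt(np.mean((p - g) ** 2))         # root-mean-square error
    mae = np.mean(np.abs(p - g))                  # mean absolute error
    ratio = np.maximum(p / g, g / p)              # symmetric per-pixel ratio
    delta1 = np.mean(ratio < 1.25)                # fraction within the 1.25 bound
    return rmse, mae, delta1
```

Under this convention, the reported δ < 1.25 of 0.987 means roughly 98.7% of evaluated pixels fall within a factor of 1.25 of the ground-truth depth.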
Version: Final accepted manuscript
Sponsors: National Science Foundation