Name: MegaStitch_Robust_Large_Scale_ ...
Size: 17.80 MB
Format: PDF
Description: Final Published Version
Affiliation
Department of Computer Science, University of Arizona
School of Plant Science, University of Arizona
Data Science Institute, University of Arizona
Issue Date
2022
Keywords
Bundle adjustment
Cameras
Drones
Feature extraction
Global Positioning System
Image stitching
Optimization
Citation
Zarei, A., Gonzalez, E., Merchant, N., Pauli, D., Lyons, E., & Barnard, K. (2022). MegaStitch: Robust Large Scale Image Stitching. IEEE Transactions on Geoscience and Remote Sensing.
Rights
Copyright © 2022 The Author(s). This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
We address fast image stitching for large image collections while being robust to drift due to chaining transformations and minimal overlap between images. We focus on scientific applications where ground truth accuracy is far more important than visual appearance or projection error, which can be misleading. In common large-scale image stitching use cases, transformations between images are often restricted to similarity or translation. When homography is used in these cases, the odds of being trapped in a poor local minimum and producing unnatural results increase. Thus, for transformations up to affine, we cast stitching as minimizing reprojection error globally using linear least squares with a few simple constraints. For homography, we observe that the global affine solution provides better initialization for bundle adjustment than an alternative that initializes with a homography-based scaffolding, and at lower computational cost. We evaluate our methods on a very large translation dataset with limited overlap, as well as four drone datasets. We show that our approach outperforms alternative methods such as MGRAPH in terms of computational cost, scaling to large numbers of images, and robustness to drift. We also contribute ground truth datasets for this endeavor.
Note
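The abstract's core idea for the affine case — minimizing reprojection error over all feature matches at once as a linear least-squares problem, with a simple constraint to pin down the solution — can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the function name `stitch_affine`, the match format, and the choice of anchoring image 0 to the identity transform are all assumptions made for the example.

```python
import numpy as np

def stitch_affine(n_images, matches):
    """Estimate one 2x3 affine transform per image into a common (image-0)
    frame by solving a single global linear least-squares problem over all
    feature matches, with image 0 anchored to the identity.

    matches: iterable of (i, j, pts_i, pts_j), where pts_i and pts_j are
    (k, 2) arrays of matched feature coordinates in images i and j.
    """
    # Unknowns: [a, b, tx, c, d, ty] per non-anchor image.
    n_unknowns = 6 * (n_images - 1)
    rows, rhs = [], []

    def block(idx, pt):
        # Coefficient rows for T_idx(pt): x' = a x + b y + tx, y' = c x + d y + ty
        x, y = pt
        rx = np.zeros(n_unknowns)
        ry = np.zeros(n_unknowns)
        base = 6 * (idx - 1)
        rx[base:base + 3] = (x, y, 1.0)
        ry[base + 3:base + 6] = (x, y, 1.0)
        return rx, ry

    for i, j, pts_i, pts_j in matches:
        for p, q in zip(np.asarray(pts_i, float), np.asarray(pts_j, float)):
            # Reprojection residual T_i(p) - T_j(q) = 0, written as lhs.x = b;
            # anchor contributions (identity transform) move to the right side.
            row_x = np.zeros(n_unknowns)
            row_y = np.zeros(n_unknowns)
            bx = by = 0.0
            if i == 0:
                bx -= p[0]; by -= p[1]
            else:
                rx, ry = block(i, p)
                row_x += rx; row_y += ry
            if j == 0:
                bx += q[0]; by += q[1]
            else:
                rx, ry = block(j, q)
                row_x -= rx; row_y -= ry
            rows.extend((row_x, row_y))
            rhs.extend((bx, by))

    x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)

    T = np.zeros((n_images, 2, 3))
    T[0] = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # the anchoring constraint
    for k in range(1, n_images):
        T[k] = x[6 * (k - 1):6 * k].reshape(2, 3)
    return T
```

Because every match contributes equations to one shared system, errors do not accumulate along a chain of pairwise transformations, which is the drift-robustness property the abstract emphasizes.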
Open access article
ISSN
0196-2892
EISSN
1558-0644
Version
Final published version
Sponsors
Department of Energy Biological and Environmental Research
DOI
10.1109/tgrs.2022.3141907