Supervised and Self-Supervised Learning for Accelerated Medical Imaging
Author
Fu, Zhiyang
Issue Date
2021
Keywords
Dedicated Breast CT
Deep Learning
MR Parameter Mapping
Radial MRI
Self-supervised Learning
Supervised Learning
Advisor
Bilgin, Ali
Altbach, Maria I.
Metadata
Publisher
The University of Arizona.
Rights
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract
Medical imaging technologies are life-changing owing to their non-invasive approach to the early detection of disease. X-ray Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are two routinely used imaging modalities. CT scans are typically completed in seconds but expose patients to ionizing X-ray radiation; MRI, which uses radiofrequency waves, is free of ionizing radiation yet is notorious for its long scan times. Data undersampling is a common and practical approach to reducing radiation dose in CT and scan time in MRI. While undersampling eases the burden of data acquisition, it places greater demands on image reconstruction as undersampling artifacts begin to manifest. This dissertation exploits deep learning to tackle image reconstruction from undersampled data in both CT and MRI applications, exploring learned reconstruction in supervised and self-supervised settings. We first propose a fully supervised reconstruction method for sparse-view dedicated breast CT, achieving a three-fold radiation dose reduction. Supervised learning typically requires a vast quantity of fully sampled data. To address this limitation, we propose a supervised reconstruction that uses accelerated references as training labels for multiple MR parameter mapping applications. The accelerated references are obtained from model-based compressed sensing (CS) methods, which are accurate but slow. By learning to map to these accelerated references, the proposed method reduces reconstruction times by several orders of magnitude compared to conventional CS methods, making the approach clinically practical. Building on these supervised methods, we propose a self-supervised learning framework that combines k-space view sharing techniques with image-space Noise2Noise denoising for highly accelerated radial MR parameter mapping. We demonstrate superior parameter accuracy compared to view-sharing techniques and improved perceptual image quality compared to CS methods at high acceleration rates. Importantly, the proposed self-supervised framework eliminates the need for fully sampled or accelerated references. Lastly, streaking artifacts caused by the nonlinearities of magnetic field gradients are not uncommon in radial abdominal imaging. We propose a novel streak cancellation technique that leverages the coil spatial redundancy of MR parallel imaging. These anomalous streaks are represented in a low-dimensional subspace and eliminated when the coil signal is projected onto the orthogonal complement of that subspace. The proposed technique is spatially invariant and can be incorporated into existing reconstruction pipelines based on iterative algorithms or deep learning.
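To illustrate the Noise2Noise idea underlying the self-supervised framework, the following is a minimal sketch of a Noise2Noise-style training step in Python (PyTorch). It assumes two independently view-shared, and therefore independently corrupted, reconstructions x1 and x2 of the same frame are available; the tiny convolutional network, optimizer settings, and tensor shapes are illustrative placeholders, not the dissertation's actual configuration.

    # Hedged Noise2Noise-style sketch: the network is trained to predict one
    # noisy realization from the other, so no clean reference is required.
    import torch
    import torch.nn as nn

    model = nn.Sequential(                      # toy denoising CNN (placeholder)
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(x1, x2):
        """One update: denoise the first realization, target the second."""
        optimizer.zero_grad()
        pred = model(x1)
        loss = nn.functional.mse_loss(pred, x2)  # noisy target, per Noise2Noise
        loss.backward()
        optimizer.step()
        return loss.item()

    # Example with stand-in tensors (batch, channel, height, width):
    x1 = torch.randn(4, 1, 64, 64)
    x2 = torch.randn(4, 1, 64, 64)
    print(train_step(x1, x2))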
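The streak cancellation step can likewise be sketched as a coil-domain subspace projection. The example below assumes a streak-dominated signal has already been isolated (for example, from artifact-heavy radial views) and estimates the streak subspace with an SVD; the function name, rank choice, and SVD-based estimate are assumptions for illustration rather than the exact implementation described in the dissertation.

    # Hedged sketch: project multi-coil data onto the orthogonal complement of a
    # low-dimensional streak subspace, P = I - V V^H.
    import numpy as np

    def remove_streaks(coil_data, streak_data, rank=2):
        """coil_data: (n_coils, n_samples) complex array to be cleaned.
        streak_data: (n_coils, n_streak_samples) complex array dominated by streaks.
        rank: assumed dimension of the streak subspace."""
        # Estimate a coil-domain basis for the streak subspace via SVD.
        u, _, _ = np.linalg.svd(streak_data, full_matrices=False)
        v = u[:, :rank]                                   # streak basis (n_coils, rank)
        proj = np.eye(coil_data.shape[0]) - v @ v.conj().T  # orthogonal-complement projector
        return proj @ coil_data                            # streak component suppressed

    # Example usage with synthetic data:
    rng = np.random.default_rng(0)
    coils = rng.standard_normal((8, 256)) + 1j * rng.standard_normal((8, 256))
    streaks = rng.standard_normal((8, 64)) + 1j * rng.standard_normal((8, 64))
    clean = remove_streaks(coils, streaks, rank=2)

Because the projector acts only on the coil dimension, the same operation can be applied to every spatial location, which is consistent with the spatially invariant behavior described in the abstract.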
Type
text
Electronic Dissertation
Degree Name
Ph.D.
Degree Level
doctoral
Degree Program
Graduate College
Electrical & Computer Engineering