• A multi-scale residual network for accelerated radial MR parameter mapping

      Fu, Zhiyang; Mandava, Sagar; Keerthivasan, Mahesh B; Li, Zhitao; Johnson, Kevin; Martin, Diego R; Altbach, Maria I; Bilgin, Ali; Univ Arizona, Dept Elect & Comp Engn; Univ Arizona, Dept Med Imaging; et al. (ELSEVIER SCIENCE INC, 2020-09-01)
A deep learning MR parameter mapping framework that combines accelerated radial data acquisition with a multi-scale residual network (MS-ResNet) for image reconstruction is proposed. The proposed supervised learning strategy uses input image patches from multi-contrast images with radial undersampling artifacts and target image patches from artifact-free multi-contrast images. Subspace filtering is used during pre-processing to denoise input patches. For each anatomy and relaxation parameter, an individual network is trained. In vivo T1 mapping results are obtained on brain and abdomen datasets, and in vivo T2 mapping results are obtained on brain and knee datasets. Quantitative results for T2 mapping of the knee show that MS-ResNet trained using either fully sampled or undersampled data outperforms conventional model-based compressed sensing methods. This is significant because obtaining fully sampled training data is not possible in many applications. In vivo brain and abdomen results for T1 mapping and in vivo brain results for T2 mapping demonstrate that MS-ResNet yields contrast-weighted images and parameter maps comparable to those achieved by model-based iterative methods while reducing reconstruction times by two orders of magnitude. The proposed approach enables recovery of high-quality contrast-weighted images and parameter maps from highly accelerated radial data acquisitions. The rapid image reconstructions enabled by the proposed approach make it a good candidate for routine clinical use.
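The subspace filtering step mentioned above can be illustrated with a generic PCA/SVD projection: multi-contrast pixel time-courses are projected onto their leading principal components, which suppresses noise lying outside the low-dimensional contrast subspace. This is a minimal NumPy sketch of the general idea, not the paper's actual pre-processing code; the synthetic data, rank choice, and function name are assumptions for illustration.

```python
import numpy as np

def subspace_filter(patches, rank):
    """Project multi-contrast patches onto a low-rank temporal subspace.
    patches: (n_pixels, n_contrasts) array; rank: components kept.
    Generic SVD-based subspace denoising sketch (illustrative only)."""
    U, S, Vt = np.linalg.svd(patches, full_matrices=False)
    V = Vt[:rank]                  # leading right-singular vectors span the subspace
    return patches @ V.T @ V       # projection back onto that subspace

rng = np.random.default_rng(0)
# Synthetic multi-contrast signals: two exponential decays (rank-2) plus noise
t = np.linspace(0.0, 1.0, 16)
basis = np.stack([np.exp(-t / 0.2), np.exp(-t / 0.8)])       # (2, 16)
coeffs = rng.standard_normal((256, 2))
clean = coeffs @ basis                                        # (256, 16)
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = subspace_filter(noisy, rank=2)
```

With noiseless rank-2 signals plus additive noise, the projection discards the noise components orthogonal to the estimated subspace, so `denoised` lies closer to `clean` than `noisy` does.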
    • Rapid high-resolution volumetric T1 mapping using a highly accelerated stack-of-stars Look Locker technique

      Li, Zhitao; Fu, Zhiyang; Keerthivasan, Mahesh; Bilgin, Ali; Johnson, Kevin; Galons, Jean-Philippe; Vedantham, Srinivasan; Martin, Diego R; Altbach, Maria I; Department of Electrical and Computer Engineering, the University of Arizona; et al. (Elsevier Inc., 2021-03-03)
      Purpose: To develop a fast volumetric T1 mapping technique. Materials and methods: A stack-of-stars (SOS) Look Locker technique based on the acquisition of undersampled radial data (>30× relative to Nyquist) and an efficient multi-slab excitation scheme is presented. A principal-component-based reconstruction is used to reconstruct T1 maps. Computer simulations were performed to determine the best choice of partitions per slab and degree of undersampling. The technique was validated in phantoms against reference T1 values measured with a 2D Cartesian inversion-recovery spin-echo technique. The SOS Look Locker technique was tested in brain (n = 4) and prostate (n = 5). Brain T1 mapping was carried out with and without kz acceleration, and results from the two approaches were compared. Prostate T1 mapping was compared to standard techniques. A reproducibility study was conducted in brain and prostate. Statistical analyses were performed using linear regression and Bland–Altman analysis. Results: Phantom T1 values showed excellent correlations between SOS Look Locker and the inversion-recovery spin-echo reference (r² = 0.9965; p < 0.0001) and between SOS Look Locker with slab-selective and non-slab-selective inversion pulses (r² = 0.9999; p < 0.0001). In vivo results showed that full brain T1 mapping (1 mm³) with kz acceleration is achieved in 4 min 21 s. Full prostate T1 mapping (0.9 × 0.9 × 4 mm³) is achieved in 2 min 43 s. T1 values for brain and prostate were in agreement with literature values. A reproducibility study showed coefficients of variation in the range of 0.18–0.2% (brain) and 0.15–0.18% (prostate). Conclusion: A rapid volumetric T1 mapping technique was developed. The technique enables high-resolution T1 mapping with adequate anatomical coverage in a clinically acceptable time.
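The Look Locker signal model that underlies this kind of T1 mapping can be sketched in a few lines: the apparent recovery S(t) = A − B·exp(−t/T1*) is fitted at each voxel, and the standard Look-Locker correction T1 = T1*·(B/A − 1) converts the apparent T1* to the true T1. The grid-search fit below is a minimal self-contained NumPy illustration of that model, not the paper's principal-component reconstruction; the function name, grid, and synthetic parameters are assumptions.

```python
import numpy as np

def fit_look_locker_t1(t, s, t1star_grid):
    """Fit S(t) = A - B*exp(-t/T1*) by grid search over T1*, solving
    for (A, B) linearly at each candidate, then apply the Look-Locker
    correction T1 = T1*(B/A - 1). Illustrative voxel-wise fit only."""
    best = (np.inf, None, None, None)
    for t1star in t1star_grid:
        # Basis [1, exp(-t/T1*)]; coefficients are [A, -B].
        X = np.column_stack([np.ones_like(t), np.exp(-t / t1star)])
        c, *_ = np.linalg.lstsq(X, s, rcond=None)
        resid = np.sum((X @ c - s) ** 2)
        if resid < best[0]:
            best = (resid, t1star, c[0], -c[1])
    _, t1star, A, B = best
    return t1star * (B / A - 1.0)   # Look-Locker correction

# Synthetic noiseless voxel: A = 1.0, B = 1.8, so T1 = 0.8 * T1*
t = np.linspace(50.0, 3000.0, 20)   # inversion times (ms)
t1_true = 1000.0                    # ms
t1star_true = t1_true / 0.8         # apparent T1 under continuous readout
s = 1.0 - 1.8 * np.exp(-t / t1star_true)
t1_est = fit_look_locker_t1(t, s, np.arange(100.0, 3001.0, 5.0))
```

On noiseless data the grid search recovers T1 to within the grid spacing; in practice the fit would be run per voxel on the reconstructed contrast-weighted image series.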