David Y Zeng^{1}, Jamil Shaikh^{2}, Dwight G Nishimura^{1}, Shreyas S Vasanawala^{2}, and Joseph Y Cheng^{2}

3D cones trajectories have the flexibility to be more scan-time efficient than 3D Cartesian trajectories, especially with long readouts. However, long readouts are subject to blurring from off-resonance, which limits this efficiency. We propose a convolutional residual network to correct off-resonance artifacts and thereby enable reduced scan time. Fifteen exams were acquired with both conservative readout durations and readouts 2.4x as long. Long-readout images were corrected with the proposed method. The corrected long-readout images had reader scores non-inferior (p<0.01) to those of the conservative-readout images for all features examined.

**Introduction**

**Methods**

Dataset Creation

Training data was acquired on a 3T GE scanner with a 32-channel cardiac coil and a ferumoxytol-enhanced, ultra-short-echo-time (0.03ms) scan using 3D cones with short readouts between 0.9–1.5ms.

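Training inputs were created by simulating off-resonance on the corrected reference images (see Figure 1). A minimal sketch of that simulation for a single, spatially uniform frequency offset is shown below; the `nufft`/`nufft_adjoint` operators, the variable names, and the uniform-frequency simplification are illustrative assumptions, not the actual reconstruction pipeline.

```python
# Sketch: generate a blurred training input from a sharp reference image by
# simulating off-resonance phase accrual along the cones readout.
# ASSUMPTIONS: `nufft`/`nufft_adjoint` stand in for any non-Cartesian
# (inverse) gridding pair; a single uniform frequency is used for simplicity.
import numpy as np

def simulate_off_resonance(ref_image, coords, t, df_hz, nufft, nufft_adjoint):
    """Apply a uniform off-resonance frequency df_hz to a reference volume.

    ref_image : complex 3D reference volume (e.g., autofocus-corrected)
    coords    : (num_samples, 3) k-space coordinates of the cones trajectory
    t         : (num_samples,) time in seconds of each sample along its readout
    df_hz     : off-resonance frequency in Hz to simulate
    """
    # Inverse gridding: evaluate the reference image on the cones trajectory.
    ksp = nufft(ref_image, coords)
    # Off-resonance accrues linear phase with readout time at every sample.
    ksp_blurred = ksp * np.exp(-2j * np.pi * df_hz * t)
    # Grid back to image space; the result is the blurred network input.
    return nufft_adjoint(ksp_blurred, coords, ref_image.shape)
```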
Deep Learning

We used a supervised 3D convolutional neural network (CNN) to correct the off-resonance artifacts. The input to the network is a 3D image with two channels corresponding to the real and imaginary components. The network architecture is three residual layers of 128 channels with 5x5x5 kernels^{6}.

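Below is a minimal sketch of this architecture in TensorFlow/Keras. The kernel size, channel width, number of residual layers, and L_{1} loss follow the description here and in Figure 2; the two-convolution composition of each residual block, the final projection back to two channels, and the Adam optimizer are illustrative assumptions.

```python
# Sketch of the residual off-resonance correction network (TensorFlow/Keras).
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=128, kernel=5):
    """One residual layer: 5x5x5 convolutions, each immediately followed by a
    ReLU, with an identity skip connection (He et al.). The two-convolution
    composition is an assumption."""
    y = layers.Conv3D(filters, kernel, padding="same", activation="relu")(x)
    y = layers.Conv3D(filters, kernel, padding="same", activation="relu")(y)
    return layers.Add()([x, y])

def build_model(input_shape=(None, None, None, 2)):
    """Input and output are complex volumes stored as two channels (real, imag)."""
    inp = layers.Input(shape=input_shape)
    # First layer maps the 2-channel input to the residual-layer width.
    x = layers.Conv3D(128, 5, padding="same", activation="relu")(inp)
    for _ in range(3):  # three residual layers of 128 channels
        x = residual_block(x)
    # Project back to the real/imaginary channels of the corrected image.
    out = layers.Conv3D(2, 5, padding="same")(x)
    return tf.keras.Model(inp, out)

model = build_model()
model.compile(optimizer="adam",
              loss=tf.keras.losses.MeanAbsoluteError())  # L1 loss
```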
Result Analysis

Two board-certified radiologists were independently presented in blinded fashion with four randomized, simultaneous images: uncorrected long readout, long readout with autofocus correction, long readout with deep learning correction, and uncorrected short readout. Image quality was evaluated for eight anatomic features, primarily for vessel definition, on a 5-point scale: 5-Excellent, 4-Good, 3-Moderate, 2-Poor, 1-Non-diagnostic. Significance of differences in scores (p<0.01) was determined by one-way ANOVA with a post-hoc Tukey's test.

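A minimal sketch of this reader-score analysis, assuming scipy and statsmodels as the statistical tools, is shown below; the scores are placeholder values, not the study data.

```python
# Sketch: one-way ANOVA across the four image types followed by a post-hoc
# Tukey HSD test for one anatomic feature. Scores below are placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical reader scores (1-5) for one feature, grouped by method.
scores = {
    "long":      [3, 2, 3, 3, 2],
    "autofocus": [4, 3, 4, 4, 3],
    "deep":      [4, 4, 5, 4, 4],
    "short":     [4, 4, 4, 5, 4],
}

# One-way ANOVA across the four methods.
f_stat, p_value = stats.f_oneway(*scores.values())

if p_value < 0.01:
    # Post-hoc Tukey HSD identifies which pairs of methods differ.
    values = np.concatenate(list(scores.values()))
    groups = np.repeat(list(scores.keys()), [len(v) for v in scores.values()])
    print(pairwise_tukeyhsd(values, groups, alpha=0.01))
else:
    print("No significant difference (NSD) for this feature.")
```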
**Results**

Sample images from each of the four methods are shown in Figure 3. The long-readout images have the most apparent off-resonance artifacts, and vessel definition is lost in Figures 3a and 3b. Autofocus correction recovers vessel definition in the pulmonary artery and hepatic veins, but the internal mammary arteries remain incoherent. Deep learning correction yields sharper pulmonary arteries and hepatic veins with longer coherent vessel segments. The internal mammary arteries are coherent, and the left internal mammary arteries are distinguishable.

Field maps for both the deep-learning-corrected and autofocus-corrected images were calculated by applying off-resonance to the original image and finding the closest match with the autofocus metric (Figure 4). The similarity of the field maps between the two methods gives confidence that the deep learning method is not hallucinating new structures into its output.

Statistical analysis results from the two readers are shown in Figure 5. For both readers, deep learning images were not inferior (p<0.01) to any other image type for any feature.

**Discussion**

These results demonstrate that the proposed deep learning method produces images non-inferior to short-readout images while requiring a 2.4x shorter scan. We demonstrated with a simple architecture that deep learning can effectively model and correct off-resonance blurring. The performance can be further improved by longer training, more advanced architectures, and more accurate ground truth.

The deep learning images were also non-inferior to the autofocus images, and superior in several cases, even though the neural network was trained on images corrected by autofocus. Although autofocus may not always resolve all off-resonance artifacts, the neural network may be learning only the appropriate corrections.

Additionally, autofocus is computationally intensive because each candidate frequency must be simulated and reconstructed. Even with a field map, correction would take too long to be clinically viable. In contrast, our method does not need a field map and a typical dataset requires under a minute to be corrected with the proposed network, fast enough to be viable for clinical workflow.

From a theoretical standpoint, the signal equation for off-resonance without relaxation models off-resonance as a non-stationary convolution in the image domain^{1,9}. Thus, the CNN can be interpreted as learning the appropriate non-stationary deconvolution kernel. An additional consequence of increasing the readout time is T_{2}^{*} decay, and it is likely that the CNN is also learning to remove the associated blur.
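Concretely, the signal equation referenced above can be written (neglecting relaxation) as

$$ s(t) = \int m(\mathbf{r})\, e^{-i 2\pi \Delta f(\mathbf{r}) t}\, e^{-i 2\pi \mathbf{k}(t) \cdot \mathbf{r}}\, d\mathbf{r}, $$

so each location accrues its own phase along the readout; after reconstruction, this appears as a spatially varying (non-stationary) blur whose kernel depends on the local off-resonance frequency.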

1. Chen W, Sica CT, Meyer CH. Fast conjugate phase image reconstruction based on a Chebyshev approximation to correct for B0 field inhomogeneity and concomitant gradients. Magn Reson Med. 2008;60(5):1104-1111.

2. Gurney PT, Hargreaves BA, Nishimura DG. Design and analysis of a practical 3D cones trajectory. Magn Reson Med. 2006;55(3):575-582.

3. Carl M, Bydder GM, Du J. UTE imaging with simultaneous water and fat signal suppression using a time-efficient multispoke inversion recovery pulse sequence. Magn Reson Med. 2016;76(2):577-582.

4. Uecker M, et al. ESPIRiT—an eigenvalue approach to autocalibrating parallel MRI: Where SENSE meets GRAPPA. Magn Reson Med. 2014;71(3):990-1001.

5. Noll DC, Pauly JM, Meyer CH, Nishimura DG, Macovski A. Deblurring for non-2D Fourier transform magnetic resonance imaging. Magn Reson Med. 1992;25(2):319-333.

6. He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. Proc IEEE Conf Comp Vision Pattern Recognition. 2016.

7. Abadi M, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. 2015.

8. McDonald J. Handbook of Biological Statistics. Baltimore: Sparky House Publishing; 2009.

9. Ahunbay E, Pipe JG. Rapid method for deblurring spiral MR images. Magn Reson Med. 2000;44(3):491-494.

Figure 1: The left column shows the k-space radius versus readout time of the 3D cones trajectory for various readout lengths. All images are on the same scale. The right three columns show spatially-localized point spread functions (PSFs) of the 3D cones trajectory as a function of readout length and off-resonance. Training data was created by inverse gridding and simulating off-resonance on autofocus-corrected reference images. From this grid, we can also see that different off-resonances and trajectories have very different PSFs, leading to non-stationary blurring.

Figure 2: The proposed convolutional neural network. The input is a 3D image volume with its real and imaginary components as channels. All 3D convolution kernels are 5x5x5 and immediately followed by rectified linear units. The first layer convolves the input to the necessary residual layer size, and three residual layers are used^{6}. The target image is an autofocus-corrected image. The network was trained with TensorFlow^{7} with an L_{1} loss.

Figure 3: Sample images from the four categories compared in the reading. The off-resonance blurring is most visible in the loss of sharpness in the vessels (red arrows). The (a) internal mammary arteries, (b) subsegmental right pulmonary arteries, and (c) hepatic and portal veins are shown.

Figure 4: Field maps of the (a) deep-learning-corrected and (b) autofocus-corrected images were generated by applying off-resonance to (d) the original image and finding the closest match with the autofocus metric. (c) The difference map shows that the two estimates are similar and primarily differ in the estimates of fat off-resonance. The smooth and similar field maps in (a) and (b) give confidence that the deep learning approach is not hallucinating new structures into the image. These field maps are also physically plausible, as fat is seen to be at approximately 440 Hz.

Figure 5: Statistical results from the two readers are shown. Readers evaluated image quality for eight anatomic features on a 5-point scale (PA: pulmonary artery; RLL: right lower lobe). The mean scores for each feature are plotted. Above each bar are abbreviations denoting which images each method is superior to (p<0.01) (e.g., L: this method is superior to uncorrected long-readout images). Features marked NSD (no significant difference, p>0.01) did not pass one-way ANOVA significance. The proposed deep learning method is non-inferior to uncorrected short-readout images in all features.