A three-dimensional dual-domain deep network for high-pitch and sparse helical CT reconstruction
Abstract
In this paper, we propose a new GPU implementation of the Katsevich algorithm for helical CT reconstruction. Our implementation divides the sinograms and reconstructs the CT images pitch by pitch. By utilizing the periodic properties of the parameters of the Katsevich algorithm, our method only needs to calculate these parameters once for all pitches, which lowers the GPU-memory burden and makes the implementation well suited to deep learning. By embedding our implementation into a network, we propose an end-to-end deep network for high-pitch helical CT reconstruction with sparse detectors. Since our network utilizes features extracted from both the sinograms and the CT images, it can simultaneously reduce the streak artifacts caused by the sparsity of the sinograms and preserve fine details in the CT images. Experiments show that our network outperforms the related methods in both subjective and objective evaluations.
Index Terms:
Helical CT, sparse CT, deep learning, Katsevich algorithm, high pitch
I Introduction
Computed tomography (CT) has become an important tool in medical diagnosis, and the helical scan with a multi-row detector is the most widely used scanning modality in hospitals. During scanning, the X-ray source and detector rotate around the central axis (i.e., the z-direction) while the table, with the patient lying stationary on it, is translated along the z-direction at a constant velocity. Therefore, the trajectory of the X-ray source relative to the patient forms a helical curve, whose pitch is determined by the moving velocity of the table.
The patient needs to stay stationary during the scanning process; otherwise, severe artifacts will appear in the reconstructed CT images. Thus, high-pitch scanning, which completes the data acquisition in a short time, has many merits and is very desirable in clinical diagnosis. For example, with high-pitch helical CT, the whole lung can be scanned within a very short interval, so the patient does not need to hold their breath. This is very useful for patients who cannot hold their breath for long, such as infants, elderly patients and unconscious patients. For cardiac CT, scanning the whole heart within one heartbeat is a reasonable request.
In [1], Noo et al. used the Tam-Danielsson window [2][3] and Katsevich's inversion formula [4] to derive the number of detector rows required to perfectly reconstruct helical CT images with a curved or flat detector. With the other parameters fixed, the higher the pitch, the more detector rows are needed. However, as the number of detector rows increases, the price of the CT scanner also rises significantly. One way to reduce the price is to use a sparse detector [5] with a fixed number of detector rows. However, CT images reconstructed from sparse sinograms usually suffer streak artifacts and lose fine details, especially when the scanning pitch is high.
In the literature, many algorithms were proposed to reconstruct the helical CT images. These algorithms can be mainly categorized into four classes: (a) rebinning algorithms, (b) FDK-like algorithms, (c) model-based iterative methods, (d) exact algorithms.
The rebinning algorithms [6][7] first convert the measured helical sinograms to 2D fan-beam or parallel-beam sinograms and then reconstruct the CT images using any 2D reconstruction method. To reduce the interpolation error and accurately estimate the 2D sinograms, the sampling rate of the helical scan usually needs to be high.
The FDK-like algorithms [8][9][10] are extensions of the circular cone-beam reconstruction algorithm proposed by Feldkamp, Davis and Kress [11]. They usually involve three steps: 1) filtering the sinograms, 2) weighting the filtered data, and 3) backprojecting the weighted data. By utilizing the fast Fourier transform (FFT), the FDK-like algorithms can be implemented efficiently. However, since they were derived heuristically and lack theoretical support, artifacts in the reconstructions are usually hard to avoid, especially when the cone angle is large.
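As a concrete illustration of the filtering step that makes FDK-like methods fast, the following sketch applies a ramp (Ram-Lak) filter to each detector row via the FFT. It is a generic illustration of FFT-based filtering, not the implementation used in [8][9][10]; the function name and the spacing parameter are ours.

```python
import numpy as np

def ramp_filter_rows(sinogram, detector_spacing=1.0):
    """Filter each detector row with a ramp (Ram-Lak) filter via FFT.

    This is only the filtering step of an FDK-like pipeline; the
    weighting and backprojection steps are separate. `sinogram` has
    shape (num_views, num_detectors).
    """
    num_views, num_det = sinogram.shape
    # Zero-pad to the next power of two to reduce circular-convolution error.
    n = int(2 ** np.ceil(np.log2(2 * num_det)))
    freqs = np.fft.fftfreq(n, d=detector_spacing)
    ramp = np.abs(freqs)  # |omega| frequency response of the ramp filter
    padded = np.zeros((num_views, n))
    padded[:, :num_det] = sinogram
    filtered = np.real(np.fft.ifft(np.fft.fft(padded, axis=1) * ramp, axis=1))
    return filtered[:, :num_det]
```

Because the filter is applied row-wise in the frequency domain, the cost per view is O(n log n) rather than the O(n^2) of direct convolution.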
Model-based methods usually first formulate minimization functionals by combining the statistical properties of the sinogram with different prior hypotheses, and then convert the functionals into iterative reconstruction algorithms using optimization methods. In the literature, total variation (TV) [12][13], nonlocal means (NLM) [14] and dictionary learning [15][16] are the commonly used priors. In [7], Stierstorfer et al. proposed an iterative weighted filtered backprojection (WFBP) method to reconstruct helical CT images. In [17], Sunnegårdh proposed an improved regularized iterative weighted filtered backprojection (RIWFBP) method for helical cone-beam CT. In [18], Yu and Zeng developed a fast TV-based iterative reconstruction algorithm for limited-angle helical cone-beam CT reconstruction. Model-based iterative methods are well suited to ill-posed problems such as limited-angle or sparse-angle CT reconstruction, but their computational burden is too high to reconstruct images in real time, especially for 3D helical CT.
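To make the model-based pipeline concrete, here is a minimal gradient-descent sketch for a TV-regularized least-squares problem. The smoothed-TV gradient, the fixed step size and the generic operator pair `(A, At)` are simplifying assumptions for illustration; practical CT solvers use far more sophisticated schemes such as those in [12][13].

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed isotropic TV of a 2D image x."""
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    # Negative divergence (adjoint of the forward differences).
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

def tv_least_squares(A, At, b, shape, lam=0.1, step=0.1, iters=50):
    """Gradient descent on 0.5*||A x - b||^2 + lam * TV_smooth(x),
    where A and At are a generic linear operator and its adjoint."""
    x = np.zeros(shape)
    for _ in range(iters):
        x = x - step * (At(A(x) - b) + lam * tv_grad(x))
    return x
```

With `A` set to a CT projection operator, each iteration alternates a data-fidelity correction with a TV smoothing step, which is the basic mechanism behind the iterative methods cited above.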
Exact helical CT reconstructions can be obtained by using Katsevich's algorithms [19][4] or Zou and Pan's PI-line algorithms [20][21]. These algorithms have mathematical inversion formulas for helical CT reconstruction and so are theoretically exact. In [22][23], by extending Katsevich's algorithms and the PI-line algorithms, exact methods for cone-beam CT reconstruction with general scanning curves were proposed. In [24], Yan et al. accelerated the implementation of the Katsevich algorithm [4] by utilizing the graphics processing unit (GPU). In their implementation, the Tam-Danielsson window, rather than the PI-line, is used to determine whether the projection at a scanning angle needs to be backprojected to reconstruct a slice of the CT image. Their method does not exploit the periodic properties of the PI-lines and so needs to calculate the backprojection positions on the detector for the pixels of all the CT images. Exact helical algorithms can reconstruct good-quality CT images even if the helical pitch is high, but when the sinograms are sparse, streak artifacts in the CT images they reconstruct cannot be avoided.
Recently, convolutional neural network (CNN) based methods have also been used for CT reconstruction. These methods can be roughly divided into three categories. The first category uses CNNs only to pre-process the sinograms [25][26] or post-process the CT images [27][28][29]. This type of method needs an extra step to convert the sinograms to CT images. The second category utilizes end-to-end CNNs to reconstruct CT images directly from measured sinograms [30][31][32][5][33]. Since the reconstruction algorithm is modelled as network layers embedded in the CNNs, no extra step is needed when testing or deploying these trained CNNs. However, embedding a reconstruction algorithm in CNNs is GPU-memory expensive and may cause GPU-memory exhaustion errors when training this type of CNN, especially for 3D CT. The third category uses CNNs to approximate the solutions of unrolled iterative algorithms [34][35], where each iteration of the algorithm is approximated by a subnetwork. This type of algorithm consumes more GPU memory than those in the second category, since the projection and backprojection operators need to be performed once per iteration in the network.
In this paper, we focus on embedding the Katsevich algorithm [4] into CNNs to propose an end-to-end deep network that reconstructs helical CT images with high pitch and sparsely spaced detectors. Due to the limited GPU memory, to the best of our knowledge, no exact algorithm (Katsevich's algorithms or the PI-line algorithms) has been embedded in CNNs to reconstruct helical CT images in the literature. In [5], an end-to-end deep network was proposed for helical CT, but the sinograms measured from helical scans were converted to 2D fan-beam sinograms and the embedded reconstruction algorithm was also for the fan-beam geometry. As the helical pitch increases, the image volume involved in generating the rebinned fan-beam sinogram and the error of the estimated fan-beam sinogram both increase. Therefore, for helical CT with high pitch, a CNN using the rebinning methodology may not work well. The Katsevich algorithm [4] can reconstruct good CT images even if the helical pitch is high. Our network uses the Katsevich algorithm as the domain transfer layer and so can also reconstruct good CT images directly from high-pitch helical sinograms. Moreover, because our network utilizes features extracted from both the sinograms and the CT images, it can simultaneously reduce the streak artifacts caused by the sparsity of the sinograms and reconstruct fine details in the CT images.
The sinogram measured from a helical scan usually has a large size in the z-direction. If the whole sinogram were directly input into a CNN to reconstruct all the CT images, a huge amount of GPU memory would be needed. Besides, most of the parameters of the Katsevich algorithm [4] depend on the positions of the CT images, and computing them is also time and GPU-memory consuming. Observing that the parameters of the Katsevich algorithm [4] are periodic, we divide the sinograms into parts and reconstruct the CT images pitch by pitch. In this way, the parameters of the Katsevich algorithm for all pitches can be pre-calculated once and the GPU-memory load can be reduced.
The rest of the paper is organized as follows. Section II gives our new GPU implementation of the Katsevich algorithm [4]. Section III describes the detailed structure of our proposed network. Section IV presents the experimental results. Discussion and conclusion are given in Section V.
II GPU implementation of Katsevich Algorithm
In this section, we give a new GPU implementation of the Katsevich algorithm [4], which reconstructs the helical CT images pitch by pitch and has low computational and GPU-memory burdens. Our implementation of the Katsevich algorithm [4] is very suitable for deep learning since its parameters are the same for all pitches and so only need to be calculated once.
Katsevich [4] proposed an exact reconstruction formula for helical cone-beam CT and Noo et al. [1] researched how to implement it efficiently and accurately. Let
$$y(s)=\Big(R\cos s,\; R\sin s,\; \frac{P}{2\pi}\,s\Big),\qquad s\in\mathbb{R}, \tag{1}$$
be the helical trajectory formed by the X-ray source position, with radius $R$ and pitch $P$, where
$$y(0)=(R,0,0) \tag{2}$$
is the start point of the helix,
$$\Omega=\{(x_1,x_2,x_3):x_1^2+x_2^2<r^2\} \tag{3}$$
is a cylinder inside the helix, and $r<R$.
Then, for any $x\in\Omega$, the reconstruction formula can be written as
$$f(x)=-\frac{1}{2\pi^{2}}\int_{s_{b}(x)}^{s_{t}(x)}\frac{1}{\|x-y(s)\|}\int_{0}^{2\pi}\left.\frac{\partial}{\partial q}\,g\big(y(q),\,\cos\gamma\,\beta(s,x)+\sin\gamma\,e(s,x)\big)\right|_{q=s}\frac{d\gamma}{\sin\gamma}\,ds, \tag{4}$$
where $s_b(x)$ and $s_t(x)$ are the extremities of the PI-line passing through $x$ with $s_t(x)-s_b(x)<2\pi$,
(5)
(6)
(7)
$h_H$ is the Hilbert filter, $g(y(s),\theta)$ is the measured projection data at X-ray source position $y(s)$ in the direction $\theta$, the vector $e(s,x)$ is normal to the $\kappa$-plane of the smallest $|\psi|$ value that contains the line of direction $\beta(s,x)$ through $x$, and a $\kappa$-plane is any 2D plane that has three intersections $y(s)$, $y(s+\psi)$ and $y(s+2\psi)$ with the helix, where $|\psi|<\pi$. In [1], to numerically calculate (4), the integral over the $\kappa$-plane with the minimum $|\psi|$ value is converted to the line integral along the curve where the $\kappa$-plane intersects the detector plane.
Let $g(s,\gamma,w)$ be the sinogram measured from the helical scan by a curved detector, where $\gamma$ is the fan angle and $w$ is the detector-row coordinate. The implementation of (4) in [1] is then described as follows:
1. Taking the derivative at constant direction via the chain rule:
(8)
2. Length-correction weighting:
(9)
3. Forward height rebinning: computing
(10)
by interpolation for all $\psi$, where $\gamma_m$ is the half fan angle,
(11)
4. 1D Hilbert filtering along the fan-angle direction:
(12)
5. Backward height rebinning: computing
(13)
where $\psi$ is the angle of the smallest absolute value that satisfies the equation
(14)
6. Post-cosine weighting:
(15)
7. Backprojection:
(16)
where
(17)
(18)
(19)
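For illustration, the Hilbert filtering in the pipeline above can be realized in the Fourier domain. The sketch below applies the ideal frequency response -i*sign(omega) along the last axis; a practical implementation of step 4 would instead convolve with a band-limited Hilbert kernel, so this is only a stand-in.

```python
import numpy as np

def hilbert_filter(rows):
    """Apply a discrete Hilbert transform along the last axis via FFT.

    A minimal stand-in for the 1D Hilbert filtering step; real
    implementations convolve with a band-limited Hilbert kernel instead.
    """
    n = rows.shape[-1]
    freqs = np.fft.fftfreq(n)
    # Ideal frequency response of the Hilbert transform: -i * sign(omega).
    h = -1j * np.sign(freqs)
    return np.real(np.fft.ifft(np.fft.fft(rows, axis=-1) * h, axis=-1))
```

With this sign convention, the Hilbert transform maps a sampled cosine onto the sine of the same frequency, which is a convenient sanity check for any implementation of this step.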
In the above steps, a number of parameters need to be calculated. Some of them can be pre-calculated only once, but the slice-dependent ones need to be re-calculated for every reconstruction position, which is time consuming. Luckily, these parameters have the following periodic properties:
$$s_b(x_1,x_2,x_3+nP)=s_b(x_1,x_2,x_3)+2\pi n,\qquad s_t(x_1,x_2,x_3+nP)=s_t(x_1,x_2,x_3)+2\pi n, \tag{20}$$
and analogously for the other slice-dependent parameters, where $n$ is an integer and $P$ is the pitch. Thus, if we reconstruct the CT images in the z-direction pitch by pitch, the slice-dependent parameters can also be pre-calculated only once, which greatly reduces the computation and GPU-memory burdens.
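A toy calculation makes the periodicity concrete: for a helix of the form (1), the scanning angle at which the source reaches a given height shifts by exactly 2*pi per pitch, so a table computed for the relative slice positions of one pitch can be reused for every pitch. The function names and the pitch value below are illustrative, not the paper's.

```python
import numpy as np

PITCH = 23.0  # table feed per turn; an illustrative value, not the paper's

def source_angle_at_height(z, pitch=PITCH):
    """Scanning angle at which the helical source reaches height z,
    obtained by inverting z = pitch * s / (2*pi). The slice-dependent
    parameters of the algorithm obey the same 2*pi-per-pitch shift rule,
    which is what makes per-pitch reuse possible."""
    return 2.0 * np.pi * z / pitch

def angles_for_pitch(z_rel, k, pitch=PITCH):
    """Reuse a table computed for relative slice positions z_rel in
    [0, pitch): pitch k just shifts every angle by one full turn per pitch."""
    return source_angle_at_height(z_rel, pitch) + 2.0 * np.pi * k
```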
In the following, we describe the detailed GPU implementation scheme of the reconstruction formula (4), which reconstructs the CT images pitch by pitch. In our scheme, to reduce the GPU-memory consumption, the backprojection step is performed slice by slice in the z-direction, while the other steps are performed on the whole sinogram needed to reconstruct the CT images of one pitch.
Let be the -th sliced CT image of the -th pitch in the z-direction, where , , is the total number of CT images to be reconstructed in one pitch, is the number of pitches into which the scanning helix is divided, , is a point of the mesh grid formed by the x- and y-coordinates, and is the relative position on the z-coordinate within one pitch. Then our scheme can be divided into a parameter pre-calculating step and a reconstruction step.
The pre-calculating step is described as follows:
- 1.
-
2.
Define , For , pre-calculate the extremities and of the PI-line passing through using the algorithm described in [36]. Note that an extra projection step that constrains onto the interval needs to be performed at each iteration of the algorithm, which was not explicitly stated in [36]. Also note that here and are two matrices corresponding to the mesh grid points of . After obtaining and , we can calculate the minimal and maximal values of needed to reconstruct :
(21) -
3.
For , pre-calculate , and :
(22) (23) (24)
where and . Here , and are three 3D tensors whose first dimension corresponds to .
Note that in the above step 3), since
(25)
for any positive integer , , and can be pre-calculated only once.
After the pre-calculating step, we can compute the minimal and maximal values of , and needed to reconstruct the CT images of one pitch. These values can be calculated by
(26)
The values , , and can be used to determine the numbers of rows and columns of the detector, while and determine the sinogram required to reconstruct the CT images of one pitch. To reconstruct the -th pitch of CT images, the sinogram corresponding to is needed.
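The per-pitch slicing of the sinogram can be sketched as follows, assuming the required angular window (computed once, as in (26)) simply shifts by one turn, 2*pi, per pitch; the function name and the epsilon guards against floating-point rounding are ours.

```python
import numpy as np

def sinogram_range_for_pitch(k, s_min, s_max, d_angle):
    """Projection-index range needed for the k-th pitch, assuming the
    angular window [s_min, s_max] of pitch 0 shifts by 2*pi per pitch.
    The small epsilons guard against floating-point rounding at exact
    multiples of d_angle."""
    lo = int(np.floor((s_min + 2.0 * np.pi * k) / d_angle + 1e-9))
    hi = int(np.ceil((s_max + 2.0 * np.pi * k) / d_angle - 1e-9))
    return lo, hi
```

Because the window length is the same for every pitch, all pitches read sinogram blocks of identical size, which is what allows them to be batched through the same pre-calculated tables.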
Let be the -sliced sinogram from , where for . Then the reconstruction step can be described as follows:
- 1.
-
2.
For , backproject slice by slice:
(27) where
(28)
To reconstruct the helical CT images pitch by pitch, we need to perform the pre-calculating step once, store its outputs in memory, and then perform the reconstruction step once for each pitch. In this paper, the pre-calculating step is implemented on the central processing unit (CPU) while the reconstruction step runs on the GPU.
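The overall two-stage scheme can be sketched as follows, with placeholder callables standing in for the CPU pre-calculating stage and the GPU per-pitch reconstruction stage; these are not the paper's actual routines, and the point is only that the parameter table is built exactly once and reused for every pitch.

```python
def reconstruct_all_pitches(sinogram_slices, precompute, reconstruct_pitch):
    """Two-stage scheme: parameter tables are built exactly once and then
    reused by the per-pitch reconstruction. `precompute` stands in for
    the CPU stage and `reconstruct_pitch` for the GPU stage."""
    params = precompute()  # pre-calculating step: once for all pitches
    volumes = []
    for k, sino_k in enumerate(sinogram_slices):
        # reconstruction step: once per pitch, reusing the same tables
        volumes.append(reconstruct_pitch(sino_k, params, k))
    return volumes
```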
III Proposed Network
In this section, we propose a 3D deep learning network to reconstruct CT images directly from sparse helical sinograms with high pitch, where our GPU implementation of the Katsevich algorithm [4] is used as the domain transfer layer in the network.
III-A Architecture of Proposed Network
Our network (shown in Fig. 1) is composed of three parts: the sinogram domain subnetwork, the domain transfer layer and the CT image domain subnetwork. The sinogram domain subnetwork denoises and upsamples the sparse sinograms, the domain transfer layer reconstructs CT images from the sinograms output by the sinogram domain subnetwork, and the CT image domain subnetwork denoises the CT images reconstructed by the domain transfer layer.
The architecture of the sinogram domain subnetwork is based on ResNet [37]. In detail, the sinogram domain subnetwork is composed of 7 cascaded 'Conv3D+PReLU' blocks followed by a 'Conv3D' block, where 'Conv3D' is a 3D convolutional layer and 'PReLU' is the parametric rectified linear unit activation function. The numbers of feature channels of the 3D convolutional filters are 16 in the 'Conv3D+PReLU' blocks and 1 in the 'Conv3D' block, and the filter widths are all 3. The output of each 'Conv3D+PReLU' block is skip-connected to the output of the last 'Conv3D+PReLU' block, and the input of the sinogram domain subnetwork is skip-connected to the output of the 'Conv3D' block. The input to the sinogram domain subnetwork is the sliced 3D sinogram () that can exactly reconstruct one pitch of helical CT images. The output of the sinogram domain subnetwork is the denoised sinogram, which has the same size as the input sinogram.
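A sketch of this subnetwork in Keras is given below. The skip pattern is one plausible reading of the description (every block output added to the last block's output, plus a global residual connection from the input), so details may differ from the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers

def sinogram_subnetwork(depth=7, channels=16):
    """Residual Conv3D+PReLU stack; the skip pattern is an assumption."""
    inp = layers.Input(shape=(None, None, None, 1))
    x = inp
    block_outputs = []
    for _ in range(depth):
        x = layers.Conv3D(channels, 3, padding="same")(x)
        x = layers.PReLU(shared_axes=[1, 2, 3])(x)
        block_outputs.append(x)
    # Skip-connect every block output into the last block's output.
    x = layers.Add()(block_outputs)
    x = layers.Conv3D(1, 3, padding="same")(x)
    out = layers.Add()([inp, x])  # global residual connection from the input
    return tf.keras.Model(inp, out)
```

Keeping all spatial dimensions as `None` lets the same model run on sinogram blocks of any size, which matches the per-pitch slicing described in Section II.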
The domain transfer layer implements the Katsevich algorithm [4] as described in Section II. The input to the domain transfer layer is the denoised sinograms output by the sinogram domain subnetwork and the output is the reconstructed helical CT images of one pitch by the Katsevich algorithm [4].
The CT image domain subnetwork has the same architecture as the sinogram domain subnetwork, but its inputs and outputs are different. The input to the CT image domain subnetwork is the CT images of one pitch output by the domain transfer layer, and the output is the denoised CT images, which have the same size as the input.
III-B Loss Function
Let be the outputs of our network and be the corresponding labels, where and are, respectively, the denoised sinogram and label sinogram, and and are, respectively, the denoised CT image and label CT image. Then the loss function used to train our network is
(29)
where is the total number of training samples and is the set of parameters of our network to be learned.
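Since (29) is not reproduced here, the following sketch assumes a plausible form of a dual-domain loss: a weighted sum of mean squared errors in the sinogram and image domains; the weight `alpha` (and the choice of MSE) is our assumption.

```python
import numpy as np

def dual_domain_loss(sino_pred, sino_label, ct_pred, ct_label, alpha=1.0):
    """Assumed dual-domain loss: MSE in the sinogram domain plus a
    weighted MSE in the CT image domain."""
    sino_term = np.mean((sino_pred - sino_label) ** 2)
    ct_term = np.mean((ct_pred - ct_label) ** 2)
    return sino_term + alpha * ct_term
```

Penalizing both domains at once is what lets the sinogram subnetwork and the image subnetwork be trained jointly through the domain transfer layer.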
Our network is trained by optimizing the loss function (29) using the backpropagation (BP) algorithm [38]. In this study, the loss function is optimized by the Adam algorithm [39], where the learning rate is set as .
The training code of our network is implemented using TensorFlow 2.5 on a Dell PowerEdge T640 server with the Ubuntu 20.04 operating system, 377GB of random access memory (RAM), an Intel(R) Xeon(R) Silver 4210R CPU with a 2.4GHz frequency and an NVIDIA RTX A6000 GPU card with 45631MB of memory. The initial values of the training parameters of our network are set randomly by TensorFlow using the default setup. The gradients of the loss function with respect to the training parameters are calculated by TensorFlow automatically using automatic differentiation. The runtime of our network to complete one batch of learning (a forward step and a backward step) with is about seconds.
III-C Data Preparation
III-C1 CT image labels
The CT images from “the 2016 NIH-AAPM-Mayo Clinic Low Dose CT Grand Challenge” [40], which was established and authorized by the Mayo Clinic, are used as the ground truth CT images . The AAPM challenge has also released the 3D helical sinograms corresponding to the ground truth CT images. However, the pitches of the helical scans used to obtain those sinograms are low and not constant, so we need to generate simulated sinograms with a constant high pitch by performing helical projection.
In this paper, we perform our methods on two types of helical sinograms, which correspond to and , respectively.
III-C2 Parameters of the scanning geometry
If not specified otherwise, all lengths in this paper are in millimetres (mm). The radius of the scanning helix is and the distance from the X-ray source to the detector is , where the detector is a curved panel. The x-coordinate and y-coordinate of the ground truth CT images are, respectively,
(30)
For simplicity, here we assume the size of one pixel of the CT images is . The half fan angle is
(31)
and we set
(32)
for the -coordinate of the detector.
We constrain the number of detector rows to be 16, and so the positions of the detector units distributed along the w-coordinate are different for different . For , the maximum in equation (26) is
(33)
and so . Therefore, we set
(34)
for on the -coordinate. Similarly, for , we have
(35)
and so and set
(36)
In summary, the scanning parameters of the helical CT are listed in Table I.
III-C3 Data generation
We use the above described geometry to simulate scanning the CT image labels to generate the sinogram labels , where the slices of the ground truth CT images are equally spaced on the z-coordinate. For , the z-coordinate of the ground truth CT images on one pitch is
(37)
and for is
(38)
Therefore, in one pitch, we need to reconstruct slices of CT images for and slices for .
The sinogram is obtained by scanning a whole patient, so we need to divide it into pitches to obtain , where
(39)
, and are defined in equation (26), and is the start angle of the scanning helix. For , , , and for , , . To avoid the zero values, the data of the first and last pitches are discarded.
Correspondingly, the ground truth CT images also need to be divided into pitches to obtain , where
(40)
, for and for .
After obtaining the sinogram labels , we downsample them on the -coordinate at
(41)
to obtain the sparse sinogram . We then upsample by interpolation to make it the same size as and add 'Gaussian+Poisson' noise [41] to the upsampled sinogram to obtain the input sinograms of our network, where
(42)
is the maximal value of , is the photon count, and and are the mean and variance of the Gaussian noise, respectively.
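The 'Gaussian+Poisson' degradation can be simulated as below, assuming the common model of Poisson noise on the transmitted photon counts (via Beer-Lambert) plus additive electronic Gaussian noise; the function name and parameter values are illustrative, not the paper's.

```python
import numpy as np

def add_ct_noise(sinogram, photon_count=1e5, gauss_sigma=0.05, seed=0):
    """Simulate 'Gaussian+Poisson' measurement noise on a line-integral
    sinogram: Poisson noise on the transmitted counts plus additive
    electronic Gaussian noise, then back to line integrals."""
    rng = np.random.default_rng(seed)
    counts = photon_count * np.exp(-sinogram)          # Beer-Lambert law
    noisy = rng.poisson(counts) + rng.normal(0.0, gauss_sigma, sinogram.shape)
    noisy = np.clip(noisy, 1.0, None)                  # avoid log of <= 0
    return -np.log(noisy / photon_count)
```

Note that the noise level grows where the line integrals are large (few transmitted photons), which is why sparse, high-attenuation views are the hardest to denoise.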
The CT image reconstructed from is used as the input of the compared post-processing methods.
| Parameter | Value |
| --- | --- |
| Radius of helix | 595 |
| Distance of source to detector | 1085.6 |
| -coordinate of CT image | |
| -coordinate of CT image | |
| -coordinate of CT image in one pitch | |
| -coordinate in one pitch | |
| -coordinate of detector | |
| -coordinate of detector | |
IV Experimental Results
In this section, we present simulated experimental results to demonstrate the effectiveness of our method and compare it with two related methods: the nonlocal-TV iterative algorithm [42] and the post-processing method FBPConvNet [43], where the convolutional filters in FBPConvNet are extended from 2D to 3D.
As described in Subsection III-C, the CT images in “the 2016 NIH-AAPM-Mayo Clinic Low Dose CT Grand Challenge” [40] are used to prepare the experimental data, where the data generated from the six patients named 'L192', 'L286', 'L291', 'L310', 'L333' and 'L506' are used as the training data and those from the other four patients named 'L067', 'L096', 'L109' and 'L143' are used as the test data. The training data comprise 345 pitches (of size ) in total for and 260 pitches (of size ) for , and the test data comprise 215 pitches for and 163 pitches for , where a portion of the training data is randomly chosen as the validation set. Note that since the last CT image of one pitch overlaps with the first CT image of the next pitch, we exclude the last CT image of every pitch from our training and testing data.
For the nonlocal-TV iterative algorithm, we tune its parameters empirically to obtain the best results. We train FBPConvNet with its default parameters, where the number of training epochs is 100. The CT images reconstructed from the noisy sinograms by the Katsevich algorithm are used as the inputs of FBPConvNet, where the batch size is . We train our network for epochs with batch size . The size of the learnable parameters of FBPConvNet is about 12.4MB while that of ours is about 2.3MB.
IV-A Experimental Results for
In Fig. 2, five slices of CT images reconstructed from the test data for are shown. We can observe that the images reconstructed by the Katsevich algorithm suffer from severe streak artifacts due to the sparsity of the sinogram. The images reconstructed by nonlocal-TV are somewhat over-smoothed and many details are lost. FBPConvNet can reconstruct CT images with some details, but some boundaries in its reconstructions are still blurred. Compared to nonlocal-TV and FBPConvNet, our method reconstructs more details and sharper boundaries, as shown in Fig. 2e. To better observe the differences, the areas marked by the red rectangles in the reconstructed images are enlarged and presented in the lower-left corners.
| Pitch | Method | RMSE | SSIM |
| --- | --- | --- | --- |
| | Katsevich | 97.616±7.549 | 0.777±0.028 |
| | Nonlocal-TV | 93.174±7.271 | 0.815±0.028 |
| | FBPConvNet | 70.329±8.169 | 0.845±0.028 |
| | Ours | 59.068±7.028 | 0.867±0.027 |
| | Katsevich | 97.783±7.493 | 0.776±0.028 |
| | Nonlocal-TV | 93.362±7.217 | 0.814±0.028 |
| | FBPConvNet | 71.609±8.140 | 0.842±0.028 |
| | Ours | 60.028±6.988 | 0.865±0.027 |
To quantitatively evaluate the performance of the compared methods, two metrics, the root-mean-square error (RMSE) and the structural similarity (SSIM), are used to measure the similarity between the reconstructed images and the labels. The definition of RMSE is
$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i-y_i)^2}, \tag{43}$$
where $x$ and $y$ are, respectively, the reconstructed CT image and the label image, and $N$ is the number of pixels. The value of SSIM is defined as
$$\mathrm{SSIM}=\frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)}, \tag{44}$$
where
(45)
$\mu_x$ and $\sigma_x^2$ are the mean and variance of the reconstructed CT image $x$, $\mu_y$ and $\sigma_y^2$ are the mean and variance of the reference image $y$, respectively, $\sigma_{xy}$ is the covariance between $x$ and $y$, and $c_1$, $c_2$ are small constants that stabilize the division.
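The two metrics can be computed as follows. The SSIM here is the global (single-window) form of (44); the stabilizing constants are illustrative defaults since the paper's values for (45) are not shown, and windowed SSIM would average this quantity over local patches.

```python
import numpy as np

def rmse(x, y):
    """Root-mean-square error between reconstruction x and label y."""
    return np.sqrt(np.mean((x - y) ** 2))

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Global SSIM in the usual mean/variance/covariance form; c1 and c2
    are illustrative stabilizing constants, not the paper's values."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```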
In Table II, the average RMSE and SSIM of the CT images reconstructed by the four methods are listed. It can be observed that our network achieves the lowest RMSE and the highest SSIM on average, which coincides with our subjective evaluation.
To test the generalization of our method, we apply our network and FBPConvNet to another type of CT, head and neck CT, where the head and neck CT dataset [44] is used to simulate the test data, and our network and FBPConvNet are trained on the NIH-AAPM-Mayo data only. Some reconstructed head and neck CT results are presented in Fig. 3. We can observe that our method reconstructs more fine details and boundaries than FBPConvNet, which demonstrates that our method has good generalization capability. The average RMSE and SSIM of the reconstructed head and neck CT images are listed in Table III. It can be observed that our network has a lower RMSE and a higher SSIM than FBPConvNet on average.
| Pitch | Method | RMSE | SSIM |
| --- | --- | --- | --- |
| | Katsevich | 76.657±12.371 | 0.863±0.031 |
| | FBPConvNet | 64.633±12.058 | 0.939±0.009 |
| | Ours | 47.807±9.870 | 0.959±0.006 |
| | Katsevich | 77.104±12.473 | 0.861±0.031 |
| | FBPConvNet | 65.427±12.627 | 0.937±0.010 |
| | Ours | 49.357±10.729 | 0.955±0.009 |
IV-B Experimental Results for
In this subsection, we present some experimental results to show the performance of our network for pitch .
In Fig. 4, we present five CT images reconstructed by the Katsevich algorithm, nonlocal-TV, FBPConvNet and our method from the NIH-AAPM-Mayo test data. We can observe that the images reconstructed by the Katsevich algorithm suffer from severe streak artifacts. Nonlocal-TV tends to over-smooth the reconstructed CT images. Some fine details and boundaries in the CT images reconstructed by FBPConvNet are lost. Compared to these methods, our method can simultaneously suppress the streak artifacts and reconstruct fine details and good boundaries. In Table II, we list the average RMSE and SSIM of the CT images reconstructed by the four methods, from which we can observe that our method has the lowest RMSE and the highest SSIM on average. We can also observe that the average RMSE and SSIM of each method for and vary only a little. Since our network uses the Katsevich algorithm to transfer sinograms to CT images and the inputs to FBPConvNet are also the CT images reconstructed by the Katsevich algorithm, this demonstrates that the Katsevich algorithm is robust to the helical pitch .
We also test the generalization capabilities of FBPConvNet and our network for . The CT images reconstructed by FBPConvNet and our network from the head and neck test data for are shown in Fig. 5. We can see that the images reconstructed by FBPConvNet are somewhat blurry and lose some fine details, while our method reconstructs CT images with better boundaries and details. The average RMSE and SSIM of the reconstructed CT images are also listed in Table III. We can see that our method achieves a lower RMSE and a higher SSIM than FBPConvNet on average.
V Discussion and Conclusion
In this paper, we developed a new GPU implementation of the Katsevich algorithm [4] for helical CT, which reconstructs the CT images pitch by pitch. By utilizing the periodic properties of the parameters of the Katsevich algorithm, our scheme only needs to calculate these parameters once for all pitches and so is very suitable for deep learning. By embedding our implementation of the Katsevich algorithm into a CNN, we proposed an end-to-end deep network for high-pitch helical CT reconstruction with sparse detectors. Experiments show that our end-to-end deep network outperforms the nonlocal-TV iterative algorithm and the post-processing deep network FBPConvNet in both subjective and objective evaluations.
One limitation of our method is that it requires the pitch of the helical scan to be constant, a requirement inherited from the Katsevich algorithm. In the literature, there exist some exact algorithms for helical CT reconstruction with variable pitches. However, the periodic properties of the parameters of these algorithms may not hold. Therefore, when using these algorithms to reconstruct CT images, we would need to compute the parameters for every slice, which would consume a lot of GPU memory and is not suitable for deep learning. In our future work, we will investigate how to address this issue.
References
- [1] F. Noo, J. Pack, and D. Heuscher, “Exact helical reconstruction using native cone-beam geometries,” Physics in Medicine & Biology, vol. 48, no. 23, pp. 3787–3818, nov 2003.
- [2] K. C. Tam, S. Samarasekera, and F. Sauer, “Exact cone beam CT with a spiral scan,” Physics in Medicine & Biology, vol. 43, no. 4, pp. 1015–1024, apr 1998.
- [3] P. Danielsson, P. Edholm, J. Eriksson, and M. Magnusson Seger, “Towards exact reconstruction for helical cone-beam scanning of long objects. a new detector arrangement and a new completeness condition,” in Proc. 1997 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine (Pittsburgh, PA), D. Townsend and P. Kinahan, Eds., 1997, pp. 141–144.
- [4] A. Katsevich, “An improved exact filtered backprojection algorithm for spiral computed tomography,” Advances in Applied Mathematics, vol. 32, no. 4, pp. 681–697, May 2004.
- [5] A. Zheng, H. Gao, L. Zhang, and Y. Xing, “A dual-domain deep learning-based reconstruction method for fully 3d sparse data helical CT,” Physics in Medicine & Biology, vol. 65, no. 24, p. 245030, Dec. 2020.
- [6] F. Noo, M. Defrise, and R. Clackdoyle, “Single-slice rebinning method for helical cone-beam CT,” Physics in Medicine & Biology, vol. 44, no. 2, pp. 561–570, Jan. 1999.
- [7] K. Stierstorfer, A. Rauscher, J. Boese, H. Bruder, S. Schaller, and T. Flohr, “Weighted fbp–a simple approximate 3d fbp algorithm for multislice spiral CT with good dose usage for arbitrary pitch,” Physics in Medicine & Biology, vol. 49, no. 11, pp. 2209–2218, May 2004.
- [8] G. Wang, T. . H. Lin, P. Cheng, and D. M. Shinozaki, “A general cone-beam reconstruction algorithm,” IEEE Transactions on Medical Imaging, vol. 12, no. 3, pp. 486–496, 1993.
- [9] J. Guo, L. Zeng, and X. Zou, “An improved half-covered helical cone-beam CT reconstruction algorithm based on localized reconstruction filter,” Journal of X-Ray Science and Technology, vol. 19, no. 3, pp. 293–312, 2011.
- [10] J. Zhao, Y. Lu, Y. Jin, E. Bai, and G. Wang, “Feldkamp-type reconstruction algorithms for spiral cone-beam CT with variable pitch,” Journal of X-Ray Science and Technology, vol. 15, no. 4, pp. 177–196, 2007.
- [11] L. A. Feldkamp, L. C. Davis, and J. W. Kress, “Practical cone-beam algorithm,” Journal of the Optical Society of America A, vol. 1, no. 6, pp. 612–619, Jun 1984.
- [12] E. Y. Sidky and X. Pan, “Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization,” Physics in Medicine & Biology, vol. 53, no. 17, pp. 4777–4807, Aug. 2008.
- [13] Y. Liu, J. Ma, Y. Fan, and Z. Liang, “Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction,” Physics in Medicine & Biology, vol. 57, no. 23, pp. 7923–7956, Nov. 2012.
- [14] Y. Chen, D. Gao, C. Nie, L. Luo, W. Chen, X. Yin, and Y. Lin, “Bayesian statistical reconstruction for low-dose x-ray computed tomography using an adaptive-weighting nonlocal prior,” Computerized Medical Imaging and Graphics, vol. 33, no. 7, pp. 495–500, Oct. 2009.
- [15] Q. Xu, H. Yu, X. Mou, L. Zhang, J. Hsieh, and G. Wang, “Low-dose x-ray CT reconstruction via dictionary learning,” IEEE Transactions on Medical Imaging, vol. 31, no. 9, pp. 1682–1697, 2012.
- [16] P. Bao, W. Xia, K. Yang, W. Chen, M. Chen, Y. Xi, S. Niu, J. Zhou, H. Zhang, H. Sun, Z. Wang, and Y. Zhang, “Convolutional sparse coding for compressed sensing CT reconstruction,” IEEE Transactions on Medical Imaging, vol. 38, no. 11, pp. 2607–2619, 2019.
- [17] J. Sunnegårdh and P.-E. Danielsson, “Regularized iterative weighted filtered backprojection for helical cone-beam CT,” Medical Physics, vol. 35, no. 9, pp. 4173–4185, Nov. 2008.
- [18] W. Yu and L. Zeng, “Iterative image reconstruction for limited-angle inverse helical cone-beam computed tomography,” Scanning, vol. 38, no. 1, pp. 4–13, Nov. 2016.
- [19] A. Katsevich, “Theoretically exact filtered backprojection-type inversion algorithm for spiral CT,” SIAM Journal on Applied Mathematics, vol. 62, no. 6, pp. 2012–2026, Nov. 2002.
- [20] Y. Zou and X. Pan, “Image reconstruction on pi-lines by use of filtered backprojection in helical cone-beam CT,” Physics in Medicine & Biology, vol. 49, no. 12, pp. 2717–2731, Jun. 2004.
- [21] ——, “Exact image reconstruction on pi-lines from minimum data in helical cone-beam CT,” Physics in Medicine & Biology, vol. 49, no. 6, pp. 941–959, Feb. 2004.
- [22] Y. Ye and G. Wang, “Filtered backprojection formula for exact image reconstruction from cone-beam data along a general scanning curve,” Medical Physics, vol. 32, no. 1, pp. 42–48, Nov. 2005.
- [23] Y. Zou, X. Pan, D. Xia, and G. Wang, “Pi-line-based image reconstruction in helical cone-beam computed tomography with a variable pitch,” Medical Physics, vol. 32, no. 8, pp. 2639–2648, Nov. 2005.
- [24] G. Yan, J. Tian, S. Zhu, C. Qin, Y. Dai, F. Yang, D. Dong, and P. Wu, “Fast Katsevich algorithm based on GPU for helical cone-beam computed tomography,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 4, pp. 1053–1061, 2010.
- [25] H. Lee, J. Lee, H. Kim, B. Cho, and S. Cho, “Deep-neural-network-based sinogram synthesis for sparse-view CT image reconstruction,” IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 3, no. 2, pp. 109–119, 2019.
- [26] J. Fu, J. Dong, and F. Zhao, “A deep learning reconstruction framework for differential phase-contrast computed tomography with incomplete data,” IEEE Transactions on Image Processing, vol. 29, pp. 2190–2202, 2020.
- [27] H. Chen, Y. Zhang, M. K. Kalra, F. Lin, Y. Chen, P. Liao, J. Zhou, and G. Wang, “Low-dose CT with a residual encoder-decoder convolutional neural network,” IEEE Transactions on Medical Imaging, vol. 36, no. 12, pp. 2524–2535, 2017.
- [28] K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4509–4522, 2017.
- [29] Z. Zhang, X. Liang, X. Dong, Y. Xie, and G. Cao, “A sparse-view CT reconstruction method based on combination of DenseNet and deconvolution,” IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1407–1417, 2018.
- [30] Y. Ge, T. Su, J. Zhu, X. Deng, Q. Zhang, J. Chen, Z. Hu, H. Zheng, and D. Liang, “Adaptive-net: deep computed tomography reconstruction network with analytical domain transformation knowledge,” Quantitative Imaging in Medicine and Surgery, vol. 10, no. 2, pp. 415–427, 2020.
- [31] W. Lin, H. Liao, C. Peng, X. Sun, J. Zhang, J. Luo, R. Chellappa, and S. K. Zhou, “DuDoNet: Dual domain network for CT metal artifact reduction,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 10504–10513.
- [32] Q. Zhang, Z. Hu, C. Jiang, H. Zheng, Y. Ge, and D. Liang, “Artifact removal using a hybrid-domain convolutional neural network for limited-angle computed tomography imaging,” Physics in Medicine & Biology, vol. 65, no. 15, p. 155010, Aug. 2020.
- [33] W. Wang, X. G. Xia, C. He, Z. Ren, J. Lu, T. Wang, and B. Lei, “An end-to-end deep network for reconstructing CT images directly from sparse sinograms,” IEEE Transactions on Computational Imaging, vol. 6, pp. 1548–1560, 2020.
- [34] H. Chen, Y. Zhang, Y. Chen, J. Zhang, W. Zhang, H. Sun, Y. Lv, P. Liao, J. Zhou, and G. Wang, “Learn: Learned experts’ assessment-based reconstruction network for sparse-data CT,” IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1333–1347, 2018.
- [35] J. He, Y. Yang, Y. Wang, D. Zeng, Z. Bian, H. Zhang, J. Sun, Z. Xu, and J. Ma, “Optimizing a parameterized plug-and-play admm for iterative low-dose CT reconstruction,” IEEE Transactions on Medical Imaging, vol. 38, no. 2, pp. 371–382, 2019.
- [36] S. Izen, “A fast algorithm to compute the PI-line through points inside a helix cylinder,” Proceedings of the American Mathematical Society, vol. 135, pp. 269–276, Jul. 2006.
- [37] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
- [38] R. Hecht-Nielsen, “Theory of the backpropagation neural network,” in IJCNN: International Joint Conference on Neural Networks, 1989, pp. 593–605.
- [39] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
- [40] C. McCollough, “TU-FG-207A-04: Overview of the Low Dose CT Grand Challenge,” Medical Physics, vol. 43, pp. 3759–3760, 2016.
- [41] Y. Liu, J. Ma, Y. Fan, and Z. Liang, “Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction,” Physics in Medicine & Biology, vol. 57, no. 23, pp. 7923–7956, 2012.
- [42] J. Zhang, S. Liu, R. Xiong, S. Ma, and D. Zhao, “Improved total variation based image compressive sensing recovery by nonlocal regularization,” in IEEE International Symposium on Circuits and Systems (ISCAS), 2013, pp. 2836–2839.
- [43] K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4509–4522, 2017.
- [44] T. Bejarano, M. De Ornelas-Couto, and I. B. Mihaylov, “Longitudinal fan-beam computed tomography dataset for head-and-neck squamous cell carcinoma patients,” Medical Physics, vol. 46, no. 5, pp. 2526–2537, 2019.