Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/93660
Title: Deep Learning-based Computed Tomography Perfusion Mapping (DL-CTPM) for pulmonary CT-to-perfusion translation
Authors: Ren, G 
Zhang, J 
Li, T 
Xiao, H 
Cheung, LY 
Ho, WY
Qin, J 
Cai, J 
Issue Date: 1-Aug-2021
Source: International journal of radiation oncology biology physics, 1 Aug. 2021, v. 110, no. 5, p. 1508-1518
Abstract: Purpose: To develop a deep learning–based computed tomography (CT) perfusion mapping (DL-CTPM) method that synthesizes lung perfusion images from CT images.
Methods and Materials: This paper presents a retrospective analysis of the pulmonary technetium-99m-labeled macroaggregated albumin single-photon emission CT (SPECT)/CT scans obtained from 73 patients at Queen Mary Hospital in Hong Kong in 2019. The left and right lung scans were separated to double the size of the data set to 146. A 3-dimensional attention residual neural network was constructed to extract textural features from the CT images and reconstruct corresponding functional images. Eighty-four samples were randomly selected for training and cross-validation, and the remaining 62 were used for model testing in terms of voxel-wise agreement and function-wise concordance. To assess the voxel-wise agreement, the Spearman's correlation coefficient (R) and structural similarity index measure between the images predicted by the DL-CTPM and the corresponding SPECT perfusion images were computed to assess the statistical and perceptual image similarities, respectively. To assess the function-wise concordance, the Dice similarity coefficient (DSC) was computed to determine the similarity of the low/high functional lung volumes.
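For readers wanting a concrete picture of the building block named above, the following is a minimal PyTorch sketch of a 3-dimensional attention residual block. The abstract does not specify the network's internals, so the channel counts, normalization layers, and squeeze-and-excitation-style channel attention here are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of one building block consistent with the abstract's
# "3-dimensional attention residual neural network". The exact architecture
# (channel counts, attention form, depth) is not given in the abstract, so the
# channel-attention design below is an assumption.
import torch
import torch.nn as nn

class AttentionResBlock3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
        )
        # Channel attention: global pooling -> bottleneck MLP -> sigmoid gate.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.body(x)
        gated = features * self.attention(features)  # reweight feature channels
        return torch.relu(x + gated)                 # residual connection

# Example: a CT patch of shape (batch, channels, depth, height, width).
block = AttentionResBlock3D(channels=16)
out = block(torch.randn(1, 16, 32, 32, 32))  # output has the same shape
```

In a full CT-to-perfusion network, several such blocks would be stacked between an encoder and a decoder to map CT volumes to perfusion volumes; the actual DL-CTPM topology is described in the paper itself.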
Results: The evaluation of the voxel-wise agreement showed a moderate-to-high voxel value correlation (0.6733 ± 0.1728) and high structural similarity (0.7635 ± 0.0697) between the SPECT and DL-CTPM predicted perfusions. The evaluation of the function-wise concordance obtained an average DSC value of 0.8183 ± 0.0752 for high-functional lungs (range, 0.5819-0.9255) and 0.6501 ± 0.1061 for low-functional lungs (range, 0.2405-0.8212). Ninety-four percent of the test cases demonstrated high concordance (DSC >0.7) between the high-functional volumes contoured from the predicted and ground-truth perfusions.
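A minimal sketch of how the reported voxel-wise (Spearman's R, SSIM) and function-wise (DSC) metrics can be computed with SciPy and scikit-image follows. The median-based cutoff used here to contour high-/low-functional lung, and the toy volumes in the usage example, are assumptions for illustration; the paper's actual contouring protocol is not given in the abstract.

```python
# A minimal sketch of the three metrics named in the abstract: Spearman's R,
# SSIM, and the Dice similarity coefficient (DSC). The cutoff separating
# high-/low-functional lung is an illustrative assumption.
import numpy as np
from scipy.stats import spearmanr
from skimage.metrics import structural_similarity

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def evaluate(pred: np.ndarray, spect: np.ndarray, lung: np.ndarray):
    """Voxel-wise and function-wise comparison of predicted vs SPECT perfusion."""
    # Voxel-wise agreement: rank correlation inside the lung, SSIM over the volume.
    r, _ = spearmanr(pred[lung], spect[lung])
    ssim = structural_similarity(pred, spect,
                                 data_range=float(spect.max() - spect.min()))
    # Function-wise concordance (assumed cutoff: median SPECT uptake in lung).
    cutoff = np.median(spect[lung])
    high_dsc = dice((pred >= cutoff) & lung, (spect >= cutoff) & lung)
    low_dsc = dice((pred < cutoff) & lung, (spect < cutoff) & lung)
    return r, ssim, high_dsc, low_dsc

# Toy usage with random volumes standing in for real scans.
rng = np.random.default_rng(0)
pred = rng.random((32, 32, 32))
spect = pred + 0.1 * rng.random((32, 32, 32))  # correlated "ground truth"
lung = np.ones((32, 32, 32), dtype=bool)
print(evaluate(pred, spect, lung))
```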
Conclusions: We developed a novel DL-CTPM method for estimating perfusion-based lung functional images from the CT domain using a 3-dimensional attention residual neural network, which yielded moderate-to-high voxel-wise approximations of lung perfusion. To further contextualize these results toward future clinical application, a multi-institutional large-cohort study is warranted.
Publisher: Elsevier
Journal: International journal of radiation oncology biology physics 
ISSN: 0360-3016
DOI: 10.1016/j.ijrobp.2021.02.032
Rights: © 2021 Elsevier Inc. All rights reserved.
© 2021. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/
The following publication Ren, G., Zhang, J., Li, T., Xiao, H., Cheung, L. Y., Ho, W. Y., ... & Cai, J. (2021). Deep learning-based computed tomography perfusion mapping (DL-CTPM) for pulmonary CT-to-perfusion translation. International Journal of Radiation Oncology, Biology, Physics, 110(5), 1508-1518 is available at https://doi.org/10.1016/j.ijrobp.2021.02.032.
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: Ren_Deep_Learning-Based_Computed.pdf
Description: Pre-Published version
Size: 1.3 MB
Format: Adobe PDF
Open Access Information
Status: Open access
File Version: Final Accepted Manuscript

Page views: 51 (as of Apr 28, 2024)
Downloads: 74 (as of Apr 28, 2024)
Scopus citations: 21 (as of May 3, 2024)
Web of Science citations: 17 (as of May 2, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.