Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/105845
Title: Deep learning attention-guided radiomics for COVID-19 chest radiograph classification
Authors: Yang, D; Ren, G; Ni, R; Huang, YH; Lam, NFD; Sun, H; Wan, SBN; Wong, MFE; Chan, KK; Tsang, HCH; Xu, L; Wu, TC; Kong, FM; Wáng, YXJ; Qin, J; Chan, WCL; Ying, M; Cai, J
Issue Date: 1-Feb-2023
Source: Quantitative imaging in medicine and surgery, 1 Feb. 2023, v. 13, no. 2, p. 572-584
Abstract: Background: Accurate assessment of coronavirus disease 2019 (COVID-19) lung involvement through chest radiographs plays an important role in effective management of the infection. This study aims to develop a two-step feature merging method that integrates image features from deep learning and radiomics to differentiate COVID-19, non-COVID-19 pneumonia, and normal chest radiographs (CXR). Methods: A deformable convolutional neural network (deformable CNN) was developed and used as a feature extractor to obtain 1,024-dimensional deep learning latent representation (DLR) features. Then, 1,069-dimensional radiomics features were extracted from the region of interest (ROI) guided by the deformable CNN's attention. The two feature sets were concatenated to generate a merged feature set for classification. For comparative experiments, the same process was applied to the DLR-only feature set to verify the effectiveness of feature concatenation. Results: Using the merged feature set resulted in an overall average accuracy of 91.0% for three-class classification, a statistically significant improvement of 0.6% over DLR-only classification. The recall and precision for the COVID-19 class were 0.926 and 0.976, respectively. The feature merging method significantly improved classification performance compared to using only deep learning features, regardless of the choice of classifier (P value <0.0001). The F1-scores for the three classes (normal, non-COVID-19 pneumonia, COVID-19) were 0.892, 0.890, and 0.950, respectively. Conclusions: A two-step COVID-19 classification framework integrating information from both DLR and radiomics features (guided by a deep learning attention mechanism) has been developed. The proposed feature merging method improved the performance of chest radiograph classification compared to using deep learning features alone.
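The core of the merging step described in the abstract is a simple per-image concatenation of the two feature vectors before classification. A minimal sketch of that step is below; the array names, random data, and shapes (1,024 DLR dimensions and 1,069 radiomics dimensions, as reported in the abstract) are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of the two-step feature merging described in the
# abstract. The feature extractors themselves (deformable CNN and the
# radiomics pipeline) are stubbed out with random arrays of the reported
# dimensionalities; only the concatenation step is shown.

rng = np.random.default_rng(0)
n_images = 4

# Step 1a: DLR features from the deformable CNN (1,024-dimensional)
dlr_features = rng.normal(size=(n_images, 1024))

# Step 1b: radiomics features from the attention-guided ROI (1,069-dimensional)
radiomics_features = rng.normal(size=(n_images, 1069))

# Step 2: concatenate the two feature sets per image to form the merged set
merged = np.concatenate([dlr_features, radiomics_features], axis=1)
print(merged.shape)  # (4, 2093): 1024 DLR + 1069 radiomics dimensions
```

The merged 2,093-dimensional vectors would then be passed to a downstream classifier; the abstract notes the improvement held regardless of which classifier was chosen.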
Keywords: Chest radiograph; Classification; Coronavirus disease 2019 (COVID-19); Deep learning; Radiomics
Publisher: AME Publishing Company
Journal: Quantitative imaging in medicine and surgery
ISSN: 2223-4292
EISSN: 2223-4306
DOI: 10.21037/qims-22-531
Rights: © Quantitative Imaging in Medicine and Surgery. All rights reserved. This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/. The following publication Yang D, Ren G, Ni R, Huang YH, Lam NFD, Sun H, Wan SBN, Wong MFE, Chan KK, Tsang HCH, Xu L, Wu TC, Kong FM, Wáng YXJ, Qin J, Chan LWC, Ying M, Cai J. Deep learning attention-guided radiomics for COVID-19 chest radiograph classification. Quant Imaging Med Surg 2023;13(2):572-584 is available at https://doi.org/10.21037/qims-22-531.
Appears in Collections: Journal/Magazine Article
Files in This Item:
| File | Description | Size | Format |
| --- | --- | --- | --- |
| 104948-PB9-9776-R2.pdf | | 1.6 MB | Adobe PDF |
Page views: 13 (as of Jun 30, 2024)
Downloads: 5 (as of Jun 30, 2024)
SCOPUS citations: 5 (as of Jul 4, 2024)
Web of Science citations: 5 (as of Jul 4, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.