Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105845
DC Field | Value | Language
dc.contributor | Department of Health Technology and Informatics | -
dc.contributor | School of Nursing | -
dc.creator | Yang, D | en_US
dc.creator | Ren, G | en_US
dc.creator | Ni, R | en_US
dc.creator | Huang, YH | en_US
dc.creator | Lam, NFD | en_US
dc.creator | Sun, H | en_US
dc.creator | Wan, SBN | en_US
dc.creator | Wong, MFE | en_US
dc.creator | Chan, KK | en_US
dc.creator | Tsang, HCH | en_US
dc.creator | Xu, L | en_US
dc.creator | Wu, TC | en_US
dc.creator | Kong, FM | en_US
dc.creator | Wáng, YXJ | en_US
dc.creator | Qin, J | en_US
dc.creator | Chan, WCL | en_US
dc.creator | Ying, M | en_US
dc.creator | Cai, J | en_US
dc.date.accessioned | 2024-04-23T04:31:47Z | -
dc.date.available | 2024-04-23T04:31:47Z | -
dc.identifier.issn | 2223-4292 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/105845 | -
dc.language.iso | en | en_US
dc.publisher | AME Publishing Company | en_US
dc.rights | © Quantitative Imaging in Medicine and Surgery. All rights reserved. | en_US
dc.rights | This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/. | en_US
dc.rights | The following publication Yang D, Ren G, Ni R, Huang YH, Lam NFD, Sun H, Wan SBN, Wong MFE, Chan KK, Tsang HCH, Xu L, Wu TC, Kong FM, Wáng YXJ, Qin J, Chan LWC, Ying M, Cai J. Deep learning attention-guided radiomics for COVID-19 chest radiograph classification. Quant Imaging Med Surg 2023;13(2):572-584 is available at https://doi.org/10.21037/qims-22-531. | en_US
dc.subject | Chest radiograph | en_US
dc.subject | Classification | en_US
dc.subject | Coronavirus disease 2019 (COVID-19) | en_US
dc.subject | Deep learning | en_US
dc.subject | Radiomics | en_US
dc.title | Deep learning attention-guided radiomics for COVID-19 chest radiograph classification | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 572 | -
dc.identifier.epage | 584 | -
dc.identifier.volume | 13 | -
dc.identifier.issue | 2 | -
dc.identifier.doi | 10.21037/qims-22-531 | en_US
dcterms.abstract | Background: Accurate assessment of coronavirus disease 2019 (COVID-19) lung involvement on chest radiographs plays an important role in effective management of the infection. This study aims to develop a two-step feature merging method that integrates image features from deep learning and radiomics to differentiate COVID-19, non-COVID-19 pneumonia, and normal chest radiographs (CXR). | -
dcterms.abstract | Methods: In this study, a deformable convolutional neural network (deformable CNN) was developed and used as a feature extractor to obtain 1,024-dimensional deep learning latent representation (DLR) features. Then, 1,069-dimensional radiomics features were extracted from the region of interest (ROI) guided by the deformable CNN's attention. The two feature sets were concatenated to generate a merged feature set for classification. For comparison, the same classification process was applied to the DLR-only feature set to verify the effectiveness of feature concatenation (a minimal illustrative sketch of this pipeline follows the record below). | -
dcterms.abstract | Results: Using the merged feature set resulted in an overall average accuracy of 91.0% for three-class classification, a statistically significant improvement of 0.6% over DLR-only classification. The recall and precision for the COVID-19 class were 0.926 and 0.976, respectively. The feature merging method significantly improved classification performance compared with using only deep learning features, regardless of the choice of classifier (P value <0.0001). The F1-scores for the normal, non-COVID-19 pneumonia, and COVID-19 classes were 0.892, 0.890, and 0.950, respectively. | -
dcterms.abstract | Conclusions: A two-step COVID-19 classification framework integrating information from both DLR and radiomics features (guided by a deep learning attention mechanism) has been developed. The proposed feature merging method has been shown to improve the performance of chest radiograph classification compared with using deep learning features alone. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Quantitative imaging in medicine and surgery, 1 Feb. 2023, v. 13, no. 2, p. 572-584 | en_US
dcterms.isPartOf | Quantitative imaging in medicine and surgery | en_US
dcterms.issued | 2023-02-01 | -
dc.identifier.scopus | 2-s2.0-85147155108 | -
dc.identifier.eissn | 2223-4306 | en_US
dc.description.validate | 202404 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Health and Medical Research Fund; Food and Health Bureau, Government of the Hong Kong Special Administrative Region; Shenzhen-Hong Kong-Macau S&T Program; Shenzhen Basic Research Program; Shenzhen Science and Technology Innovation Committee | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
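
Illustrative sketch of the two-step feature merging pipeline described in the abstract. This is not the authors' published code: extract_dlr and extract_radiomics are random placeholders standing in for the deformable CNN and the radiomics extractor, and the 0.5 attention threshold, the 224x224 image size, and the SVM classifier are assumptions made for illustration; the paper reports that the merged features outperform DLR-only features regardless of classifier choice.

# Minimal, runnable sketch of attention-guided feature merging (illustrative only).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

DLR_DIM, RADIOMICS_DIM = 1024, 1069  # feature dimensions reported in the abstract
rng = np.random.default_rng(0)

def extract_dlr(image):
    # Placeholder for the deformable CNN feature extractor that yields the
    # 1,024-dimensional deep learning latent representation (DLR).
    return rng.standard_normal(DLR_DIM)

def attention_to_roi(attention_map, threshold=0.5):
    # Min-max normalize the CNN's attention map and binarize it into an ROI
    # mask; the 0.5 threshold is an assumption, not a value from the paper.
    a = attention_map.astype(float)
    a = (a - a.min()) / (a.max() - a.min() + 1e-8)
    return (a >= threshold).astype(np.uint8)

def extract_radiomics(image, roi_mask):
    # Placeholder for a radiomics extractor (e.g., a pyradiomics-style tool)
    # computing 1,069 features over the attention-guided ROI.
    return rng.standard_normal(RADIOMICS_DIM)

def merged_features(image, attention_map):
    dlr = extract_dlr(image)                            # step 1: DLR features
    roi = attention_to_roi(attention_map)               # attention-guided ROI
    rad = extract_radiomics(image, roi)                 # step 2: radiomics features
    return np.concatenate([dlr, rad])                   # 1,024 + 1,069 = 2,093 dims

if __name__ == "__main__":
    # Toy demo: 30 synthetic radiographs in three balanced classes
    # (normal / non-COVID-19 pneumonia / COVID-19).
    images = [rng.standard_normal((224, 224)) for _ in range(30)]
    attmaps = [rng.random((224, 224)) for _ in range(30)]
    X = np.stack([merged_features(im, at) for im, at in zip(images, attmaps)])
    y = np.repeat([0, 1, 2], 10)
    # Scores on random placeholder features are of course meaningless.
    print(cross_val_score(SVC(), X, y, cv=3).mean())

The design point the sketch captures is that the two feature families stay separate until a plain concatenation, so any off-the-shelf classifier can consume the merged 2,093-dimensional vector; swapping the classifier changes nothing upstream, which is what makes the merged-versus-DLR-only comparison classifier-agnostic.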
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
104948-PB9-9776-R2.pdf | - | 1.6 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record