Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105846
DC Field | Value | Language
dc.contributor | School of Nursing | -
dc.creator | Shen, H | en_US
dc.creator | He, P | en_US
dc.creator | Ren, Y | en_US
dc.creator | Huang, Z | en_US
dc.creator | Li, S | en_US
dc.creator | Wang, G | en_US
dc.creator | Cong, M | en_US
dc.creator | Luo, D | en_US
dc.creator | Shao, D | en_US
dc.creator | Lee, EYP | en_US
dc.creator | Cui, R | en_US
dc.creator | Huo, L | en_US
dc.creator | Qin, J | en_US
dc.creator | Liu, J | en_US
dc.creator | Hu, Z | en_US
dc.creator | Liu, Z | en_US
dc.date.accessioned | 2024-04-23T04:31:48Z | -
dc.date.available | 2024-04-23T04:31:48Z | -
dc.identifier.issn | 2223-4292 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/105846 | -
dc.language.iso | en | en_US
dc.publisher | AME Publishing Company | en_US
dc.rights | © Quantitative Imaging in Medicine and Surgery. All rights reserved. | en_US
dc.rights | This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/. | en_US
dc.rights | The following publication Shen H, He P, Ren Y, Huang Z, Li S, Wang G, Cong M, Luo D, Shao D, Lee EYP, Cui R, Huo L, Qin J, Liu J, Hu Z, Liu Z, Zhang N. A deep learning model based on the attention mechanism for automatic segmentation of abdominal muscle and fat for body composition assessment. Quant Imaging Med Surg 2023;13(3):1384-1398 is available at https://doi.org/10.21037/qims-22-330. | en_US
dc.subject | CT scan | en_US
dc.subject | Deep learning | en_US
dc.subject | Fat segmentation | en_US
dc.subject | Muscle segmentation | en_US
dc.subject | Sarcopenia | en_US
dc.title | A deep learning model based on the attention mechanism for automatic segmentation of abdominal muscle and fat for body composition assessment | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1384 | -
dc.identifier.epage | 1398 | -
dc.identifier.volume | 13 | -
dc.identifier.issue | 3 | -
dc.identifier.doi | 10.21037/qims-22-330 | en_US
dcterms.abstract | Background: Quantitative muscle and fat data obtained through body composition analysis are expected to become a new, stable biomarker for the early and accurate prediction of treatment-related toxicity, treatment response, and prognosis in patients with lung cancer. Such biomarkers would enable individualized treatment regimens to be adjusted in a timely manner, which is critical to further improving patient prognosis and quality of life. We aimed to develop an attention-based deep learning model for the fully automated segmentation of abdominal computed tomography (CT) images to quantify body composition. | -
dcterms.abstract | Methods: A fully automatic deep learning segmentation model was designed based on the attention mechanism, with U-Net as the framework. Subcutaneous fat, skeletal muscle, and visceral fat were manually segmented by two experts to serve as ground-truth labels. Model performance was evaluated using the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95); an illustrative sketch of these two metrics follows the metadata listing below. | -
dcterms.abstract | Results: The mean DSCs for subcutaneous fat and skeletal muscle were high on both the enhanced CT test set (0.93±0.06 and 0.96±0.02, respectively) and the plain CT test set (0.90±0.09 and 0.95±0.01, respectively). However, the model segmented visceral fat less accurately, particularly on the enhanced CT test set: the mean DSC was 0.87±0.11 on the enhanced set versus 0.92±0.03 on the plain set. We discuss the reasons for this result. | -
dcterms.abstract | Conclusions: This work demonstrates a method for the automatic delineation of subcutaneous fat, skeletal muscle, and visceral fat areas at the L3 vertebral level. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Quantitative imaging in medicine and surgery, 1 Mar. 2023, v. 13, no. 3, p. 1384-1398 | en_US
dcterms.isPartOf | Quantitative imaging in medicine and surgery | en_US
dcterms.issued | 2023-03-01 | -
dc.identifier.scopus | 2-s2.0-85149001593 | -
dc.identifier.eissn | 2223-4306 | en_US
dc.description.validate | 202404 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China; Shenzhen Excellent Technological Innovation Talent Training Project of China; National High Level Hospital Clinical Research Funding; Shenzhen High-level Hospital Construction Fund | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | -
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
109519-PB3-9299-R2.pdf | - | 3.19 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 17 (as of Jun 30, 2024)
Downloads: 4 (as of Jun 30, 2024)
SCOPUS™ Citations: 5 (as of Jul 4, 2024)
Web of Science™ Citations: 3 (as of Jul 4, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.